Visual Studio 2008 WPF does not like auto build increments

July 23rd, 2008

If you find the WPF designer dying on you with an unhandled exception in PresentationFramework.dll or some other DLL, then turn off auto build increments in the AssemblyInfo file. And if you can't drag and drop controls from the Toolbox, then follow the directions explained here (I actually could not find those *.PBD files and just renamed the directory where they were supposed to be stored – this forced a Toolbox rebuild on the next VS restart, which seems to have solved the problem). Highly annoying bugs…

Baby squirrels, ducks and geese!

July 3rd, 2008

Baby ducks, born no more than 12 hours before the photoshoot! They all live in a local pond in Birmingham (UK) – see the full gallery here! :)

Stay away from float/double in C#

June 30th, 2008

I have stayed away from floating point variables for almost two decades now, mainly on performance grounds – back in the old days not every CPU even had a dedicated floating point unit. Today, though, I decided to use floats in a couple of places, and an odd issue quickly came up. Here is the test case:

float fGross = 1.18f;
float fTax = 0.18f;

Console.WriteLine("Gross: {0}, Tax: {1}, Net of tax: {2}", fGross, fTax, fGross - fTax);

Nothing fussy here: two fairly small numbers, the only operation is subtraction so no rounding issues should happen, and we only use 2 digits after the dot, so a float should handle it, right? Wrong. The printout on screen will be:

Gross: 1.18, Tax: 0.18, Net of tax: 0.9999999

rather than the expected result of "Net of tax: 1".

Using double variables with correct assignment of double-precision values – 1.18d rather than 1.18f, which would only introduce more conversion error – seems to work for this test case. Still, this example shows that staying away from floats is not a bad idea: only use them if you absolutely have to, and if you do, don't trust the results. The next time I use these data types will be in the next decade, if not later.
Some relevant discussion on this topic is here.
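To see the three choices side by side, here is a small sketch (the class name is mine) comparing the same subtraction in float, double and decimal. Neither 1.18 nor 0.18 has an exact binary representation, so float and double both carry an error; float simply has fewer bits to absorb it, while decimal does base-10 arithmetic and is exact for values like these:

```csharp
using System;

class FloatDemo
{
    static void Main()
    {
        float fGross = 1.18f, fTax = 0.18f;
        double dGross = 1.18d, dTax = 0.18d;
        decimal mGross = 1.18m, mTax = 0.18m;

        // float: the exact difference is 1 - 2^-24, i.e. just short of 1
        Console.WriteLine("float:   {0}", fGross - fTax);
        // double: the rounding error happens to cancel and the result
        // rounds to exactly 1.0 for this particular pair of values
        Console.WriteLine("double:  {0}", dGross - dTax);
        // decimal: exact base-10 arithmetic, prints 1.00
        Console.WriteLine("decimal: {0}", mGross - mTax);
    }
}
```

For money-like values, decimal is the type to reach for – it trades speed for exact decimal behaviour.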

Improving .NET garbage collection on multi-core setups: gcserver option

June 7th, 2008

These days programmers have to deal with setups that contain multiple cores, so coding in a way that takes advantage of the extra parallel processors is becoming a matter of life and death. At Majestic-12 we have a small framework that allows us to parallelise long-running tasks, but from time to time we run into strange things, one of which happened again today. Our application was processing data on 8 cores – about 8 TB of data, in fact – so high IO could be expected to make the processors wait for data to crunch. However, it turned out the application was running slower than it should have been, and disk IO could not be to blame. Have a look at the CPU usage history below:

CPU usage with default settings in a .NET application

Roughly, CPU usage was about 60-63%, well below what it should have been. To cut a long story short, it turned out that adding the following configuration option to the .NET application configuration file helped:
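The option in question is the gcServer setting, which lives under the runtime element of the application's .config file. A minimal example (shown here together with the gcConcurrent option, which comes up again below):

```xml
<configuration>
  <runtime>
    <!-- server GC: per-CPU heaps and dedicated collection threads -->
    <gcServer enabled="true"/>
    <!-- concurrent GC: left disabled here, see the caveat below -->
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
```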

According to Microsoft, gcServer can help in multi-CPU setups, and it does – just have a look at the CPU usage of the same application with it enabled:

CPU usage with gcserver set to true

This got CPU usage closer to 93-95% per CPU, which is about where it should have been in the first place, allowing for a fair amount of disk reads.

But what exactly happens here? It appears that the default mode of garbage collection stops ALL threads while collecting garbage, which was effectively pausing processing on multiple cores. You can see another option above – gcConcurrent – which in theory should make garbage collection even faster; however, I found its usage buggy in .NET 2.0, so I'd recommend being very careful when enabling this option – I keep it turned off.

Finally, since I started talking about memory and garbage collection in .NET: the biggest lesson of all is to reuse memory – this is the best way to avoid overheads in parallel processing as well as to save on actual memory usage, and it is the key to high-performance processing in .NET (and really any other language that uses garbage collection).
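A minimal sketch of what I mean by reusing memory (the names here are made up for illustration): allocate a buffer once and refill it on every pass, instead of allocating a fresh array per iteration and handing the garbage collector work on every loop:

```csharp
using System;
using System.IO;

class BufferReuse
{
    const int ChunkSize = 1 << 20; // 1 MB working buffer

    // Hypothetical helper: reads a stream chunk by chunk, returns total bytes seen.
    public static long Process(Stream input)
    {
        // One buffer for the whole run: after this line the loop allocates
        // nothing, so the garbage collector has no per-iteration work to do.
        byte[] buffer = new byte[ChunkSize];

        long total = 0;
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            Crunch(buffer, read); // refill and reuse, never reallocate
            total += read;
        }
        return total;
    }

    static void Crunch(byte[] data, int count)
    {
        // real processing of data[0..count) would go here
    }
}
```

The same idea scales up to pooling larger structures that are expensive to allocate and collect.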

Hitachi 1 TB hard disk turning into 32 MB brick on Gigabyte motherboard

December 30th, 2007

I am a pretty big user of hard disks, and have had all sorts of trouble with them. Normally I buy value-for-money disks, losing one of which is not a big deal, but when you buy a top-of-the-line hard disk – a Hitachi 1 TB worth £250 – and then lose it, it makes you angry! At least 3 out of 20 turned into bricks that the BIOS thought were 32 MB (or 33 MB) big! Apparently this is a bug in the Gigabyte motherboard, but there is no fix for my BIOS (even though the mobo is recent), and Hitachi's feature tool, which supposedly can fix it, was crashing. All was saved, however, by a great little utility for Restoring Factory Hard Drive Capacity – it is free and it worked! Even the data on the disk was not damaged, so today ended with a smile on my face :)

This is going to be the last Hitachi I buy, I think. I said the same a couple of years ago when discovering the click-o'-death in some of my disks, and only went for them again this year because there was no choice at the time…

Beware of the sorts you use!

August 1st, 2007

Usually I try to make incremental changes to a debugged, big piece of software in order to avoid introducing new bugs that can bite rather painfully later, when you least expect it. One reasonably good way of testing that new changes do not break old behaviour is to use the same inputs, save a known-good output from the software (I call it "gold"), and then compare it with the new output. The two should be identical if the changes you made were designed to improve things like performance or scalability of the same code without changing the actual output. The outputs can easily be compared using the fc /b command to know they are exact, or just by visually checking sizes (more dangerous). Say, for example, new code might do complex calculations on multiple CPU cores and then merge the results, but those results should be exactly the same as if it had run serially on just one core. Sounds simple, but not always!
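One classic way such byte-for-byte "gold" comparisons break is an unstable sort. .NET's Array.Sort makes no stability guarantee, so records with equal keys may come out in a different relative order when you restructure the code, while LINQ's OrderBy is documented to be stable. A quick illustration with a made-up record shape:

```csharp
using System;
using System.Linq;

class SortStability
{
    static void Main()
    {
        // Two records share the key "a" but carry different payloads.
        var items = new[] { ("b", 1), ("a", 2), ("a", 3) };

        // OrderBy is stable: equal keys keep their input order, so
        // ("a", 2) is guaranteed to come out before ("a", 3).
        var stable = items.OrderBy(x => x.Item1).ToArray();
        Console.WriteLine(string.Join(" ", stable.Select(x => x.Item2))); // 2 3 1

        // Array.Sort is an unstable introsort: the relative order of
        // ("a", 2) and ("a", 3) is unspecified – enough on its own to
        // make a byte-for-byte comparison against a "gold" output fail.
        Array.Sort(items, (x, y) => string.CompareOrdinal(x.Item1, y.Item1));
    }
}
```

So if the gold comparison fails after a refactor, check whether the outputs differ only in the ordering of equal-keyed records before hunting for a real bug.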


64-bit compilation errors in Visual Studio 2005

July 20th, 2007

After having just spent some hours trying to compile a C/C++ piece into a 64-bit DLL, I came across a number of error messages for which searching does not yield many great results, so I thought I'd post them here for the benefit of others who come across them:

  1. "unresolved external symbol __security_check_cookie": under Configuration Properties->C/C++->Code Generation, switch off Buffer Security Check (/GS- on the command line) and this error goes away.
  2. "unresolved external symbol _DllMainCRTStartup" (not to be confused with __DllMainCRTStartup@): under Configuration Properties->Linker->Input, make sure "Ignore All Default Libraries" is set to "No".
  3. Could not find kernel32.lib, user32.lib, etc. – if you search for these files in the VS2005 directory you will find them in various places, but for 64-bit builds they should be in the AMD64 subdirectory. If they are not there, go back to the VS setup and make sure you installed the 64-bit compilers and tools. You can try installing a recent Platform SDK from Microsoft; however, even though it contains these files in the correct directory, they don't appear to help the build even if you provide the path to them in "Additional Library Directories" in the Linker options.
  4. "module machine type 'X86' conflicts with target machine 'X64'" – this message appears if you have not got the correct 64-bit kernel32.lib or other similar libraries, so VS takes the 32-bit versions and then can't link them since they are not 64-bit. This error comes up after you think you "fixed" error #3 by giving a path to a place where you think the correct kernel32.lib exists – the solution is the same as for error #3.

Hope this saves you a few hours of fruitless effort trying to understand what the heck Visual Studio is on about when compiling a wee 64-bit DLL :)

Biting the hand that feeds it…

July 7th, 2007

In the last 10 days I tested and found a good way to charm grey squirrels, which can otherwise be called pretty shy or, to put it less diplomatically, rather cowardly. I managed to find a way that allowed them to overcome their shyness and finally eat tasty roasted peanuts from my hand! You just can't beat having around 10 squirrels around you, greedily looking at you and waiting for their turn to grab a nut :)

When dealing with wildlife one has to be careful – they are not called wild for nothing! Squirrels are very furry and cute, but they have very sharp teeth! Today a baby squirrel that was over-enthusiastic about getting a peanut ran at high speed and grabbed my index finger rather than the peanut; despite my wearing a glove, it was a successful bite :(

Well, it was time for a tetanus shot in the arm as a precautionary measure, as I have not been vaccinated in the last 10 years. This is not to put you off feeding squirrels by hand, but one has to be careful – next time it is going to be a bigger glove, and that baby squirrel will have to act like the adults, who come very politely and take the nut very carefully. The youngster has not learnt any manners yet; he will have to, or no peanuts for him!

Crucial memory

May 30th, 2007

My long-held belief that cheap memory costs more in the long run was confirmed in the last few days, when 3 out of 4 chips in two OCZ 4 GB (2x2GB) PC2-5400 Vista Upgrade edition kits failed miserably in a new Intel Q6600-based system – one memory chip had errors in memtest, and with the 2 others the system won't POST at all. A new delivery of chips from Crucial sorted it all perfectly – 50% more expensive, but can you really afford bad memory that leads to very weird crashes implying other hardware components are at fault?

While on the subject of memory – testing it with memtest also provides a nice benchmark showing the bandwidth available to the processor from the L1 and L2 caches and from actual RAM. The stats for the Q6600 (2.4 GHz) system running dual-channel memory, 4×2 GB at 667 MHz (CL5), show that the actual speed of RAM is just below 4 GB/sec – to put this into perspective, it suggests that reading all the RAM in that server would take a whole 2 seconds! This is of course much faster than reading the same 8 GB from a hard disk, but still – RAM is anything but fast insofar as processor speeds are concerned. The L1 cache, for example, is rated at almost 400 GB/sec by the very same memtest – almost 100 times faster than accessing data from RAM! The L2 cache is slower, but still almost 170 GB/sec.

What this means is that those who wish to obtain very high performance from their code should think carefully about the algorithms used, so that they are cache friendly – otherwise software might run slowly because it is bottlenecked by comparatively slow RAM accesses.
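To make "cache friendly" concrete, here is the textbook example (sizes are illustrative): walking a large 2D array row by row touches memory sequentially and lets the cache and prefetcher do their job, while walking the same array column by column jumps N ints at a time and misses the cache on nearly every access – identical arithmetic, very different speed:

```csharp
using System;
using System.Diagnostics;

class CacheDemo
{
    const int N = 4096; // 4096 x 4096 ints = 64 MB, far bigger than any cache

    static void Main()
    {
        var a = new int[N, N];

        var sw = Stopwatch.StartNew();
        long rowSum = 0;
        for (int i = 0; i < N; i++)        // row-major: sequential access
            for (int j = 0; j < N; j++)
                rowSum += a[i, j];
        Console.WriteLine("row-major:    {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        long colSum = 0;
        for (int j = 0; j < N; j++)        // column-major: stride of N ints
            for (int i = 0; i < N; i++)
                colSum += a[i, j];
        Console.WriteLine("column-major: {0} ms", sw.ElapsedMilliseconds);

        // Both loops do exactly the same additions and produce the same sum;
        // on typical hardware the second one is several times slower.
    }
}
```

The exact ratio depends on the CPU and memory, but the lesson holds: lay data out, and traverse it, in the order it sits in memory.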

Award BIOS secrets or how to get full 4 GB of RAM

May 27th, 2007

You may come across a very annoying situation whereby your otherwise nice system won't show more than 3.5 GB of RAM even though you have 4 GB installed, or even more. Granted, you need a good 64-bit system to take advantage of that memory in the first place, but if the BIOS won't let the OS see that memory, then you will lose half a gig or more.

The trick is to set the "H/W hole mapping" option to Enabled in the Award BIOS: it is present in the Frequency/Voltage control submenu… but only after you press a secret key combination to show the option in the first place – Ctrl-Shift-F1. This is the case on all of my NForce4 motherboards with AMD X2 CPUs. Information about the solution to this problem is rather scarce, so I thought I'd post it here – enjoy your full 4 GB of RAM! :)