A Heisen Bug is a bug whose presence is affected by the act of observing it.
For example, a bug which disappears in debug mode. It's normally dispersed by depressing dangerous digits in the direction of the dirty dangling doughnut.
For more examples, see Heisen Bug Examples.
Wow, that happened to me...
Back when I was hardware troubleshooting, a wise supervisor corrected my use of the phrase "intermittent problem". "There is no such thing!", he pronounced, "But some problems do have intermittent symptoms!"
This gives rise to the rather philosophical question: If no symptoms manifest, does a problem exist?
Once I had a bug which appeared only during step-by-step debugging. The Delphi debugger can display values of properties of objects. The values of properties can be computed by methods which can modify the object. So when I stepped through the code, the debugger displayed the value of the property and modified the observed object. Very confusing.
So you had a method to tell you what the properties of some object were, but that method actually modified the object? I'm confused as to how this can ever be a Good Thing and would appreciate being enlightened.
Yes. I am not sure whether this is always a Bad Thing; it was a kind of Lazy Initialization. In this particular case, however, it was really confusing, obscure stuff deep in Delphi's VCL framework. The problem was that I had some properties of an object displayed in the Watch window while I was stepping through the code trying to find out when the method Initialize was being called. So I placed a breakpoint in Initialize and debugged some other methods. When I stepped over one method, Initialize was called right after that method returned; when I stepped through the same method line by line, Initialize was called right after its first line. As it turned out, the Watch window updated itself every time the debugger interrupted the program. After every break, the Watch window read the property, and the property's read method called Initialize. So it was not the normal flow of the program but the debugger that called Initialize. A very true Heisen Bug indeed.
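For anyone who hasn't been bitten by this, here is roughly the shape of the trap, sketched in C++ rather than Delphi; the class and names below are invented for illustration, not the actual VCL code:

  // A hypothetical object with a lazily initialized property.
  // Reading the property is not a pure query: it may call Initialize().
  class Widget {
  public:
      int Count()                  // property "read method"
      {
          if (!initialized) {
              Initialize();        // side effect: mutates the object
          }
          return count;
      }

  private:
      void Initialize()
      {
          count = 42;              // stand-in for some expensive setup
          initialized = true;
      }

      bool initialized = false;
      int count = 0;
  };

  // A Watch window entry such as "w.Count()" is re-evaluated on every
  // break, so Initialize() runs wherever the debugger happens to stop,
  // not where the normal control flow would have called it.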
Here's one really nasty typo-turned-Heisen Bug: "assert( x = 5 );"... -- An Aspirant
See Assert Idea
I tripped over one of these "debugger changed my code" bugs. Visual Studio.NET allows me to put conditions on a breakpoint, but no one bothered to check whether the code I entered was a boolean expression. It's surprisingly hard to see the difference between "x == 5" and "x = 5", especially with some fonts.
Interestingly enough, you can put any code in the breakpoint condition. And I do mean "any". -- Dean Chalker
To avoid this, Compare Constants From The Left. And see also Code Complete or Writing Solid Code.
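A minimal C++ illustration of the typo above and of the compare-from-the-left defence (the function and variable names are made up):

  #include <cassert>

  void check(int x)
  {
      // The typo: this would assign 5 to x and assert on the result (5,
      // which is true), so the assert never fires and x is silently changed.
      // assert( x = 5 );

      // What was meant:
      assert( x == 5 );

      // Compare Constants From The Left: if the same '=' typo is made here,
      // the compiler rejects "5 = x" as an assignment to a constant.
      assert( 5 == x );
  }

Most modern compilers will also warn about an assignment used as a condition, but only if warnings are turned on and heeded.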
I had a similar problem with MS Dev Studio (C++) a few years back. Calling a third-party DLL (DB driver) caused GPF problems that were unreproducible in debug mode, regardless of whether I was stepping through or letting it run (which is why it didn't show up during regular dev testing, but was instead caught by the external testers). Turns out that the compiler optimizations were turned off by default in debug mode, and turned on in release mode, and it was the optimization module in Dev Studio that introduced the bug. Turning on optimizations in debug let me watch the call into the DLL tromp the stack below the calling point for the function, thereby trashing data that didn't belong to the DLL. Nice. Turning optimization off for the compilation of the file that called the DLL function in question removed the problem. (As it turned out, I discovered the same problem with a driver DLL for a different DB by another vendor, which reinforced the probability that the problem really was with Micro Soft's C++ optimizer.) -- Earl Jenkins
I've had plenty of similar bugs that appear in release builds but not in debug builds. Without exception, the problem has always been a bug in my code. When you tell MSVC to compile in debug mode, it puts padding on the stack so that it can detect when your code trashes the stack. It will put an epilogue on each function that checks the contents of the stack padding to make sure it wasn't overwritten. That mechanism generally works, but not always. If you happen to trash the stack in a way that leaves the padding intact, then it will go undetected in debug builds, but in release builds, it will actually trash the stack. Other times, the epilogue doesn't get executed for whatever reason, so the padding just swallows the stack trash and gives you a false sense of security. -- Michael Sparks
FWIW: if you happen to overwrite the padding with zeros, it won't be detected either; but if you overwrite real data with zeros, you have got a big problem.
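For what it's worth, the general shape of the bug being described looks something like the sketch below; the exact padding and checking behaviour depends on the compiler and its settings, so this is only an illustration, not a description of MSVC's actual mechanism:

  #include <cstring>

  void broken()
  {
      char name[8];
      // Writes 12 bytes (11 characters plus the terminator) into an 8-byte
      // buffer. In a debug build the overflow may land in guard padding the
      // compiler placed after 'name', so the program appears to work, or the
      // runtime catches it in the function epilogue. In a release build there
      // is no padding, so the overrun hits whatever really lives next on the
      // stack -- often saved registers or the return address.
      std::strcpy(name, "hello world");
  }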
It is our destiny to create more and more hard-to-analyze Heisen Bugs through the creation of less intrusive debuggers. Maybe it is possible to create a systems-theoretic approach that actually makes bugs show themselves rather than hide.
This happens in event-driven debugging sessions. You can't effectively simulate Lost Focus, for example, when the debugger itself forces focus changes.
For the few occasions when I've debugged a problem like that (and knew that was the area to suspect), I've had luck using "remote debugging". That way the debugger's UI can't interfere with the debuggee's UI.
Ayep. Had a nasty one. A long time ago in a galaxy far far away I was working on a server application that needed to gracefully deal with DDOS-attack-level traffic. We'd slap two NICs in the thing and let it burn for days. (Running NT4 and a bunch of Micro Soft C++ code.) It would always eventually crash spectacularly and in seemingly random places in the code. Memory usage was stable and relatively low. All of our databases were elsewhere and the server was ostensibly stateless.
We switched to release-mode libraries and the problem went away. Switch back, and it would yack. We scoured the code for weeks trying to find this bit of nasty.
After searching through arcane tomes of library lore, it was revealed that the Micro Soft debug runtime libraries had a malloc counter, and a certain *ahem* then-junior-level programmer adored standard-library maps of strings, which were copied by value all over the place.
Finally, yes, the 32-bit signed malloc counter rolled over, and all hell broke loose.
And a junior programmer became a little less junior. -- Michael Wilson
When I worked at Microsoft, we saw this problem all the time: bugs that existed in live code disappeared in debug mode. The most common culprit is that somebody forgot to initialize a variable. When you run under the debugger, it first zeros out all of your memory, thus a de facto init. But when you run live, you get whatever random value was already in that memory location. If your program then assumes that the variable has already been initialized to zero, you are in trouble, with a very hard-to-find bug. I can't even count how many times this happened. It's one of the reasons that I came to the conclusion that C is a menace to the software industry.
-- codeslinger www.compsalot.com
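The classic shape of that uninitialized-variable bug, sketched in C-flavoured C++ (deliberately buggy; whether the debug runtime really zeroes memory or fills it with a pattern varies, but either way the release build sees whatever garbage happened to be there):

  #include <cstdio>

  int sum(const int* values, int n)
  {
      int total;                   // bug: never initialized
      for (int i = 0; i < n; ++i) {
          total += values[i];      // silently relies on total starting at 0
      }
      return total;
  }

  int main()
  {
      int data[] = { 1, 2, 3 };
      // Under a debugger the stack slot for 'total' may happen to hold a
      // predictable value and the answer looks right; in a live release
      // build it starts as whatever was already in that memory.
      std::printf("%d\n", sum(data, 3));
      return 0;
  }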
After going through the phases of loving C, then hating C, then entertaining the notion that C is a menace, then realizing it's the ideal tool for a huge number of rather specific things, I gradually came to realize that C isn't a problem at all; the problem is choosing C when it is the wrong tool for the job at hand. -- Craig Everett
See also: Bug Theory, Bohr Bug, Mandel Bug, Schroedin Bug, Heisenberg Uncertainty Principle, Programmer Proximity Detector
See original on c2.com