Date: April 26, 2018 / year-entry #97
Tags: code
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20180426-00/?p=98605
Comments: 16
Summary: Let's look at the problem rather than the question.
A customer wanted their program to operate in a special mode when it is being debugged from inside the Visual Studio development environment. For example, when run from Visual Studio via the "Start with debugging" menu command, it should display a diagnostic window that contains additional information. You should think twice before you do this.
Sure, you could use a function like IsDebuggerPresent, but there is a better approach.
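For reference, such a check could be written with the documented Win32 function IsDebuggerPresent. This is only a sketch: the non-Windows branch is a stub added so the snippet compiles everywhere, and note that the real call answers only "is some user-mode debugger attached", not "did Visual Studio launch me".

```cpp
#ifdef _WIN32
#include <windows.h>

// IsDebuggerPresent is a documented Win32 API: it reports whether any
// user-mode debugger is attached to the calling process -- not which one.
static bool DebuggerAttached() { return IsDebuggerPresent() != FALSE; }
#else
// Stub so this sketch compiles on non-Windows systems.
static bool DebuggerAttached() { return false; }
#endif
```

Because the function cannot distinguish Visual Studio from, say, WinDbg, it is a poor proxy for "running from the IDE", which is part of why the recommendation that follows is different.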
You should have a command line switch that enables the diagnostic window. You can configure Visual Studio so that when you run the program under Visual Studio, it gets the command line switch. That way, when you have a bug that goes away when the diagnostic window is open, you can remove the command line switch and debug it. (It also means that when run outside Visual Studio, you can give the special command line switch and get the diagnostics window even though no debugger is running.)
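A minimal sketch of such a switch; the name `--diagnostics` is invented for illustration (a real Windows program might spell it `/diagnostics`):

```cpp
#include <cstring>

// Returns true if the given switch appears anywhere on the command line.
static bool HasSwitch(int argc, char** argv, const char* name) {
    for (int i = 1; i < argc; ++i) {
        if (std::strcmp(argv[i], name) == 0) return true;
    }
    return false;
}
```

In main, `bool showDiagnostics = HasSwitch(argc, argv, "--diagnostics");` does the job, and Visual Studio's project properties (Debugging, Command Arguments) can supply the switch automatically for debug sessions.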
As a compromise, you could enable the diagnostic window by default when a debugger is detected, while still honoring the command line switch to force it on or off.¹

¹ Maybe the diagnostic window calls some functions that have side effects which are masking a bug in the program. For example, the diagnostic window might perform extra logging, which introduces a change in timing that masks a race condition.
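The compromise can be expressed as a small policy function: an explicit switch always wins, and debugger detection supplies only the default. The `DiagMode` names here are invented for illustration.

```cpp
enum class DiagMode { Default, ForceOn, ForceOff };

// Decide whether to show the diagnostic window. An explicit command-line
// choice overrides detection; otherwise fall back to "on when debugged".
static bool ShouldShowDiagnostics(DiagMode mode, bool debuggerAttached) {
    switch (mode) {
        case DiagMode::ForceOn:  return true;
        case DiagMode::ForceOff: return false;
        default:                 return debuggerAttached;
    }
}
```

Keeping the override path means a bug that hides when diagnostics are on can still be chased with diagnostics forced off, even inside the IDE.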
Comments (16)
I had to write this code once. I had some project that needed a slightly different codepath if its executable image was projectname.exe or projectname.vshost.exe (and yes it was the actual controlling variable — debug attach to process behaved normally) so I just checked for my executable image name.
I’ve done stuff like this. Most notably when working with WinForms and inheriting from built-in components, you’ll find design-time code sometimes will execute portions of your inherited class code you never intended to be run at design time and break the design-time experience. I think there are a couple of flags… this.DesignMode and some LicenseSomething class… that together can be used to detect design mode. That’s the approach I usually use. Then I just bypass any inherited code so my custom code won’t run at design time and do things like mess with the control’s size.
Ah yes; we solved this a “simpler” way. The code that messed up in design time was modified to check for application not initialized and do nothing in that case (RPC calls in design time can fail rather spectacularly when the local RPC endpoint isn’t started yet).
You can also use preprocessor directives to ensure the diagnostic code is only compiled into DEBUG builds and not RELEASE so that your delivery does not have these tools exposed (if that is what you want).
But as stated you want to be careful since RELEASE could then contain bugs you’d never find by testing DEBUG. So following the advice of being able to disable the diagnostic tools even in a DEBUG build is wise.
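A sketch of that combination, assuming the standard `NDEBUG` convention (defined for release builds; MSVC projects may use `_DEBUG` instead): the diagnostics are compiled out of release builds, but debug builds still honor a runtime switch so they can be turned off.

```cpp
// Returns whether the diagnostic window should appear: never in release
// builds (compiled out), and in debug builds only when the runtime
// switch asks for it.
static bool ShowDiagnostics(bool switchEnabled) {
#if defined(NDEBUG)
    (void)switchEnabled;  // diagnostics compiled out of release builds
    return false;
#else
    return switchEnabled; // debug builds still honor the runtime switch
#endif
}
```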
The only issue with this suggestion is that it is possible to run the debug build independent of the IDE. So if you just check that it is a debug build then things may fail spectacularly if you run the debug build without a debugger attached.
If you want to have a ‘diagnostic/developer mode’, it’s better to switch it on with a command line arg.
I’d also recommend leaving this in your release builds too. It will come in useful when you are trying to resolve an issue on a customer’s machine over a remote control session.
The advice here is good: choose your mode explicitly rather than implicitly inferring one from the environment.
But, as we’ve learned prior, running under a debugger already changes things about how your program runs. https://blogs.msdn.microsoft.com/oldnewthing/20130103-00/?p=5653
A really hacky way to accomplish the task originally posed would be to detect whether your process is using the debug heap or the regular heap. :-S
You can also get this behavior for free just via Visual Studio. I’ve seen people complain about C# bugs that “disappear” under the debugger because the debugger display support for reading object properties causes things like lazy initialization to not be so lazy, or static fields and properties to be initialized at a very different time than they would be under normal execution.
This also reminds me of a story told by a EE professor in school about how in one of his own final projects as an undergrad, his team found that their mass of breadboards and wires would only function correctly if a multimeter was attached at a certain place. Without enough time left to figure out the real cause, they just left it connected during pass-off with the TA.
So, if you run into this problem with your software, just ship your product with WinDbg and have it set to attach automatically when the program is launched! :)
This reminds me of my intro to assembly class in college. It was taught on a VAX 11, and I at one point wrote a program that didn’t work when run, but ran perfectly under the debugger.
After hours of troubleshooting, I discovered that the VAX debugger initialized all of the registers to zero at startup, and my code was failing to clear a register before accumulating into it.
Hmm… IMO this approach is worse than just “touching” any class that you believe is lazily initialized and could cause problems in the constructor, and converting any LINQ queries to arrays before accessing them.
I remember having a problem loading my Windows Live Hotmail. (That’s what it was called back then.) I even authorized Microsoft support to access my account. (I didn’t give them any username or password though.) But they couldn’t reproduce the problem, whereas I managed to use several VPNs (back then, public VPN wasn’t a thing) to verify that this problem could occur from anywhere in the world.
I guess it was one of those problems that couldn’t be reproduced from within the Microsoft campus.
Following a chain of links, I came to a four-year-old post alluding to a customer who wanted to be able to tell whether their DLL had been loaded by a service. Many comments touched upon this and other which-execution-context scenarios, without ever mentioning one obvious rationale for such curiosities: the detection of licensing violations. You want to run our software as a service? That’s extra. Or maybe we deploy all of our modules, but you only get to run the ones you have paid for. Or maybe our licensing enforcement depends upon a service that is irresistibly tempting to tamper with. Etc., etc., etc.
“You want to run our software as a service? That’s extra.”
I wouldn’t mind one bit if this were difficult/impossible to enforce!
That seems a fairly vandalicious attitude towards the commercial reality that if customers don’t pay the producer doesn’t stay in business.
Using a service as an intermediary would be the obvious technique to attempt to bypass multi-user licensing and let a single-user license support multiple actual users.
Stealing isn’t nice. If you don’t like the licensing arrangements on offer, buy somebody else’s product whose licensing you approve of. If you can’t find that in the marketplace, there’s probably a reason.
The original comment specifically mentioned charging extra for the ability to run the software as a service, as opposed to a “regular” application. That would be analogous to selling someone a vehicle and declaring that they must not use it for business purposes unless they pay the manufacturer/dealer more money.
Similarly, if I want to run it on a different OS using a compatibility layer, I have a right to do that – they may not provide support for it, but that’s different from forbidding it altogether!
I’ll just mention that – at least here – the cost of registering your vehicle is indeed greater if you use it for any commercial purpose than if it is just for personal transport. OK, that money isn’t going to the manufacturer, but still…