Can you dllexport/dllimport an inline function?

Date: January 9, 2014 / year-entry #8
Tags: code
Orig Link: https://blogs.msdn.microsoft.com/oldnewthing/20140109-00/?p=2123
Comments: 48
Summary: Yes, but it won't actually do much.

The MSDN documentation on the subject of Defining Inline C++ Functions with dllexport and dllimport was written with compiler-colored glasses. The statements are perfectly true, but they use terminology that only compiler-writers understand.

The short version is that all modules which share an inline function are considered to be part of the same program, so all of the C++ rules regarding inline functions in programs need to be followed.

Let's look at the paragraphs one at a time and translate them into English.

You can define as inline a function with the dllexport attribute. In this case, the function is always instantiated and exported, whether or not any module in the program references the function. The function is presumed to be imported by another program.

Okay, first of all, what is instantiation?

In this context, the term instantiation when applied to an inline function means "The code is generated (instantiated) for the function as if it had not been marked inline."

For the purpose of discussion, let's say that you have a function written as

__declspec(dllexport)
inline int times3(int i) { return i * 3; }

Suppose that you compile this into a DLL, and that DLL also calls the inline function.

int times9(int i) { return times3(times3(i)); }

What code gets generated?

The times9 function sees that the times3 function is inline, so it inlines the function body and there is no trace of a times3 function at all. The compiler generates the code as if it had been written

int times9(int i) { return (i * 3) * 3; }

That would normally be the end of it, except that the times3 function was marked dllexport. This means that the compiler also generates and exports a plain old function called times3 even though nobody in the DLL actually calls it as such. The code is generated and exported because you told the compiler to export the function, so it needs to generate a function in order to export it.
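You can check this from the outside (a quick sanity check; times.dll is a made-up name for the DLL built above):

dumpbin /exports times.dll

The times3 function shows up in the export list, under its decorated C++ name, even though no code inside the DLL calls it.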

This is not anything special about the dllexport keyword. This is just a side-effect of the rule that "If you generate a pointer to an inline function, the compiler must generate a non-inline version of the function and use a pointer to that non-inline version." In this case, the dllexport causes a pointer to the function to be placed in the export table.
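Here is the same effect in miniature, with no DLLs involved (pfn is a hypothetical name):

inline int times3(int i) { return i * 3; }

// Calls can still be inlined away as usual...
int times9(int i) { return times3(times3(i)); }

// ...but taking the address forces the compiler to emit a real,
// out-of-line times3, just as dllexport forced one into the export table.
int (*pfn)(int) = times3;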

Okay, next paragraph:

You can also define as inline a function declared with the dllimport attribute. In this case, the function can be expanded (subject to /Ob specifications), but never instantiated. In particular, if the address of an inline imported function is taken, the address of the function residing in the DLL is returned. This behavior is the same as taking the address of a non-inline imported function.

What this is trying to say is that if you declare an inline function as dllimport, the compiler treats it just like a plain old inline function: it inlines the function based on the usual rules for inlining. But if the compiler chooses to generate code for the function as if it were not inline (because the compiler decided to ignore the inline qualifier, or because somebody took the address of the inline function), it defers to the generated code from the original DLL, because you said, "Hey, the non-inline version of this function is also available from that DLL over there," and the compiler says, "Awesome, you saved me the trouble of having to generate the non-inline version of the function. I can just use that one!"

The "I can just use that one!" is not just an optimization. It is necessary in order to comply with the language standard, which says [dcl.fct.spec] that "An inline function with external linkage shall have the same address in all translation units." This is the compiler-speak way of saying that the address of an inline function must be the same regardless of who asks. You can't have a different copy of the inline function in each DLL, because that would result in them having different addresses. (The "with external linkage" means that the rule doesn't apply to static inline functions, which is behavior consistent with static non-inline functions.)

Okay, let's try paragraph three:

These rules apply to inline functions whose definitions appear within a class definition. In addition, static local data and strings in inline functions maintain the same identities between the DLL and client as they would in a single program (that is, an executable file without a DLL interface).

The first part of the paragraph is just saying that an inline function defined as part of a class definition counts as an inline function for the purpose of this section. No big deal; we were expecting that.

Update: On the other hand, it is a big deal, because it results in inline functions being exported when you may not realize it. Consider:

class __declspec(dllexport) SimpleValue
{
public:
 SimpleValue() : m_value(0) { }
 void setValue(int value);
 int getValue() { return m_value; }
private:
 int m_value;
};

The SimpleValue constructor and the SimpleValue::getValue method are exported inline functions! Consequently, any change to the constructor or to getValue requires a recompilation of all code that constructs a SimpleValue or calls the getValue method. End update.
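If you don't want those members exported as inline functions, one option (a sketch, not from the original article) is to define them out of line in the DLL's source file, so clients call them rather than compile them in:

class __declspec(dllexport) SimpleValue
{
public:
 SimpleValue();            // now defined in the DLL's .cpp file, so
 void setValue(int value);
 int getValue();           // changes no longer force clients to recompile
private:
 int m_value;
};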

The second part says that if the inline function uses a static local variable or a string literal, it is the same static local variable or string literal everywhere. This is required by the standard [dcl.fct.spec] and is what you would naturally expect:

int inline count()
{
 static int c = 0;
 return ++c;
}

You expect there to be only one counter.
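For example, if count() is shared between a DLL and its client via dllexport/dllimport (a hypothetical arrangement; next_from_dll is a made-up exported helper), both modules must see the same counter:

// In the DLL:
int next_from_dll() { return count(); }

// In the client:
int a = count();          // returns 1
int b = next_from_dll();  // returns 2: one counter, not one per module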

And the final paragraph:

Exercise care when providing imported inline functions. For example, if you update the DLL, don't assume that the client will use the changed version of the DLL. To ensure that you are loading the proper version of the DLL, rebuild the DLL's client as well.

This is just working through the consequences of the language requirement [dcl.fct.spec] that an inline function "shall have exactly the same definition" everywhere. If you change the definition in the exporting DLL and don't recompile the importing DLL with the new definition, you have violated a language constraint and the behavior is undefined.

So there you have it. The rules of inline exported functions translated into English.


Comments (48)
  1. Joshua says:

    I wonder who screwed this up.

    From past experience here it's almost always somebody.

  2. Matt says:

    The penultimate paragraph leads us to a different conclusion:

    Don't mark inline functions as dllexport.

    If you ever need the definition to change, or it does something complicated, make it a non-inline function; that way when you edit it, all of your clients will automagically pick up the change. If you /don't/ ever need it to change, mark it as inline and leave it at that. You might end up with two non-inlined versions in your process (one in yours.dll and one in client.dll), but you'll also not be calling it through a pointer, so it's all swings and roundabouts.

    If a feature makes it impossible to update your code without asking your clients to rebuild, what that really means is that it's impossible for you to update your code, PERIOD. Ergo, don't mark inline functions as dllexport, at which point all of this discussion is moot.

  3. Shelby says:

    This is similar to optional parameters in C#, and why you should not use optional parameters in a library.  The default value of the optional parameter is placed in the client's code at compile time, not in the library's code.  To change the default value in the library, you have to recompile all the clients.

  4. Cesar says:

    @Shelby: IIRC, optional parameters work that way in C++ too.

  5. Azarien says:

    The "function defined as part of a class definition counts as an inline function" part of C++ standard is evil, unexpected and confusing. Whoever decided that must have had a bad day. Or week. Conclusion: avoid defining functions as part of a class definition.

  6. Ken in NH says:

    @Shelby

    That's a rather maximalist position to hold. Perhaps instead of banning optional/default parameters from public methods in libraries, you should apply a softer rule that actually solves the problem of default parameter values being compiled in client code: don't use constants as default values; use sentinel values instead. In other words:

    public void Frob(string value = "Bits") // BAD
    {
      DoSomething(value);
    }

    public void Frob(string value = null) // Good
    {
      DoSomething(value ?? "Bits");
    }

    Later version:

    public void Frob(string value = "Bytes") // Fails!
    {
      DoSomething(value);
    }

    public void Frob(string value = null) // Success!
    {
      DoSomething(value ?? "Bytes");
    }

  7. Mordachai says:

    I come to a very different conclusion:

    inline is fine… DLLs are evil.

  8. Mordachai says:

    DLLs were designed as a binary API based on C.  They were not designed for C++.  Extending them for C++ creates this sort of issue.  DLLs should be used for C interfaces, period.  If you use them for C++, then you have a lot of subtle issues that have to be considered carefully.

    Plus, DLLs lead to DLL hell, so we avoid them.  They're great for hosting resources, but no code.

    I have never understood other developers' adherence to a technology that was designed when memory was scarce and a C binary interface was the only well-defined & stable binary interface available.

    Now, compile it all into your EXE and you're guaranteed to load what you actually use out of a library, instead of the entire library, the OS is good about paging in just what's needed, your installer is hugely simplified, and your dependency tree is massively simplified (DLL hell just doesn't exist).

  9. Joshua says:

    @Evan: In old C++, if the compiler had to de-inline, what you got was static.

  10. Gabe says:

    Steve Wolf: If you don't use DLLs, how do you handle plug-ins? OK, so maybe you make exceptions for plug-in architectures. Even worse then, how do you handle updates?

    Let's say your program can view images, so you use a 3rd-party library to decode them. However, most images are compressed, so the image decoding library uses a 3rd-party compression library. Now let's say that there's a bug in the compression library that allows a malformed image to execute code and take over your machine.

    If that compression library is loaded as a DLL, your users can search their hard drives for all instances of it, update them to make sure they're all versions that contain the bug fix, and they'll be safe.

    If that library is statically linked, there's no way to know what software is vulnerable. The security vulnerability could be embedded within any EXE that is capable of compression, whether it's viewing images, managing compressed archives, or just using compression in a file format or network protocol.

    Instead of being able to get a bug fix directly from the library vendor, users have to go to the app vendors, who may not even exist anymore or (more likely) just don't care about any software that isn't the current version. Even if it's all open source, you have to recompile every single binary affected by the bug, meaning you have to have a build environment for every one of those programs. Good luck with that.

  11. alegr1 says:

    Now there is another can of worms called template classes and functions…

  12. mikeb says:

    Anyone who tries to export a template class or function from a DLL is a programmer to be feared.

  13. Myria says:

    My standard practice for C++ DLLs is to have dllexported extern "C" functions to create an interface object.  This interface object then has nothing but virtual functions exposed externally, with no two functions having the same name.  In other words, it's similar to COM rules.
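    A minimal sketch of that pattern (IWidget and CreateWidget are hypothetical names):

    // Shared header: a pure virtual interface, so no mangled C++ names
    // and no C++ runtime types cross the DLL boundary.
    struct IWidget
    {
        virtual int GetValue() = 0;
        virtual void Destroy() = 0; // destruction happens inside the DLL
    };

    // The only exported symbol, with an unmangled C name.
    extern "C" __declspec(dllexport) IWidget* CreateWidget();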

  14. Mordachai says:

    @Ken – that's called "DLL Hell" – because there's no such thing as changing code with only positive side-effects in the real world.

    @Kantos – our software doesn't need to support plug-ins.  If it did, then I would restrict the plug-ins to a well-understood binary interface in C.  And in that case, the DLL is dependent on the app, not the other way around, so not a problem.

    It is everyone else who is stuck in a 95/9x view, thinking that DLLs actually do anything of value for them… when they mostly cause DLL hell, introduce security vulnerabilities, and increase support headaches.

    DLLs have their uses… but they're few and far between.  Windows developers were taught that DLL = goodness early on, and are still failing to grok why that is untrue and has been untrue since NT.

    Beyond plug-ins, I see no place where their value outweighs their negatives.  The usual list of "reasons" is well known to be flat-out untrue – such as memory footprint, "fixing" bugs across multiple applications, or providing security.  They do none of those things reliably or better than linking a monolithic exe, while creating a long and very anti-stability list of problems.

  15. Myria says:

    @Steve Wolf: DLLs are useful if you have codebases whose namespaces conflict.  Each module has its own symbol table, so things won't conflict.  (In UNIX, the run-time symbol table is shared, but you can use -fvisibility=hidden and such to resolve that.)

    It would be nice to be able to have link.exe build with separate symbol tables within a single .exe file.  Basically, link each module individually, then cross-link the imports to make a single .exe file.  Other than with horribly hacky solutions, I haven't seen such a feature.

  16. Crescens2k says:

    @Steve:

    Memory footprint isn't flat out untrue. Windows does keep a DLL loaded only once if there isn't any rebasing involved, so linking via a DLL does save memory when you use shared code. Also, think of the case of static CRT vs. shared CRT: you have an executable at around 32KB that uses the DLL version, and that DLL is loaded as few times as possible, vs. an executable where the same code is loaded multiple times as part of the executable. Is that really a case where the DLL doesn't have a lower footprint?

    For fixing bugs across multiple applications: that has more to do with how well designed the DLL is. In my own experience, several applications have used the same DLL for some shared code. A couple of times a bug has been found in that DLL and it was the only thing updated. The updated version was distributed and that fixed the problems. However, when I see people bemoaning this, I usually notice that they do not use well designed DLLs.

    Also, I don't see where the DLL hell is involved with modern design, since you would be using private DLLs most of the time. This means that your application would be loading the DLL from its own private set of libraries most of the time, or a well controlled set, so it can be well tested. This is something that people stuck in a 9x view can't do, because it is a more modern thing.

    In fact, with the things that NT brought with it, especially the changes in the search paths added in KB959462/Windows 7 I would say that DLL = goodness even more now.

  17. Crescens2k says:

    Made a mistake, the update is KB959426, I managed to transpose the last two digits.

  18. GuyWithHeadache says:

    One of the rare days when I get a mild headache. When I read quoted text, it gets worse. When I read Raymond's explanation, it goes away. Free medical care. Thank you!

  19. Matt says:

    @kantos: Since when does the Windows kernel support ELF files?

  20. Evan says:

    @Azarien

    I was curious if I could figure out why that was, so I took a look at Stroustrup's "The Design and Evolution of C++". I got part of the way to an answer, but still unsatisfactory.

    Early versions of C++ ("C With Classes") didn't have an inline keyword, but *did* support indicating that member functions should be inline by putting them into the class definition. So when later revisions to C++ came along to introduce inline, that behavior "had" to stay.

    Unfortunately, I don't see a description of why *that* was true. I can speculate though.

    Until relatively recently in the history of C and C++, the only way to get inline functions most of the time was to have the definition available to the *compiler*. (I'm not sure how popular LTO is even now.) That means that the definitions of the functions must be in the headers, which is the same place the definition of the class itself must be (if you want to share the class across translation units). So if you don't have "inline" available, how do you indicate whether a function should be inline? Putting the function definitions into the class definition actually *makes sense* — it means that the compiler has a way of telling inline from non-inline, and is still reasonably natural in the sense that inline defs have to go into the header anyway.

    In addition, making def'ns in the class definition *not* inline starts presenting problems, because what compilation unit would the code be generated to? I don't think linkers at that time had the ability to collapse repeated definitions across compilation units*. You "can't" use "the definition still appears in the header but outside of the class definition", because the compiler just sees one big translation unit and doesn't know where the boundaries of the header and not-header are**. So the only other option I can think of would be to say that functions that are defined in the class definition are *static*, which seems just as confusing and surprising as making them inline.

    Anyway, like so many things in C++ that are awful if you look at them from the standpoint of today, I'd say that aspect actually developed pretty naturally if you accept a pretty dogmatic amount of "must maintain backwards compatibility!" as your goal.

    * Though now that I'm thinking this through more, I don't know what would happen when compilers at that time would refuse to inline a function defined in the class definition, so something has to give in this part of my argument, and maybe it's not so good. Maybe you'd just get a linker error and have to move that function out of the class.

    ** Yeah, practically speaking it kind of does, but it never actually *uses* that information except for out-of-band information like error messages and debugging info. Making the meaning of something change depending on whether it appears in a header or source file would probably be the most surprising option out of all of these. :-)

  21. Ken Hagan says:

    @Steve: If you have a common library implemented as a DLL, you can fix bugs in that library for all applications that use it, whether you are aware of the applications or not.

    @Evan: a very good point about inline and old linkers. In fact, most of the confusions surrounding inline would have been avoided if the keyword had been named "duplicate" (or "selectany"!). The only *sure* thing about such a function is that you can define it in several translation units and the compiler/linker is allowed to assume that they are all identical. Actual inlining is then merely a permitted optimisation for the code generator.

  22. kantos says:

    @Steve: You might want to re-learn how DLLs are loaded on Windows, as you seem to be stuck in a 9x/XP era of assumptions. Any DLL marked as being without a fixed base address and with ASLR enabled is considered to basically be Virtually Position Independent Code (vPIC)*. The OS will load that DLL into physical memory once, when it is first loaded by a user mode process, and then keep a reference count of how many processes are using it. Because the DLL is vPIC, the OS can load it into the virtual address space of any user mode process wherever it needs to. Thus many processes share a single page for a DLL; the relocation table is the only non-shared data, and any global variables are copy on write and will be shared until modified. So unless you're building your DLLs for an ancient version of Windows, they are probably not as heavy as you think.

    * The PE format doesn't actually support Position Independent Code in the truest definition. Theoretically the kernel does as it supports ELF binaries, but I'm not sure what's involved with getting an ELF binary to load on Windows and DLLs are by specification PE.

  23. Harry Johnston says:

    @Steve: you're not entirely alone; I'm also of the opinion that DLLs Are Bad.  Perhaps we should form a club. :-)

    @Crescens2k: sure you save a small amount of memory.  When was the last time that mattered?

    @Gabe: note that newer architectures generally put plug-ins into a separate process anyway.

  24. alegr1 says:

    >Anyone who tries to export a template class or function from a DLL is a programmer to be feared.

    Too late. See MSVCPxx.DLL

  25. Myria says:

    kantos: Programs on Windows can be position-independent if the compiler generates such code.  The PE format supports relocation, but there is very little in PE that actually *requires* relocation.  The only things I can immediately think of that require the use of a relocation table are __security_cookie and /SAFESEH.  That's only because IMAGE_LOAD_CONFIG_DIRECTORY_2 has pointers instead of RVAs for these fields, and these values are needed by ntdll.dll prior to a single instruction of the DLL executing.  No machine code has to be relocated, just an .rdata table.

  26. Matt says:

    I'd like to see how all of those people suggesting the death of DLLs interact with Windows APIs if not through some kind of DLL interface. I can only presume they are writing inline assembler to do syscalls directly against the kernel, have never programmed an application for Windows, or spend their time in the lofty heights of JavaScript, VB and C#-sans-pinvoke and have never questioned how their fluffy languages interact with the cold hard metal of the system as a whole.

    DLLs might be evil, but I defy you to come up with a better solution to the problem they solve; namely the ability to modularize code and have updates for the modules independent of the application. Even Linux has DLLs; they just call them Shared Objects (.so files).

  27. JRB says:

    With some C++11 wizardry you can even use DLLs to share objects easily between Visual C++ Debug/Release and GCC, and use std::string, vector, tuple, etc. in the interface between the two. This can save you the headache of having to rebuild libraries multiple times when you want to upgrade/change compilers on Windows.

  28. Gabe says:

    Harry Johnston: Newer architectures may run plug-ins in a separate process, but that doesn't mean each plug-in is a separate process! The standard pattern is to use a single plug-in container process (like audiodg or splwow64) to load all plug-in DLLs.

    Can you imagine if plug-ins were all separate EXEs that had to run in their own process? Photoshop would be 100 processes instead of 1! Each plug-in would have to have the code to communicate with its host.

  29. Engywuck says:

    "Each plug-in would have to have the code to communicate with its host."

    Microkernels done in userspace?

  30. AC says:

    @JRB

    "With some C++11 Wizardry you can even use DLL's to share objects easily between Visual C++ Debug/Release and GCC and use std::string, vector, tuple, etc in the interface between the two."

    Now that makes me genuinely curious how you would do that?

    Release/Debug tend to have different class definitions, violating the ODR across modules, and VC and gcc have a different ABI.

    Could you elaborate on that part or link to an explaining article?

  31. Joker_vD says:

    @Gabe: Well, even 100 processes wouldn't be that bad if there were a way to reliably bring down the whole tree of child processes. Otherwise, Windows already has built-in message passing, and shared memory regions. So not entirely unreasonable, but yeah, in-proc is still way faster, mainly because all the border-crossing checks'n'security aren't present.

  32. JDT says:

    Isn't it the case that if I write "inline" before a normal function definition, then it is still up to the compiler whether or not it inlines that function or not?

    The wording of the dllexport documentation you quoted seems to indicate that dllexport-ed functions marked inline are ALWAYS inlined — is that right?

    Also, are function definitions appearing within a class definition "inline" in the sense that it is as if they were marked inline (i.e. they might be inlined and might not), or are they guaranteed to be inlined?

    [Whether an inline function actually gets inlined is at the discretion of the compiler. This is mentioned in the quoted documentation where it says "subject to /Ob specifications". -Raymond]
  33. JDT says:

    Just to balance the discussion on using optional parameters in C#: if you use method overloads in C# interfaces, and export these to COM, then this interface is ugly and brittle, because COM doesn't support overloads so it has to mangle the names; on the other hand, if you use optional parameters, then you can provide a cleaner interface safe to expose via COM.

    On the third hand you can just customise the names you export from C# to give them unique names.

    On the fourth hand, why are people so down on optional parameters? Surely you just make sure you don't change the default values, and everything is fine.

  34. Joshua says:

    @Matt: We actually have that in Linux. The syscall ABI never gets breaking changes, so anybody who preferred the static link rules could have them. I prefer to keep a static set of  tools around. One of my older systems has 3 different sets of libraries to run software from 3 eras.

  35. Daniel says:

    @JDT: Inline functions are not ALWAYS inlined. For example, if you create a pointer to the function (let's say you want to use it as a callback method somewhere), it's plain impossible to inline that call.

    Same applies to recursive functions: The function might be inlined (helps performance if the recursive part is only rarely called) but any recursive call will not be.

    I'm not sure how (or if) this works in C++, but if you want to be able to call the function by RTTI you also need a non-inlined version.

    About optional parameters: It's plain impossible (except for trivial cases) to define a default value that never changes. (Just as it's impossible to write a detailed specification before actually starting to develop an application: the user will always find some necessary changes later on.)

    DLL Hell:

    I think many of the defenders of DLLs don't realize what exactly a DLL hell is…

    While there are some good reasons for DLLs (like accessing the system API), there are serious issues if you depend on shared DLLs (and yes, I've seen "shared by several processes" in almost all comments): Let's say you have 4 applications that use 4 shared DLLs with 2 versions each (and every developer knows that 2 versions per DLL is EXTREMELY low). This makes a total of 16 combinations between the DLLs, so you must now test each of your applications with 16 different DLL configurations.

    You might be able to eliminate some combinations by checking the version at startup (apps will fail fast due to an "unsupported" DLL version). But you still cannot guarantee that all applications will play by the rules (and one may just install an older DLL version on top of a new one…)

    So the only way to play safe is to use "private" DLLs. But in that case you can also ship a monolithic exe. (If anything changes for the app, you need to ship some module(s) anyway, so you could always ship an exe instead.)

    So finally: as I said, there are a few reasons for DLLs:

    – Plugin architectures

    – 3rd party components supplied in different languages (COM interfaces…)

    – System/OS Libraries (well, still DLL hell, but at least not our responsibility)

  36. JRB says:

    @AC

    I presented the underlying techniques at C++Now. The link to the presentation is github.com/…/easy_binary_compat.pdf

    The cppcomponents project at github.com/…/cppcomponents takes that and builds on it to produce a full component system with factories, Constructors and static functions, delegates, and events. It is basically like C++/CX except it works on Windows 7 as well as Linux, and only uses standard C++ so it works across multiple compilers.

    Take a look at plugin under examples. It demonstrates a simple way to write plugins. The build script will make the exe with g++ and the dll with Visual C++ (you need the 2013 version). The unit tests cover the supported features, exercised with a g++ exe / Visual C++ dll and vice versa. I am working on writing better documentation for this.

  37. Random User 5937128 says:

    When it comes to DLLs, it helps if you can actually tell what version something is. One product from the vendor my company supports has just shy of 200 binaries (EXE/DLL/OCX/etc). 20% of those do not have a VERSIONINFO resource at all. Another 20% have a resource, but have said "1.0.0.1" since forever. And all of them have had multiple revisions over time without updating the version number. These days, if I have to know for sure which is the newest, I end up having to dig into the IMAGE_FILE_HEADER to find the TimeDateStamp. @_@

  38. Mark says:

    @Daniel

    RE: DLL Hell

    I'm pretty sure most people reading this site do realize what DLL hell is, however the defenders of DLLs are talking about a different use case than you are. DLLs are perfectly good when (for instance):

    a) You have multiple applications that rely on the same library that *you* wrote and is under your control. E.g. Office

    b) You're in a more complicated setup, where the user is responsible for the necessary libraries (mostly happens with other developers/dev tools), and the user can update a bunch of components by updating the library.

    And yes, we all realize that there can be tons of fun that can be had by everyone when some application comes along and installs its own version of DirectX that is 5 years outdated on your lovely 9x box, but that's largely been solved by Windows not letting applications do that anymore and making them keep their private DLLs to themselves.

  39. Anil says:

    Ok, I'll bite on the off-topic about DLL benefits.

    Here's a few more to consider:

    1) Ability to have different exception handling rules (throw vs non throw)

    2) Ability to use different runtimes

    3) CoCreateInstance

    4) Delay loading and the subsequent ability to have fallback code (Oh you haven't downloaded <foo>? No worries.) See the sketch after this list.

    5) Namespace resolution, as already stated, such as the hInstance parameter in your RegisterClass WNDCLASS struct.
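    As mentioned in item 4, the fallback idea can be sketched with explicit run-time probing (true delay loading is a linker feature, /DELAYLOAD, but the effect is similar; Frob, foo.dll, and SlowPortableFrob are all hypothetical):

    #include <windows.h>

    typedef int (WINAPI *FROBFN)(int);

    int SlowPortableFrob(int x); // built-in fallback (hypothetical)

    int FrobWithFallback(int x)
    {
        HMODULE h = LoadLibraryW(L"foo.dll"); // the optional component
        if (h) {
            FROBFN pfn = (FROBFN)GetProcAddress(h, "Frob");
            if (pfn) return pfn(x);
        }
        return SlowPortableFrob(x); // no worries: fall back gracefully
    }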

  40. Dimiter 'malkia' Stanev says:

    One more for DLLs – Better link times.

    With DLLs one can optimize a significant amount of code isolated as one or several libs with LTCG (LTO) – for example WebKit, Qt, etc. Such libraries are too big to fit into the daily compile / link (statically) routine. I'd rather have a small .exe with lots of big DLLs than one big exe that I need to recompile every time.

    Another + for DLLs is FFI (foreign function interface). LuaJIT, Python, Common Lisp and many other runtime implementations of the said languages deal much better with a compiled DLL, rather than going through a lot of hoops to link statically with foreign code (like Go, and some other systems).

  41. mcmcc says:

    Having lived through the bad ol' days of building a multi-million line application on a pre-shared-library AIX platform, I can assure you of one thing:  dynamically-loaded libraries are a _very_ good thing.  Be careful what you wish for…

  42. Harry Johnston says:

    @mcmcc and @Dimiter: that's all down to how efficient (or not) your build tools are.

    @Anil: common wisdom says that having a DLL with a different runtime than the executable is bad news, because (for example) you might allocate memory in the DLL and try to free it in the executable, hence boom.  My opinion is that if the library is properly designed this shouldn't happen anyway, but it does mean one more thing to worry about.  In any case, there's no fundamental reason why you couldn't do this at build time if you really wanted to.  Same with exception handling.  Also, there's no need for fallback code if all the code you want is already in your executable.  And so on.

    @Gabe: I don't see any reason why Photoshop's built-in "extensions" couldn't be statically linked.  I wonder how many different third-party extensions the typical user has installed?  But, yes, in Windows you sometimes have no realistic option other than DLLs.  That doesn't make them a good thing, just a necessary one – and only because Windows doesn't provide better alternatives.

    @Matt: I don't know about Steve, but I don't object much to the Windows API being provided by way of DLLs.  They work reasonably well for that particular purpose and in that particular context.  (The only annoying thing I can think of at the moment is that they sometimes start up threads, but so long as you're aware that this may happen it isn't all that big a deal.)

  43. Dimiter 'malkia' Stanev says:

    Photoshop, Autodesk, XSI, etc. based plugins are a whole business of their own. There are companies (rendering, simulation, asset management, etc.) whose main business is these plugins. And often they have to provide for many different versions of the products, and make sure they are compiled with the same compiler the original product was compiled with.

    Harry: I don't really see how static linking would work in practice. In our gamedev studio we write plugins for our artists that export model/animation/etc. data for our pipeline. I can't imagine giving them a newly compiled version of Photoshop/Maya/MotionBuilder to work with. And why would Autodesk allow us to do so?

    DLLs (and .so, .dylibs) are really something that enables people to make money. Without them, you would be constrained to some form of IPC mechanism, which, while it sounds great and may be the right choice (in the very long term), would be terrible for certain solutions. For example, imagine that you have to ask your host application (say a 3D modelling one) for all your bones, weights, constraints, etc., to perform some kind of real-time simulation: going through IPC would work, but it would complicate the whole thing, and might slow it down to a crawl.

    Once you are in RPC land, you also have to openly deal with broken communications (even if it's on the same computer): for example, the main application was shut down, or your plugin was killed. You have to come up with plans to restart, re-establish the connection, get the data again, etc., etc.

    And then imagine a Photoshop effect written as a DLL that deals with hundreds of megabytes of data, doing it all through IPC instead of directly touching the data.

  44. Joshua says:

    @John Doe: Right now, they change in service packs.

  45. Crescens2k says:

    @Harry Johnston:

    In general, I disagree with you about dynamic libraries and executables with different runtimes being bad. I also don't think much of "common wisdom/sense", because quite often it is neither particularly common nor particularly wise in these circles.

    Anyway, the biggest problem is sharing things across the boundary while assuming that both sides are using the same runtime. If you are in control of everything, then this isn't so bad. But even if you aren't, there are simple ways of dealing with this without too much thought.

    Providing a deallocation function helps, and if you are using C++, smart pointers would deal with a lot of this.

  46. Harry Johnston says:

    @Dimiter: I was talking specifically about the plugins that are shipped with Photoshop.  Obviously, third-party plugins can't (usually) be statically linked.

    I'm no expert, but doesn't COM already provide solutions to most of the IPC problems you discuss?  You can also use shared memory to avoid having to transfer large amounts of data over IPC.

    As a counter-example, consider what happens when a DLL-based plugin is built with a compiler that changes the FPU settings.  (Based on a true story!)

    All that said, it may well be that, despite the problems, DLLs are still the best currently available solution for these particular scenarios.  That doesn't mean we need to use them for everything.  (Note in particular that plugins are unlikely to be trying to import or export inline functions!)

    @Crescens2k: no, I agree with you, it shouldn't be necessary for the DLL and the executable to use the same runtime.  But a lot of people do get incensed about this, and it does create some risks which need to be managed.

  47. Harry Johnston says:

    @Crescens2k: the other case, of course, is where the DLL and the executable really do need to use different runtime versions, but can't because both of the runtime DLLs have the same name.

  48. John Doe says:

    For the purpose of full static linking, the Windows syscalls would have to be documented. Would it really be such a bad thing? I can see a few more things, other than static linking, where having syscalls documented would be *extremely* useful.

    Of Course™, Win32, COM, GUI et al would be a no-no in such applications.

Comments are closed.

