Having worked in C# and Visual Basic .NET for a long while, and having not used C++ much for two years, I now find C++ strange. To me, it's not the pointers that are odd; it's the native types and the native operators. Especially the native operators.
Consider the multiplication of two variables of type int. What happens if there is an overflow? In C#, an exception of type OverflowException is thrown. In C++, you merely get an incorrect answer! To quote the C++ Language Reference published in MSDN:
Note Since the conversions performed by the multiplicative operators do not provide for overflow or underflow conditions, information may be lost if the result of a multiplicative operation cannot be represented in the type of the operands after conversion.
Weird! I also seem to remember that this glaring flaw can't be blamed on Microsoft; every other C++ compiler does the same thing. So what do you do if you want a type-safe multiplication of two integers, that is to say a multiplication routine which will give some error indication if there is an overflow? Off the top of my head I can't recall any function that will do it for you. Apparently you have to make your own. It would not be terribly difficult, but it's your problem.
(News Flash: right after I wrote the previous paragraph, Meg Weaver did a Google search and discovered that someone at Microsoft came to the same conclusion back in January 2004, well before this article was written. This person, David LeBlanc, wrote an elegant class called SafeInt which will throw an error if you try to do anything unsafe with an integer. Kudos to Mr. LeBlanc and the folks at Microsoft who supported him. The question is, why did it take this long?)
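For the common 32-bit int case, a checked multiply is easy to sketch by widening to 64 bits before the check. This is my own illustration of the idea, not Mr. LeBlanc's SafeInt:

```cpp
#include <limits>
#include <stdexcept>

// Multiply two ints, throwing if the product cannot be represented
// in an int. Widening to 64 bits first makes the range check trivial.
// (Assumes long long is wider than int, which is true on common platforms.)
int checked_mul(int a, int b)
{
    long long product = static_cast<long long>(a) * b;
    if (product > std::numeric_limits<int>::max() ||
        product < std::numeric_limits<int>::min())
        throw std::overflow_error("integer multiplication overflowed");
    return static_cast<int>(product);
}
```

SafeInt is more general (it covers all the operators and all the integer types), but the principle is the same: do the arithmetic in a wider type, or otherwise detect the wrap, and report the failure instead of returning garbage.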
Another weird thing is exception handling. When you throw an exception, you can throw anything: a pointer, a string, an instance of your own class, a primitive, whatever you like. The Standard C++ Library defines an exception class that you can use, if you don't mind the overhead of the library. Catching exceptions thrown by third-party code can be a real problem if you don't know what type of exception is being thrown. I don't know about you, but merely finding out from a catch(...) block that an unknown exception was caught is very unsatisfying; I want at least an error message giving me a hint about what went wrong. (Hint: if you can't figure out what's going wrong, disable your catch(...) block and the debugger might tell you.)
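The usual pattern for getting at least a message out is to catch std::exception by reference before falling back to catch(...). A minimal sketch (the thrown message here is just a stand-in for whatever library code might throw):

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Third-party code might throw anything. Catching std::exception by
// reference first at least recovers a message when the thrower used
// the standard hierarchy; catch(...) is the last resort.
std::string describe_failure()
{
    try {
        // Stand-in for a call into library code that throws.
        throw std::runtime_error("disk quota exceeded");
    }
    catch (const std::exception& e) {
        return std::string("error: ") + e.what();
    }
    catch (...) {
        return "unknown exception (no further information)";
    }
}
```

If the third-party code throws something outside the std::exception hierarchy (a bare int, a char*, a custom class), you land in catch(...) and learn nothing, which is exactly the complaint above.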
The reason I got back into C++ after such a long absence is that I was having a COM and .NET interop problem. Specifically, I was having trouble with a SQL Server job running .NET code via a script. The script engine was returning a "class does not support automation" error which I knew to be incorrect. Since I was getting no satisfaction any other way, I decided to write a custom interop COM object, rather than rely on the usual interop mechanism, which didn't seem to be working. Anyway, that's neither here nor there. Some hints on how to make unmanaged code call managed code follow.
Make sure that you're using the right debugger. If you manually attach to a running process in order to debug, just check the boxes to debug both native and managed code, and you're in business. If you have an .exe as part of your solution that you use specifically to test your code, then by default it may not know that it is supposed to start with both the native and managed debuggers. If this happens, you won't be able to trace into certain code blocks, and you will have no information on some uncaught exceptions. (This revelation cost me several hours!) To make sure you start both debuggers, pull up the Project Properties dialog for your driver application. Make sure you're setting properties for the Debug configuration (the drop-down box at the top of the dialog). In the left pane, click to expand Configuration Properties, and click Debugging. In the right pane, set Debugger Type to "Mixed" (meaning both the native and managed debuggers).
Aside: seemingly the strangest thing about writing mixed code is that you can take some perfectly good working unmanaged code and compile it with the /clr flag. You expect the /clr flag to make everything work differently somehow; surely, your MFC and ATL code won't compile! But it does, and the finished executable doesn't seem any different. In fact, nothing changes until you specifically tell the compiler that you want it to use managed code or emit managed code. You can even totally neglect to tell the compiler which parts of the code are managed and which aren't, and it just seems to figure out what to do without any trouble.
Set your Project Properties properly. (Say that fast five times.) If you are starting from an unmanaged project and you want to change it to include managed code, you will need to make a few changes in the Project Properties dialog. (If you lose these instructions, you can deduce the necessary changes from the build error messages.) After you make the changes, make sure you can build all your build configurations. These changes are for Visual Studio 2002; other versions are probably similar.
| Configuration | Property page | Property | New value | Effect | Notes |
| --- | --- | --- | --- | --- | --- |
| All | Configuration Properties > General | Use Managed Extensions | Yes | adds /clr | This setting carries down to C/C++ > General > Compile As Managed |
| All | Configuration Properties > C/C++ > General | Detect 64-bit Portability Issues | No | removes /Wp64 | Suppresses warnings about unsafe code in #included files |
| All | Configuration Properties > C/C++ > Code Generation | Enable Minimal Rebuild | No | removes /Gm | |
| All | Configuration Properties > C/C++ > Code Generation | Basic Runtime Checks | Default | removes /RTC1 | |
| Debug | Configuration Properties > C/C++ > General | Debug Information Format | Program Database (/Zi) | changes /ZI to /Zi | |
MFC, ugh! After using several well-designed object-oriented frameworks, such as the one in Delphi (which I knew as the Visual Component Library, and which is now called something else) and the .NET Framework, I have a healthy appreciation for all the things a well-designed framework can do. Going back to MFC makes it obvious how badly designed that framework is. It's obvious that it was written by a bunch of C hackers ("hacker" meaning "expert coder" in this case) who learned object-oriented programming, sort of, at the last moment and then decided to write a framework. Over the years I've heard excuses for why it's so lousy, but I just don't believe them. After all, the Delphi framework is beautifully designed, easily extensible, uses native code, and is quite fast. That example trumps all the many other arguments that have been made in favor of MFC over the years.
MFC programming is all about the wizards -- you could write code without them, but who wants to be an expert in MFC? Not me. Anyway, the main MFC wizard is invoked when you create a new project. It's pretty straightforward if you have used MFC before. I use MFC almost exclusively these days just to make driver apps to test other things, such as COM objects. This means almost all of my MFC apps are dialog-based apps. The main trouble I have is remembering how to hook up the dialog class to the user interface elements in the dialog box.
One thing about the wizards that you must constantly keep in mind is that they are one-way only. All they do is generate code: once you click the Finish button, the wizard is gone, and you are left with the results. You can't go back into the wizard and change your mind; all you can do is edit the source code it generated for you. The wizards in MFC are pretty extensive, so if you make a mistake, say selecting SDI when you wanted MDI, you will have a lot of work to do to effect the change without the wizard, and you may be better off starting over. If I can't start over, what I do is create two sample projects using the wizards, one with the correct setting and one with the wrong setting, compare all the files to see what the differences are, and then make the changes in the real application.
To connect a dialog:

1. Give the user interface elements meaningful IDs.
2. In the Class View window, right-click on the CDialog-derived class and click Add > Add Variable...
3. In your event handler, call UpdateData(TRUE) at the beginning of the handler and UpdateData(FALSE) at the end.

Not too difficult.
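For reference, the pieces the wizard generates look roughly like the following; the control ID, member name, and handler name here are invented for illustration:

```cpp
// In the dialog class (MyDialog.h) -- m_strName added by Add Variable...
class CMyDialog : public CDialog
{
    // ...
    CString m_strName;   // mapped to the control with ID IDC_NAME
    virtual void DoDataExchange(CDataExchange* pDX);
};

// In MyDialog.cpp -- the wizard generates the DDX call that ties
// the member variable to the control
void CMyDialog::DoDataExchange(CDataExchange* pDX)
{
    CDialog::DoDataExchange(pDX);
    DDX_Text(pDX, IDC_NAME, m_strName);
}

// In a handler: pull values from the controls, work with them, push back
void CMyDialog::OnBnClickedGo()
{
    UpdateData(TRUE);        // controls -> member variables
    m_strName.MakeUpper();   // do something with the data
    UpdateData(FALSE);       // member variables -> controls
}
```

UpdateData() just calls DoDataExchange() in the appropriate direction, which is why forgetting either call makes the dialog appear to ignore your changes.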
Another way to do it: right-click on the object in the form view and click "Add Variable...". I learned this one from Dynamic Help, which for once was actually helpful.
ATL is a nice, elegant framework which, unfortunately, isn't very easy to use. The framework is clearly designed to optimize one goal, making lightweight COM objects possible; the tradeoff is that it is more difficult to use. Fortunately the classes are so well designed that you rarely need to worry about how they work. Usually, to implement the feature you need, you just make your class inherit from yet another base class that provides the functionality you're looking for.
ATL, like MFC, is all about the wizards. The wizards are still one-way, but the stakes are lower: if you forget one check box in the wizard, the difference is usually just that your generated class does or does not derive from a particular base class, which isn't hard to fix by hand. Of course, it is still far preferable not to make a mistake in the wizard in the first place.
To get started making a COM object, first create a new "ATL Project". The project wizard offers an "Attributed" option; an attributed project uses .NET-style attributes instead of an IDL file, which is a bit more compact, but makes it less obvious what is going on.
Once you have your project, right-click the project in the Class View and click "Add > Add Class...". Then double-click ATL Simple Object. Fill in the Short name in the dialog that comes up to populate all the other boxes, and you will have a CoClass with a custom interface. (A CoClass is just a container for interfaces; it's not a class like a C++ class. It is implemented by a C++ class, but that's not the same thing.)
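With the Attributed option checked, the generated code looks roughly like the following. The names are placeholders and the attribute details are from memory, so treat this as a sketch of the shape rather than exact wizard output; the wizard fills in real GUIDs where the "..." appear:

```cpp
// Sketch of an attributed ATL Simple Object with a dual interface.
[ object, uuid("..."), dual ]
__interface IMyInterface : IDispatch
{
    [id(1)] HRESULT DoSomething();
};

// The coclass is the COM-visible container; the C++ class implements it.
[ coclass, uuid("..."), threading("apartment") ]
class ATL_NO_VTABLE CMyObject :
    public IMyInterface
{
public:
    STDMETHOD(DoSomething)()
    {
        return S_OK;  // your implementation goes here
    }
};
```

The attributes replace what would otherwise be a separate IDL file plus registry script, which is exactly the "more compact, less obvious" tradeoff mentioned above.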
In general I have a high opinion of Native COM Support -- it works well and is easy to use most of the time. If you've forgotten, Native COM Support is Microsoft's non-standard enhancements to C++ to enable COM objects to be used relatively easily. However, it too has its difficulties.
Here is the cookbook approach to enabling an MFC app to call a COM object, for those who don't do it often.
#import "MyComObject.tlb" named_guids
MyComObject::IMyInterfacePtr p;

Note: the CreateInstance() method takes either a CLSID or a ProgID. This confused me at first, because the smart pointer class is named after your interface with "Ptr" appended, which makes it seem as if its CreateInstance() method would take an IID; but this is not the case. (Actually, that makes sense: an interface is just an interface, and any number of CoClasses can implement the same interface.) Once you have correctly created an instance of the smart pointer class, you can use the pointer as if it pointed to an instance of an ordinary class, using the "->" operator, without worrying that it actually points to a COM object. The smart pointer class takes care of the calls to AddRef() and Release(), so you don't need to worry about those either.
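Putting the cookbook together, the calling code ends up looking something like this. MyComObject, MyCoClass, and DoSomething() are placeholders for your own type library names, and this only compiles under Visual C++ since #import and _com_error are Native COM Support features:

```cpp
// Requires the #import line above, which generates the MyComObject
// namespace and the IMyInterfacePtr smart pointer class.
#include <comdef.h>

void CallMyObject()
{
    CoInitialize(NULL);
    try {
        MyComObject::IMyInterfacePtr p;
        // CreateInstance takes the CLSID (or ProgID) of the CoClass,
        // not the IID of the interface.
        HRESULT hr = p.CreateInstance(__uuidof(MyComObject::MyCoClass));
        if (SUCCEEDED(hr))
            p->DoSomething();   // hypothetical interface method
    }
    catch (const _com_error& e) {
        // Native COM Support translates failed HRESULTs on the
        // smart pointer's calls into _com_error exceptions.
        AfxMessageBox(e.ErrorMessage());
    }
    CoUninitialize();
}
```

When p goes out of scope, the smart pointer calls Release() automatically, which is the whole point of using IMyInterfacePtr instead of a raw interface pointer.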