Exploring New C++ and MFC Features in Visual Studio 2010
Visual Studio 2010 presents huge benefits for C++ developers. From the ability to employ the new features offered by Windows 7 to the enhanced productivity features for working with large code bases, there is something new and improved for just about every C++ developer.
In this article, I will explain how Microsoft has addressed some of the broad problems faced by C++ developers. Specifically, Visual Studio 2010 enables a more modern programming model by adding core language features from the upcoming C++0x standard, and by overhauling the standard library to take advantage of the new language features. There are new parallel programming libraries and tools to simplify the creation of parallel programs. You’ll also find enhanced overall performance and developer productivity thanks to IntelliSense and code-understanding features that scale to large code bases. And you’ll benefit from the improved performance of libraries and other features across design time, build time, compile time and link time.
Visual Studio 2010 migrates the build system to MSBuild to make it more customizable and to support native multi-targeting. And enhancements in the MFC library harness the power of new Windows 7 APIs, enabling you to write great Windows 7 applications.
Let’s take a closer look at these C++-focused advancements in Visual Studio 2010.
C++0x Core Language Features
The next C++ standard is inching closer to being finalized. To help you get started with the C++0x extensions, the Visual C++ compiler in Visual Studio 2010 enables six C++0x core language features: lambda expressions, the auto keyword, rvalue references, static_assert, nullptr and decltype.
Lambda expressions implicitly define and construct unnamed function objects. Lambdas provide a lightweight natural syntax to define function objects where they are used, without incurring performance overhead.
Function objects are a very powerful way to customize the behavior of Standard Template Library (STL) algorithms, and can encapsulate both code and data (unlike plain functions). But function objects are inconvenient to define because of the need to write entire classes. Moreover, they are not defined in the place in your source code where you're trying to use them, and this non-locality makes them more difficult to use. Libraries have attempted to mitigate some of the problems of verbosity and non-locality, but they don't offer much help because the syntax becomes complicated and the compiler errors are not very friendly. Using function objects from libraries is also less efficient because the function objects defined as data members are not inlined.
Lambda expressions address these problems. The following code snippet shows a lambda expression used in a program to remove integers between variables x and y from a vector of integers.
v.erase(remove_if(v.begin(), v.end(),
  [x, y](int n) { return x < n && n < y; }), v.end());
The second line shows the lambda expression. Square brackets, called the lambda-introducer, indicate the definition of a lambda expression. This lambda takes integer n as a parameter and the lambda-generated function object has the data members x and y. Compare that to an equivalent handwritten function object to get an appreciation of the convenience and time-saving lambdas provide:
class LambdaFunctor {
public:
  LambdaFunctor(int a, int b) : m_a(a), m_b(b) { }
  bool operator()(int n) const { return m_a < n && n < m_b; }
private:
  int m_a;
  int m_b;
};
v.erase(remove_if(v.begin(), v.end(), LambdaFunctor(x, y)), v.end());
The auto keyword has always existed in C++, but it was rarely used because it provided no additional value. C++0x repurposes this keyword to automatically determine the type of a variable from its initializer. Auto reduces verbosity and helps important code stand out. It avoids type mismatches and truncation errors. It also helps make code more generic by allowing templates to be written that care less about the types of intermediate expressions, and it deals effectively with types that cannot easily be written out, such as lambda-generated function objects. This code shows how auto saves you from typing the template type in the for loop iterating over a vector:
vector<int> v;
for (auto i = v.begin(); i != v.end(); ++i) {
  // code
}
Rvalue references are a new reference type introduced in C++0x that help solve the problem of unnecessary copying and enable perfect forwarding. When the right-hand side of an assignment is an rvalue, then the left-hand side object can steal resources from the right-hand side object rather than performing a separate allocation, thus enabling move semantics.
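For instance, a class opts in to move semantics by providing a move constructor and a move assignment operator. The following is a minimal sketch; the Buffer class and its members are illustrative and not taken from the article:

#include <cstddef>

class Buffer {
public:
  explicit Buffer(size_t n) : m_size(n), m_data(new int[n]) { }
  ~Buffer() { delete[] m_data; }
  // Move constructor: steal the rvalue's allocation instead of copying it.
  Buffer(Buffer&& other) : m_size(other.m_size), m_data(other.m_data) {
    other.m_data = nullptr;
    other.m_size = 0;
  }
  // Move assignment operator: release our allocation, then take the other's.
  Buffer& operator=(Buffer&& other) {
    if (this != &other) {
      delete[] m_data;
      m_data = other.m_data;
      m_size = other.m_size;
      other.m_data = nullptr;
      other.m_size = 0;
    }
    return *this;
  }
private:
  size_t m_size;
  int* m_data;
  // Copy constructor and copy assignment omitted for brevity.
};

A temporary Buffer returned from a function can now be moved from rather than copied, which is exactly how the rewritten standard library containers avoid unnecessary allocations.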
Perfect forwarding allows you to write a single function template that takes n arbitrary arguments and forwards them transparently to another arbitrary function. The nature of the argument (modifiable, const, lvalue or rvalue) is preserved in this forwarding process.
template <typename T1, typename T2>
void functionA(T1&& t1, T2&& t2) {
  functionB(std::forward<T1>(t1), std::forward<T2>(t2));
}
A detailed explanation of rvalue references is out of scope for this article, so check the MSDN documentation at msdn.microsoft.com/library/dd293668(VS.100) for more information.
Static_assert allows testing assertions at compile time rather than at execution time. It lets you trigger compiler errors with custom error messages that are easy to read. Static_assert is especially useful for validating template parameters. For example, compiling the following code will give the error “error C2338: custom assert: n should be less than 5”:
template <int n>
struct StructA {
  static_assert(n < 5, "custom assert: n should be less than 5");
};

int _tmain(int argc, _TCHAR* argv[]) {
  StructA<4> s1;
  StructA<6> s2;
  return 0;
}
Nullptr adds type safety to null pointers and is closely related to rvalue references. The macro NULL (defined as 0) and literal 0 are commonly used as the null pointer. So far that has not been a problem, but they don’t work very well in C++0x due to potential problems in perfect forwarding. So the nullptr keyword has been introduced particularly to avoid mysterious failures in perfect forwarding functions.
Nullptr is a constant of type nullptr_t, which is convertible to any pointer type, but not to other types like int or char. In addition to being used in perfect forwarding functions, nullptr can be used anywhere the macro NULL was used as a null pointer.
A note of caution, however: NULL is still supported by the compiler and has not yet been replaced by nullptr. This is mainly to avoid breaking existing code due to the pervasive and often inappropriate use of NULL. But in the future, nullptr should be used everywhere NULL was used, and NULL should be treated as a feature meant to support backward compatibility.
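As a small illustration (not from the original article) of the type safety nullptr adds, consider overload resolution between an integer overload and a pointer overload:

#include <cstddef>  // for NULL

void f(int);
void f(char*);

void demo() {
  f(0);        // calls f(int): 0 is an integer literal
  f(NULL);     // NULL is defined as 0, so this also calls f(int), often not what was intended
  f(nullptr);  // unambiguously calls f(char*)
  // int n = nullptr;  // error: nullptr converts to pointer types, not to int
}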
Finally, decltype allows the compiler to infer the return type of a function based on an arbitrary expression and makes perfect forwarding more generic. In past versions, for two arbitrary types T1 and T2, there was no way to deduce the type of an expression that used these two types. The decltype feature allows you to state, for example, that an expression involving template arguments, such as sum<T1, T2>(), has the type T1+T2.
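A minimal sketch of that pattern, assuming the illustrative sum function shown here, combines decltype with the new trailing return type syntax:

template <typename T1, typename T2>
auto sum(T1 t1, T2 t2) -> decltype(t1 + t2) {
  return t1 + t2;  // the return type is whatever t1 + t2 produces
}

// sum(2, 3) yields an int; sum(2, 3.5) yields a double.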
Standard Library Improvements
Substantial portions of the standard C++ library have been rewritten to take advantage of new C++0x language features and increase performance. In addition, many new algorithms have been introduced.
The standard library takes full advantage of rvalue references to improve performance. Types such as vector and list now have move constructors and move assignment operators of their own. Vector reallocations take advantage of move semantics by picking up move constructors, so if your types have move constructors and move assignment operators, the library picks that up automatically.
You can now create a shared pointer to an object at the same time you are constructing the object with the help of the new C++0x function template make_shared<T>:
auto sp = make_shared<map<string, vector<string>>>(args);
In Visual Studio 2008 you would have to write the following to get the same functionality:
shared_ptr<map<string, vector<string>>> sp(new map<string, vector<string>>(args));
Using make_shared<T> is more convenient (you’ll have to type the type name fewer times), more robust (it avoids the classic unnamed shared_ptr leak because the pointer and the object are being created simultaneously), and more efficient (it performs one dynamic memory allocation instead of two).
The library now contains a new, safer smart pointer type, unique_ptr (which has been enabled by rvalue references). As a result, auto_ptr has been deprecated; unique_ptr avoids the pitfalls of auto_ptr by being movable, but not copyable. It allows you to implement strict ownership semantics without affecting safety. It also works well with Visual C++ 2010 containers that are aware of rvalue references.
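A brief sketch of those semantics (the variable names are illustrative):

#include <memory>
#include <utility>
#include <vector>
using namespace std;

void demo() {
  unique_ptr<int> p1(new int(42));
  // unique_ptr<int> p2 = p1;        // error: copying is not allowed
  unique_ptr<int> p2 = move(p1);     // ownership transfers; p1 is now empty
  vector<unique_ptr<int>> v;
  v.push_back(move(p2));             // rvalue-aware containers store it by moving
}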
Containers now have new member functions—cbegin and cend—that provide a way to use a const_iterator for inspection regardless of the type of container:
vector<int> v;
for (auto i = v.cbegin(); i != v.cend(); ++i) {
  // i is vector<int>::const_iterator
}
Visual Studio 2010 adds most of the algorithms proposed in various C++0x papers to the standard library. A subset of the Dinkumware conversions library is now available in the standard library, so now you can do conversions like UTF-8 to UTF-16 with ease. The standard library enables exception propagation via exception_ptr. Many updates have been made in the header <random>. There is a singly linked list named forward_list in this release. The library has a header <system_error> to improve diagnostics. Additionally, many of the TR1 features that existed in namespace std::tr1 in the previous release (like shared_ptr and regex) are now part of the standard library under the std namespace.
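For example, here is a minimal sketch of two of those additions in use; the exact set of distribution classes shipped with <random> in Visual Studio 2010 may differ, so only an engine is shown:

#include <forward_list>
#include <random>

void demo() {
  // forward_list is the new singly linked list.
  std::forward_list<int> fl;
  fl.push_front(3);
  fl.push_front(2);
  fl.push_front(1);   // fl now holds 1, 2, 3

  // One of the <random> engines: a Mersenne Twister seeded with 42.
  std::mt19937 gen(42);
  auto r = gen();     // a pseudo-random unsigned integer
}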
Concurrent Programming Improvements
Visual Studio 2010 introduces the Parallel Computing Platform, which helps you to write high-performance parallel code quickly while avoiding subtle concurrency bugs. This lets you dodge some of the classic problems relating to concurrency.
The Parallel Computing Platform has four major parts: the Concurrency Runtime (ConcRT), the Parallel Patterns Library (PPL), the Asynchronous Agents Library, and parallel debugging and profiling.
ConcRT is the lowest software layer that talks to the OS and arbitrates among multiple concurrent components competing for resources. Because it runs in user mode, it can reclaim resources when its cooperative blocking mechanisms are used. ConcRT is aware of locality and avoids switching tasks between different processors. It also employs Windows 7 User Mode Scheduling (UMS) so it can boost performance even when the cooperative blocking mechanism is not used.
PPL supplies the patterns for writing parallel code. If a computation can be decomposed into sub-computations that can be represented by functions or function objects, each of these sub-computations can be represented by a task. The task concept is much closer to the problem domain, unlike threads that take you away from the problem domain by making you think about the hardware, OS, critical sections and so on. A task can execute concurrently with the other tasks independent of what the other tasks are doing. For example, sorting two different halves of an array can be done by two different tasks concurrently.
PPL includes parallel classes (task_handle, task_group and structured_task_group), parallel algorithms (parallel_invoke, parallel_for and parallel_for_each), parallel containers (combinable, concurrent_queue, and concurrent_vector), and ConcRT-aware synchronization primitives (critical_section, event and reader_writer_lock), all of which treat tasks as a first-class concept. All components of PPL live in the concurrency namespace.
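As a small sketch of how a couple of those pieces fit together (the parallel_sum function is illustrative, not from the article): parallel_for_each iterates a vector concurrently while combinable gives each thread its own local accumulator, so no lock is needed, and combine merges the partial results at the end.

#include <ppl.h>
#include <functional>
#include <vector>
using namespace concurrency;

int parallel_sum(const std::vector<int>& v) {
  combinable<int> partial;
  parallel_for_each(v.begin(), v.end(), [&](int x) {
    partial.local() += x;    // thread-local accumulator, no contention
  });
  return partial.combine(std::plus<int>());
}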
Task groups allow you to execute a set of tasks and wait for them all to finish. So in the sort example, the tasks handling two halves of the array can make one task group. You are guaranteed that these two tasks are completed at the end of the wait member function call, as shown in the code example of a recursive quicksort written using parallel tasks and lambdas:
void quicksort(vector<int>::iterator first, vector<int>::iterator last) {
  if (last - first < 2) { return; }
  int pivot = *first;
  auto mid1 = partition(first, last, [=](int elem) { return elem < pivot; });
  auto mid2 = partition(mid1, last, [=](int elem) { return elem == pivot; });
  task_group g;
  g.run([=] { quicksort(first, mid1); });
  g.run([=] { quicksort(mid2, last); });
  g.wait();
}
This can be further improved by using a structured task group enabled by the parallel_invoke algorithm. It takes from two to 10 function objects and executes all of them in parallel using as many cores as ConcRT provides and waits for them to finish:
parallel_invoke(
  [=] { quicksort(first, mid1); },
  [=] { quicksort(mid2, last); }
);
There could be multiple subtasks created by each of these tasks. The mapping between tasks and execution threads (and ensuring that all the cores are optimally utilized) is managed by ConcRT. So decomposing your computation into as many tasks as possible will help take advantage of all the available cores.
Another useful parallel algorithm is parallel_for, which can be used to iterate over indices in a concurrent fashion:
parallel_for(first, last, functor);
parallel_for(first, last, step, functor);
This concurrently calls the function object with each index, starting with first and ending before last.
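A minimal usage sketch (the squares function is illustrative): filling a vector concurrently, where the iterations may run on any core and in any order.

#include <ppl.h>
#include <vector>
using namespace concurrency;

void squares(std::vector<int>& results) {
  parallel_for(0, static_cast<int>(results.size()), [&](int i) {
    results[i] = i * i;   // each index is handled independently
  });
}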
The Asynchronous Agents Library gives you a dataflow-based programming model where computations are dependent on the required data becoming available. The library is based on the concepts of agents, message blocks and message-passing functions. An agent is a component of an application that does certain computations and communicates asynchronously with other agents to solve a bigger computation problem. This communication between agents is achieved via message-passing functions and message blocks.
Agents have an observable lifecycle that goes through various stages. They are not meant to be used for the fine-grained parallelism achieved by using PPL tasks. Agents are built on the scheduling and resource management components of ConcRT and help you avoid the issues that arise from the use of shared memory in concurrent applications.
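The following is a rough sketch, not a definitive pattern, of a single agent that receives integers through an unbounded_buffer message block until a sentinel value arrives; the PrinterAgent class and the sentinel convention are assumptions made for illustration:

#include <agents.h>
#include <iostream>
using namespace concurrency;

class PrinterAgent : public agent {
public:
  explicit PrinterAgent(ISource<int>& source) : m_source(source) { }
protected:
  void run() {
    int value = receive(m_source);
    while (value != -1) {            // -1 is the sentinel that ends the run
      std::cout << value << std::endl;
      value = receive(m_source);
    }
    done();                          // move the agent to its completed state
  }
private:
  ISource<int>& m_source;
};

int main() {
  unbounded_buffer<int> buffer;      // message block connecting producer and agent
  PrinterAgent printer(buffer);
  printer.start();
  for (int i = 0; i < 5; ++i) {
    send(buffer, i);                 // message-passing function
  }
  send(buffer, -1);                  // tell the agent to finish
  agent::wait(&printer);
  return 0;
}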
You do not need to link against or redistribute any additional components to take advantage of these patterns. ConcRT, PPL and the Asynchronous Agents Library have been implemented within msvcr100.dll, msvcp100.dll and libcmt.lib/libcpmt.lib alongside the standard library. PPL and the Asynchronous Agents Library are mostly header-only implementations.
The Visual Studio debugger is now aware of ConcRT and makes it easy for you to debug concurrency issues—unlike Visual Studio 2008, which had no awareness of higher-level parallel concepts. Visual Studio 2010 has a concurrency profiler that allows you to visualize the behavior of parallel applications. The debugger has new windows that visualize the state of all tasks in an application and their call stacks. Figure 1 shows the Parallel Tasks and Parallel Stacks windows.
Figure 1 Parallel Stacks and Parallel Tasks Debug Windows
IntelliSense and Design-Time Productivity
A brand-new IntelliSense and browsing infrastructure is included in Visual Studio 2010. In addition to helping with scale and responsiveness on projects with large code bases, the infrastructure improvements have enabled some fresh design-time productivity features.
IntelliSense features like live error reporting and Quick Info tooltips are based on a new compiler front end, which parses the full translation unit to provide rich and accurate information about code semantics, even while the code files are being modified.
All of the code-browsing features, like class view and class hierarchy, now use the source code information stored in a SQL database that enables indexing and has a fixed memory footprint. Unlike previous releases, the Visual Studio 2010 IDE remains responsive, and you no longer have to wait while compilation units are reparsed in response to changes in a header file.
IntelliSense live error reporting (the familiar red squiggles) displays compiler-quality syntax and semantic errors during browsing and editing of code. Hovering the mouse over an error gives you the error message (see Figure 2). The error list window also shows errors from the file currently being viewed, as well as the IntelliSense errors from elsewhere in the compilation unit. All of this information is available to you without doing a build.
Figure 2 Live Error Reporting Showing IntelliSense Errors
In addition, a list of relevant include files is displayed in a dropdown while typing #include, and the list refines as you type.
The new Navigate To (Edit | Navigate To or Ctrl+comma) feature will help you be more productive with file or symbol search. This feature gives you real-time search results, based on substrings as you type, matching your input strings for symbols and files across any project (see Figure 3). This feature also works for C# and Visual Basic files and is extensible.
Figure 3 Using the Navigate To Feature
Call Hierarchy (invoked using Ctrl+K, Ctrl+T or from the right-click menu) lets you navigate to all functions called from a particular function, and to all functions that call a particular function. This is an improved version of the Call Browser feature that existed in previous versions of Visual Studio. The Call Hierarchy window is much better organized and provides both calls-from and calls-to trees for any function that appears in the same window.
Note that while all the code-browsing features are available for both pure C++ and C++/CLI, IntelliSense-related features like live error reporting and Quick Info will not be available for C++/CLI in the final release of Visual Studio 2010.
Other staple editor features are improved in this release, too. For example, the popular Find All References feature that is used to search for references to code elements (classes, class members, functions and so on) inside the entire solution is now more flexible. Search results can be further refined using a Resolve Results option from the right-click context menu.
Inactive code now retains semantic information by maintaining colorization (instead of becoming gray). Figure 4 shows how the inactive code is dimmed but still shows different colors to convey the semantic information.
Figure 4 Inactive Code Blocks Retain Colorization
In addition to the features described already, the general editor experience is enhanced in Visual Studio 2010. The new Windows Presentation Foundation (WPF)-based IDE has been redesigned to remove clutter and improve readability. Document windows such as the code editor and Design view can now float outside the main IDE window and can be displayed on multiple monitors. It is easier to zoom in and out in the code editor window using the Ctrl key and the mouse wheel. The IDE also has improved support for extensibility.
Build and Project Systems
Visual Studio 2010 also boasts substantial improvements in the build system and the project system for C++ projects.
The most important change is that MSBuild is now used to build C++ projects. MSBuild is an extensible, XML-based build orchestration engine that has been used for C# and Visual Basic projects in previous versions of Visual Studio. MSBuild is now the common Microsoft build system for all languages. It can be used both in the build lab and on individual developer machines.
C++ build processes are now defined in terms of MSBuild target and task files and give you a greater degree of customizability, control and transparency.
The C++ project type has a new extension: .vcxproj. Visual Studio will automatically upgrade old .vcproj files and solutions to the new format. There is also a command-line tool, vcupgrade.exe, to upgrade single projects from the command line.
In the past, you could only use the toolset (compiler, libraries and so on) provided with your current version of Visual Studio. You had to wait until you were ready to migrate to the new toolset before you could start using the new IDE. Visual Studio 2010 solves that problem by allowing you to target multiple toolset versions to build against. For example, you could target the Visual C++ 9.0 compiler and libraries while working in Visual Studio 2010. Figure 5 shows the native multi-targeting settings on the property page.
Figure 5 Targeting Multiple Platform Toolsets
Using MSBuild makes the C++ build system far more extensible. When the default build system is not sufficient to meet your needs, you can extend it by adding your own tool or any other build step. MSBuild uses tasks as reusable units of executable code to perform the build operations. You can create your own tasks and extend the build system by defining them in an XML file. MSBuild generates the tasks from these XML files on the fly.
Existing platforms and toolsets can be extended by adding .props and .targets files for additional steps into ImportBefore and ImportAfter folders. This is especially useful for providers of libraries and tools who would like to extend the existing build systems. You can also define your own platform toolset. Additionally, MSBuild provides better diagnostic information to make it easier for you to debug build problems, which also makes incremental builds more reliable. Plus, you can create build systems that are more closely tied to source control and the build lab and less dependent on developer machine configuration.
The project system that sits on top of the build system also takes advantage of the flexibility and extensibility provided by MSBuild. The project system understands the MSBuild processes and allows Visual Studio to transparently display information made available by MSBuild.
Customizations are visible and can be configured through the property pages. You can configure your project system to use your own platform (like the existing x86 or x64 platforms) or your own debugger. The property pages allow you to write and integrate components that dynamically update the value of properties that depend on context. The Visual Studio 2010 project system even allows you to write your own custom UI to read and write properties instead of using property pages.
Faster Compilation and Better Performance
In addition to the design-time experience improvements described so far, Visual Studio 2010 also improves the compilation speed, quality and performance for applications built with the Visual C++ compiler, as a result of multiple code generation enhancements to the compiler back end.
The performance of certain applications depends on the working set. The code size for the x64 architecture has been reduced in the range of 3 percent to 10 percent by making multiple optimizations in this release, resulting in a performance improvement for such applications.
Single Instruction Multiple Data (SIMD) code generation—which is important for game, audio, video and graphics developers—has been optimized for improved performance and code quality. Improvements include breaking false dependencies, vectorization of constant vector initializations, and better allocation of XMM registers to remove redundant loads, stores and moves. In addition, the _mm_set_*, _mm_setr_* and _mm_set1_* intrinsic families have been optimized.
For improved performance, applications should be built using Link Time Code Generation (LTCG) and Profile Guided Optimization (PGO).
Compilation speed on x64 platforms has been improved by making optimizations in x64 code generation. Compilation with LTCG, recommended for better optimization, usually takes longer than non-LTCG compilation, especially in large applications. In Visual Studio 2010, LTCG compilation has been improved by up to 30 percent. A dedicated thread for writing PDB files has been introduced in this release, so you will see link-time improvements when you use the /DEBUG switch.
PGO instrumentation runs have been made faster by adding support for no-lock versions of instrumented binaries. There is also a new POGO option, PogoSafeMode, that enables you to specify whether to use safe mode or fast mode when you optimize an application. Fast mode is the default behavior. Safe mode is thread-safe, but slower than fast mode.
The quality of compiler-generated code has been improved. There is now full support for Advanced Vector Extensions (AVX), which are very important for floating-point-intensive applications on AMD and Intel processors, via intrinsics and the /arch:AVX option. Floating-point computation is also more precise with the /fp:fast option.
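As a hedged sketch of what that support looks like in source code (the add8 function is illustrative, must be compiled with /arch:AVX, and must run on an AVX-capable processor):

#include <immintrin.h>

void add8(const float* a, const float* b, float* result) {
  __m256 va = _mm256_loadu_ps(a);      // load 8 floats
  __m256 vb = _mm256_loadu_ps(b);
  __m256 vr = _mm256_add_ps(va, vb);   // 8 additions in a single instruction
  _mm256_storeu_ps(result, vr);        // store 8 results
}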
Building Applications for Windows 7
Windows 7 introduced a number of exciting new technologies and features and added new APIs, and Visual Studio 2010 provides access to all the new Windows APIs. The Windows SDK components needed to write code for native Windows APIs are included in Visual Studio 2010. You can take advantage of innovations like Direct3D 11, DirectWrite, Direct2D and Windows Web Service APIs by using the SDK headers and libraries available in Visual Studio 2010.
In addition to making all the Windows APIs available to developers, this release of Visual Studio also makes it easier for you to write applications for Windows with the help of a beefed up MFC. You get access to substantial Windows 7 functionality through MFC libraries without having to write directly to native APIs. Your existing MFC applications will light up on Windows 7 just by recompiling. And your new applications can take full advantage of the new features.
MFC now includes improved integration with the Windows shell. Your application can now integrate much better with Windows Explorer by making use of the file handlers for preview, thumbnails and search that have been added in this release. These features are provided as options in the MFC Application Wizard, as shown in Figure 6. MFC will automatically generate the ATL DLL project that implements these handlers.
Figure 6 MFC Application Wizard with File Handler Options
One of the most noticeable user interface changes in Windows 7 is the new taskbar. MFC allows you to quickly take advantage of features like jump lists, tabbed thumbnails, thumbnail preview, progress bars, icon overlay and so on. Figure 7 shows thumbnail previews and tabbed thumbnails for a tabbed MDI MFC application.
Figure 7 Tabbed Thumbnail and Thumbnail Preview in an MFC Application
The ribbon UI now has a Windows 7-style ribbon, too, and your application can switch at any time during development between several Office-style ribbons and the Windows 7 ribbon by using the style dropdown shown in Figure 8.
Figure 8 Ribbon-Style Dropdown in an MFC Application
MFC enables your applications to become multi-touch aware and routes the appropriate messages for you to handle when the various touch events occur. Just registering for touch and gesture events will route those events to your application. MFC also makes applications high-DPI-aware by default so they adapt to high-DPI screens and do not look pixelated or fuzzy. MFC internally scales and changes fonts and other elements to make sure your UI continues to look sharp on high-DPI displays.
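A rough sketch of that registration in a view class; treat the exact override names as assumptions based on the CWnd touch support added in this release:

int CMyView::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
  if (CView::OnCreate(lpCreateStruct) == -1)
    return -1;
  RegisterTouchWindow();    // ask MFC to route touch messages to this window
  return 0;
}

BOOL CMyView::OnTouchInput(CPoint pt, int nInputNumber, int nInputsCount,
                           PTOUCHINPUT pInput)
{
  // pt is in client coordinates; nInputNumber identifies this contact
  // among nInputsCount simultaneous contacts.
  TRACE(_T("Touch %d of %d at (%d, %d)\n"),
        nInputNumber + 1, nInputsCount, pt.x, pt.y);
  return TRUE;              // handled
}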
In addition to the new Windows 7 features, some other Windows features that have existed since Windows Vista but were not included in previous releases of MFC have been included now. For example, Restart Manager is a useful feature introduced in Windows Vista that enables an application to perform an application save before terminating. The application can invoke this feature and then restore its state when restarting. You can now take full advantage of Restart Manager in your MFC application to handle crashes and restarts more elegantly. Simply add a line of code to enable restart and recovery in your existing application:
CMyApp::CMyApp() {
  m_dwRestartManagerSupportFlags = AFX_RESTART_MANAGER_SUPPORT_RESTART;
  // other lines of code ...
}
New MFC applications get this functionality automatically by using the MFC Application Wizard. The auto-save mechanism is available to applications that save documents, and the auto-save interval can be defined by the user. In the MFC Application Wizard, applications can choose restart support only, or restart with document recovery (applicable to Doc/View-type applications).
Another addition is the Windows Task dialog, which is an improved type of message box (see Figure 9). MFC now has a wrapper for the Task dialog that you can use in your applications.
Figure 9 Task Dialog
MFC Class Wizard Is Back
Not only has new functionality been added in the MFC library, this release also makes it easier to work with MFC in the Visual Studio IDE. One of the most commonly requested features, the MFC Class Wizard (shown in Figure 10), has been brought back and improved. You can now add classes, event handlers and other elements to your application using the MFC Class Wizard.
Figure 10 MFC Class Wizard
Another addition is the Ribbon Designer, which allows you to graphically design your ribbon UI (instead of defining it in code as in Visual Studio 2008) and store it as an XML resource. This designer is obviously useful for creating new applications, but existing applications can also take advantage of the designer to update their UIs. The XML definition can be created just by adding a line of code temporarily to the existing code definition of the ribbon UI:
m_wndRibbonBar.SaveToXMLFile(L"YourRibbon.mfcribbon-ms");
The resulting XML file can then be consumed as a resource file and further modifications can be made using the Ribbon Designer.
Wrapping Up
Visual Studio 2010 is a major release in the evolution of Visual C++ and makes the lives of developers easier in many ways. I have barely scratched the surface of the various C++-related improvements in this article. For further discussion of the different features, please refer to the MSDN documentation and to the Visual C++ team's blog at blogs.msdn.com/vcblog, which was also used as the basis for some of the sections in this article.
Sumit Kumar is a program manager in the Visual C++ IDE team. He holds an MS degree in Computer Science from the University of Texas at Dallas.
Thanks to the following technical experts: Stephan T. Lavavej, Marian Luparu and Tarek Madkour