.NET Framework Performance Practices
Performance Best Practices at a Glance
J.D. Meier, Srinath Vasireddy, Ashish Babbar, Rico Mariani, and Alex Mackman
Summary
This document summarizes the solutions presented in Improving .NET Application Performance and Scalability. It provides links to the detailed material in the guide so that you can easily locate the information you need to implement the solutions that are listed.
Contents
- Architecture and Design Solutions
- Development Solutions
- Testing Solutions
Architecture and Design Solutions
If you are an architect, this guide provides the following solutions to help you design Microsoft® .NET applications to meet your performance objectives:
- How to balance performance with quality-of-service (QoS) requirements
Do not consider performance in isolation. Balance your performance requirements with other QoS attributes such as security and maintainability.
For more information, see Chapter 3, "Design Guidelines for Application Performance."
- How to identify and evaluate performance issues
Use performance modeling early in the design process to help evaluate your design decisions against your objectives before you commit time and resources. Identify your performance objectives, your workload, and your budgets. Budgets are your constraints. These include maximum execution time and resource utilization such as CPU, memory, disk I/O, and network I/O.
For more information about how to identify key performance scenarios and about how to create a performance model for your application, see Chapter 2, "Performance Modeling" and Chapter 3, "Design Guidelines for Application Performance."
- How to perform architecture and design reviews
Review the design of your application in relation to your target deployment environment, any constraints that might be imposed, and your defined performance goals. Use the categories that are defined by the performance and scalability frame promoted by this guide to help partition the analysis of your application and to analyze the approach taken for each area. The categories represent key areas that frequently affect application performance and scalability. Use the categories to organize and prioritize areas for review.
For more information, see Chapter 4, "Architecture and Design Review of a .NET Application for Performance and Scalability."
- How to choose a deployment topology
When you design your application architecture, you must take into account corporate policies and procedures together with the infrastructure that you plan to deploy your application on. If the target environment is rigid, your application design must reflect the restrictions that exist in that rigid environment. Your application design must also take into account QoS attributes such as security and maintainability. Sometimes you must make design tradeoffs because of protocol restrictions and network topologies.
Identify the requirements and constraints that exist between application architecture and infrastructure architecture early in the development process. This helps you choose appropriate architectures and helps you resolve conflicts between application and infrastructure architecture early in the process.
Use a layered design that includes presentation, business, and data access logic. A well-layered design generally makes it easier to scale your application and improves maintainability. A well-layered design also creates predictable points in your application where it makes sense (or not) to make remote calls.
To avoid remote calls and additional network latency, stay in the same process where possible and adopt a non-distributed architecture, where layers are located inside your Web application process on the Web server.
If you do need a distributed architecture, consider the implications of remote communication when you design your interfaces. For example, you might need a distributed architecture because security policy prevents you from running business logic on your Web server, or you might need a distributed architecture because you need to share business logic with other applications. Try to reduce round trips and the amount of traffic that you send over the network.
For more information, see "Deployment Considerations" in Chapter 3, "Design Guidelines for Application Performance."
- How to design for required performance and scalability
Use tried and tested design principles. Focus on the critical areas where the correct approach is essential and where mistakes are often made. Use the categories described by the performance frame that is defined in this guide to help organize and prioritize performance issues. Categories include data structures and algorithms, communication, concurrency, resource management, coupling and cohesion, and caching and state management.
- How to pass data across the tiers
Prioritize performance, maintenance, and ease of development when you select an approach. Custom classes allow you to implement efficient serialization. Use structures if you can to avoid implementing your own serialization. You can use XML for interoperability and flexibility. However, XML is verbose and can require considerable parsing effort. Applications that use XML may pass large amounts of data over the network. Use a DataReader object to render data as quickly as possible, but do not pass DataReader objects between layers because they require an open connection. The DataSet option provides great flexibility; you can use it to cache data across requests. DataSet objects are expensive to create and serialize. Typed DataSet objects permit clients to access fields by name and to avoid the collection lookup overhead.
For more information, see "Design Considerations" in Chapter 12, "Improving ADO.NET Performance."
- How to choose between Web services, remoting, and Enterprise Services
Web services are the preferred communication mechanism for crossing application boundaries, including platform, deployment, and trust boundaries. The Microsoft product team recommendations for working with ASP.NET Web services, Enterprise Services, and .NET remoting are summarized in the following list:
- Build services by using ASP.NET Web services.
- Enhance your ASP.NET Web services with Web Services Enhancements (WSE) if you need the WSE feature set and if you can accept the support policy.
- Use object technology, such as Enterprise Services or .NET remoting, within the implementation of a service.
- Use Enterprise Services inside your service boundaries when the following conditions are true:
- You need the Enterprise Services feature set. This feature set includes object pooling, declarative transactions, distributed transactions, role-based security, and queued components.
- You are communicating between components on a local server, and you have performance issues with ASP.NET Web services or WSE.
- Use .NET remoting inside your service boundaries when the following conditions are true:
- You need in-process, cross-application domain communication. Remoting has been optimized to pass calls between application domains extremely efficiently.
- You need to support custom wire protocols. Understand, however, that this customization will not port cleanly to future Microsoft implementations.
When you work with ASP.NET Web services, Enterprise Services, or .NET remoting, you should consider the following caveats:
- If you use ASP.NET Web services, avoid using low-level extensibility features such as the HttpContext object. If you do use the HttpContext object, abstract your access to it.
- If you use .NET remoting, avoid or abstract using low-level extensibility such as .NET remoting sinks and custom channels.
- If you use Enterprise Services, avoid passing object references inside Enterprise Services. Also, do not use COM+ APIs. Instead, use types from the System.EnterpriseServices namespace.
For more information, see "Prescriptive Guidance for Choosing Web Services, Enterprise Services, and .NET Remoting" in Chapter 11, "Improving Remoting Performance."
- How to design remote interfaces
When you create interfaces that are designed for remote access, consider the level of chatty communication, the intended unit of work, and the need to maintain state on either side of the conversation.
As a general rule, you should avoid property-based interfaces. You should also avoid any chatty interface that requires the client to call multiple methods to perform a single logical unit of work. Provide sufficiently coarse-grained methods. To reduce network round trips, pass data through parameters as described by the data transfer object pattern instead of forcing property access. Also try to reduce the amount of data that is sent over the remote method calls to reduce serialization overhead and network latency.
If you have existing objects that expose chatty interfaces, you can use a data facade pattern to provide a coarse-grained wrapper. The wrapper object would have a coarse-grained interface that encapsulates and coordinates the functionality of one or more objects that have not been designed for efficient remote access.
Alternatively, consider the remote transfer object pattern where you wrap and return the data you need. Instead of making a remote call to fetch individual data items, you fetch a data object by value in a single remote call. You then operate locally against the locally cached data. In some scenarios where you may need to ultimately update the data on the server, the wrapper object exposes a single method that you call to send the data back to the server.
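To make the contrast concrete, the following minimal C# sketch shows a chatty interface alongside a chunky interface that returns a serializable data transfer object in a single call. The interface, type, and member names are illustrative, not taken from the guide.

```csharp
using System;

// Hypothetical coarse-grained data transfer object; the fields are illustrative.
[Serializable]
public class CustomerData
{
    public string Name;
    public string Address;
    public string Email;
}

// Chatty interface: three round trips to complete one logical unit of work.
public interface ICustomerChatty
{
    string GetName(int customerId);
    string GetAddress(int customerId);
    string GetEmail(int customerId);
}

// Chunky interface: one round trip returns the data transfer object by value.
public interface ICustomerChunky
{
    CustomerData GetCustomerData(int customerId);
}
```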
For more information, see "Minimize the Amount of Data Sent Across the Wire" in the "Communication" section of Chapter 3, "Design Guidelines for Application Performance."
- How to choose between service orientation and object orientation
When you are designing distributed applications, services are the preferred approach. While object-orientation provides a pure view of what a system should look like and is good for producing logical models, a pure object-based approach often does not take into account real-world aspects such as physical distribution, trust boundaries, and network communication. A pure object-based approach also does not take into account nonfunctional requirements such as performance and security.
Table 1 summarizes some key differences between object orientation and service orientation.
Table 1: Object Orientation vs. Service Orientation
| Object orientation | Service orientation |
| --- | --- |
| Assumes homogeneous platform and execution environment. | Assumes heterogeneous platform and execution environment. |
| Share types, not schemas. | Share schemas, not types. |
| Assumes cheap, transparent communication. | Assumes variable cost, explicit communication. |
| Objects are linked: object identity and lifetime are maintained by the infrastructure. | Services are autonomous: security and failure isolation are a must. |
| Typically requires synchronized deployment of both client and server. | Allows continuous separate deployment of client and server. |
| Is easy to conceptualize and thus provides a natural path to follow. | Builds on ideas from component software and distributed objects. Dominant theme is to manage/reduce sharing between services. |
| Provides no explicit guidelines for state management and ownership. | Owns and maintains state or uses reference state. |
| Assumes a predictable sequence, timeframe, and outcome of invocations. | Assumes message-oriented, potentially asynchronous and long-running communications. |
| Goal is to transparently use functions and types remotely. | Goal is to provide inter-service isolation and wire interoperability based on standards. |
Common application boundaries include platform, deployment, trust, and evolution. Evolution refers to whether or not you develop and upgrade applications together. When you evaluate architecture and design decisions around your application boundaries, consider the following:
- Objects and remote procedure calls (RPC) are appropriate within boundaries.
- Services are appropriate across and within boundaries.
For more information about when to choose Web services, .NET remoting, or Enterprise Services for distributed communication in .NET applications, see "Prescriptive Guidance for Choosing Web Services, Enterprise Services, and .NET Remoting" in Chapter 11, "Improving Remoting Performance."
Development Solutions
If you are a developer, this guide provides the following solutions:
Improving Managed Code Performance
- How to conduct performance reviews of managed code
Use analysis tools such as FxCop.exe to analyze binary assemblies and to ensure that they conform to the Microsoft .NET Framework design guidelines. Use Chapter 13, "Code Review: .NET Application Performance" to evaluate specific features including garbage collection overheads, threading, and asynchronous processing. You can also use Chapter 13 to identify and prevent common performance mistakes.
Use the CLR Profiler tool to look inside the managed heap to analyze problems that include excessive garbage collection activity and memory leaks. For more information, see "How To: Use CLR Profiler" in the "How To" section of this guide.
- How to design efficient types
Should your classes be thread safe? What performance issues are associated with using properties? What are the performance implications of supporting inheritance? For answers to these and other class design-related questions, see "Class Design Considerations" in Chapter 5, "Improving Managed Code Performance."
- How to manage memory efficiently
Write code to help the garbage collector do its job efficiently. Minimize hidden allocations, and avoid promoting short-lived objects, preallocating memory, chunking memory, and forcing garbage collections. Understand how pinning memory can fragment the managed heap.
Identify and analyze the allocation profile of your application by using CLR Profiler.
For more information, see "Garbage Collection Guidelines" in Chapter 5, "Improving Managed Code Performance."
- How to use multithreading in .NET applications
Minimize thread creation, and use the self-tuning thread pool for multithreaded work. Avoid creating threads on a per-request basis. Also avoid using Thread.Abort or Thread.Suspend. For information about how to use threads most efficiently, see "Threading Guidelines" in Chapter 5, "Improving Managed Code Performance." For information about how to efficiently synchronize multithreaded activity, see "Locking and Synchronization Guidelines," also in Chapter 5.
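As a minimal illustration, the following C# sketch queues work to the self-tuning thread pool instead of creating a new thread for each request. The method names and work item are hypothetical.

```csharp
using System;
using System.Threading;

public class WorkItemExample
{
    public static void QueueWork()
    {
        // Use the self-tuning thread pool rather than new Thread(...) per request.
        ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessOrder), "order-42");
    }

    // Hypothetical work item; runs on a thread pool thread.
    private static void ProcessOrder(object state)
    {
        string orderId = (string)state;
        Console.WriteLine("Processing " + orderId);
        // ... perform the background work here ...
    }
}
```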
Make sure that you appropriately tune the thread pool for ASP.NET applications and for Web services. For more information, see "How to tune the ASP.NET thread pool" later in this document.
- How to use asynchronous calls
Asynchronous calls may benefit client-side applications where you need to maintain user interface responsiveness. Asynchronous calls may also be appropriate on the server, particularly for I/O bound operations. However, you should avoid asynchronous calls that do not add parallelism and that block the calling thread immediately after initiating the asynchronous call. In these situations, there is no benefit to making asynchronous calls.
For more information about making asynchronous calls, see "Asynchronous Guidelines" in Chapter 5, "Improving Managed Code Performance."
- How to clean up resources
Release resources as soon as you have finished with them. Use finally blocks or the C# using statement to make sure that resources are released even if an exception occurs. Make sure that you call Dispose (or Close) on any disposable object that implements the IDisposable interface. Use finalizers on classes that hold on to unmanaged resources across client calls. Use the Dispose pattern to help ensure that you implement Dispose functionality and finalizers (if they are required) correctly and efficiently.
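The following C# sketch shows the using statement ensuring that Dispose is called on disposable objects even if an exception occurs. The connection string and query are illustrative.

```csharp
using System;
using System.Data.SqlClient;

public class CleanupExample
{
    public static void ReadData(string connectionString)
    {
        // The using statement guarantees Dispose is called even if an exception occurs.
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand("SELECT Name FROM Products", connection);
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            } // reader is disposed (closed) here
        } // connection is disposed (closed) here
    }
}
```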
For more information, see "Finalize and Dispose Guidelines" and "Dispose Pattern" in Chapter 5, "Improving Managed Code Performance."
- How to avoid unnecessary boxing
Excessive boxing can lead to garbage collection and performance issues. Avoid treating value types as reference types where possible. Consider using arrays or custom collection classes to hold value types. To identify boxing, examine your Microsoft intermediate language (MSIL) code and search for the box and unbox instructions.
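For illustration, the following C# sketch contrasts an ArrayList, which boxes each value type it stores, with a typed array that holds the values directly.

```csharp
using System;
using System.Collections;

public class BoxingExample
{
    public static void Demo()
    {
        // Each Add boxes the int; each cast back unboxes it.
        ArrayList list = new ArrayList();
        for (int i = 0; i < 1000; i++)
        {
            list.Add(i);              // box
        }
        int first = (int)list[0];     // unbox

        // A typed array holds the values directly, with no boxing.
        int[] values = new int[1000];
        for (int i = 0; i < 1000; i++)
        {
            values[i] = i;
        }
        Console.WriteLine(first + values[999]);
    }
}
```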
For more information, see "Boxing and Unboxing Guidelines" in Chapter 5, "Improving Managed Code Performance."
- How to handle exceptions
Exceptions can be expensive. You should not use exceptions for regular application logic. However, use structured exception handling to build robust code, and use exceptions instead of error codes where possible. While exceptions do carry a performance penalty, they are more expressive and less error prone than error codes.
Write code that avoids unnecessary exceptions. Use finally blocks to guarantee resources are cleaned up when exceptions occur. For example, close your database connections in a finally block. You do not need a catch block with a finally block. Finally blocks that are not related to exceptions are inexpensive.
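As a minimal sketch, the following C# fragment uses a finally block, with no catch block, to guarantee that a database connection is closed even if the command throws. The connection string and command text are illustrative.

```csharp
using System.Data.SqlClient;

public class FinallyExample
{
    public static void UpdateData(string connectionString)
    {
        SqlConnection connection = new SqlConnection(connectionString);
        try
        {
            connection.Open();
            SqlCommand command = new SqlCommand(
                "UPDATE Products SET Discontinued = 0", connection);
            command.ExecuteNonQuery();
        }
        finally
        {
            // Runs whether or not an exception is thrown; no catch block is required.
            connection.Close();
        }
    }
}
```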
For more information, see "Exception Management" in Chapter 5, "Improving Managed Code Performance."
- How to work with strings efficiently
Excessive string concatenation results in many unnecessary allocations that create extra work for the garbage collector. Use StringBuilder when you need to create complex string manipulations and when you need to concatenate strings multiple times. If you know the number of appends and concatenate strings in a single statement or operation, prefer the + operator. Use Response.Write in ASP.NET applications to benefit from string buffering when a concatenated string is to be displayed on a Web page.
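The following C# sketch contrasts repeated concatenation in a loop with a StringBuilder that appends into a buffer. The method names are illustrative.

```csharp
using System.Text;

public class StringExample
{
    // Concatenation in a loop creates a new string on every iteration.
    public static string BuildSlow(string[] items)
    {
        string result = string.Empty;
        foreach (string item in items)
        {
            result += item;          // allocates a new string each time
        }
        return result;
    }

    // StringBuilder appends into an internal buffer and allocates far less.
    public static string BuildFast(string[] items)
    {
        StringBuilder builder = new StringBuilder();
        foreach (string item in items)
        {
            builder.Append(item);
        }
        return builder.ToString();
    }
}
```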
For more information, see "String Operations" in Chapter 5, "Improving Managed Code Performance."
- How to choose between arrays and collections
Arrays are the fastest of all collection types, so unless you need special functionality such as dynamic extension of the collection, sorting, or searching, you should use arrays. If you need a collection type, choose the most appropriate type based on your functionality requirements to avoid performance penalties. A brief usage sketch follows the list below.
- Use ArrayList to store custom object types and particularly when the data changes frequently and you perform frequent insert and delete operations. Avoid using ArrayList for storing strings.
- Use a StringCollection to store strings.
- Use a Hashtable to store a large number of records and to store data that may or may not change frequently. Use Hashtable for frequently queried data such as product catalogs where a product ID is the key.
- Use a HybridDictionary to store frequently queried data when you expect the number of records to be low most of the time with occasional increases in size.
- Use a ListDictionary to store small amounts of data (fewer than 10 items).
- Use a NameValueCollection to store strings of key-value pairs in a presorted order. Use this type for data that changes frequently where you need to insert and delete items regularly and where you need to cache items for fast retrieval.
- Use a Queue when you need to access data sequentially, in the order in which it was added (first in, first out).
- Use a Stack in scenarios where you need to process items in a last-in, first-out manner.
- Use a SortedList for fast object retrieval using an index or key. However, avoid using a SortedList for large data changes because the cost of inserting the large amount of data is high. For large data changes, use an ArrayList and then sort it by calling the Sort method.
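The following brief C# sketch shows a few of these collection types used for the scenarios described above. The keys and values are illustrative.

```csharp
using System;
using System.Collections;
using System.Collections.Specialized;

public class CollectionExample
{
    public static void Demo()
    {
        // Hashtable: frequently queried data keyed by product ID.
        Hashtable catalog = new Hashtable();
        catalog["PROD-001"] = "Widget";
        catalog["PROD-002"] = "Gadget";
        string name = (string)catalog["PROD-001"];

        // ListDictionary: small amounts of data (fewer than 10 items).
        ListDictionary settings = new ListDictionary();
        settings["Timeout"] = "30";

        // StringCollection: strings only, no casting of custom types.
        StringCollection regions = new StringCollection();
        regions.Add("North");
        regions.Add("South");

        Console.WriteLine(name + " / " + settings["Timeout"] + " / " + regions[0]);
    }
}
```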
For more information, see "Arrays" and "Collection Guidelines" in Chapter 5, "Improving Managed Code Performance."
- How to improve serialization performance
Reduce the amount of data that is serialized by using the XmlIgnore or NonSerialized attributes. XmlIgnore applies to XML serialization that is performed by the XmlSerializer. The XmlSerializer is used by Web services. The NonSerialized attribute applies to .NET Framework serialization used in conjunction with the BinaryFormatter and SoapFormatter. The BinaryFormatter produces the most compact data stream, although for interoperability reasons you often need to use XML or SOAP serialization.
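As an illustration, the following C# sketch marks fields so that they are excluded from XML serialization and from .NET Framework serialization respectively. The type and field names are hypothetical.

```csharp
using System;
using System.Xml.Serialization;

// Illustrative type; the field names are hypothetical.
[Serializable]
public class OrderInfo
{
    public int OrderId;
    public string CustomerName;

    // Excluded from XML serialization (XmlSerializer, used by Web services).
    [XmlIgnore]
    public string CachedDisplayText;

    // Excluded from .NET Framework serialization (BinaryFormatter, SoapFormatter).
    [NonSerialized]
    public IntPtr NativeHandle;
}
```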
You can also implement ISerializable to explicitly control serialization and to determine the exact fields to be serialized from a type. However, using ISerializable to explicitly control serialization is not recommended because it prevents you from using new and enhanced formatters provided by future versions of the .NET Framework.
If versioning is a key consideration for you, consider using a SerializationInfoEnumerator to enumerate through the set of serialized fields before you try to deserialize them.
To improve DataSet serialization, you can use column name aliasing, you can avoid serializing both the original and the updated data values, and you can reduce the number of DataTable instances that you serialize.
For more information, see "How To: Improve Serialization Performance" in the "How To" section of this guide.
- How to improve code access security performance
Code access security ensures that your code and the code that calls your code are authorized to perform specific privileged operations and to access privileged resources like the file system, the registry, the network, databases, and other resources. The permission asserts and permission demands in the code you write and call directly affect the number and the cost of the security stack walks that you need.
For more information, see "Code Access Security" in Chapter 5, "Improving Managed Code Performance."
- How to reduce working set size
A smaller working set produces better system performance. Using a few larger assemblies rather than many smaller assemblies helps reduce working set size. Using the Native Image Generator (Ngen.exe) to precompile code may also help. For more information, see "Working Set Considerations" in Chapter 5, "Improving Managed Code Performance."
- How to develop SMP friendly code
To write managed code that works well with symmetric multiprocessor (SMP) servers, avoid contentious locks and do not create lots of threads. Instead, favor the ASP.NET thread pool and allow it to decide the number of threads to release.
If you run your application on a multiprocessor computer, use the server GC instead of the workstation GC. The server GC is optimized for throughput, memory consumption, and multiprocessor scalability. ASP.NET automatically loads the server GC. If you do not use ASP.NET, you have to load the server GC programmatically. The next version of the .NET Framework provides a configurable switch.
For more information, see "Server GC vs. Workstation GC" in Chapter 5, "Improving Managed Code Performance."
- How to time managed code in nanoseconds
Use the Microsoft Win32® functions QueryPerformanceCounter and QueryPerformanceFrequency to measure performance. To create a managed wrapper for these functions, see "How To: Time Managed Code Using QueryPerformanceCounter and QueryPerformanceFrequency" in the "How To" section of this guide.
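The following minimal C# wrapper is one possible sketch of this approach; the referenced How To article contains the recommended implementation.

```csharp
using System;
using System.Runtime.InteropServices;

// Minimal wrapper around the Win32 high-resolution timer functions.
public class HighResTimer
{
    [DllImport("kernel32.dll")]
    private static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    private static extern bool QueryPerformanceFrequency(out long frequency);

    private long start;
    private long stop;

    public void Start()
    {
        QueryPerformanceCounter(out start);
    }

    public void Stop()
    {
        QueryPerformanceCounter(out stop);
    }

    // Elapsed time in seconds; multiply by 1e9 for nanoseconds.
    public double ElapsedSeconds
    {
        get
        {
            long frequency;
            QueryPerformanceFrequency(out frequency);
            return (double)(stop - start) / (double)frequency;
        }
    }
}
```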
- How to instrument managed code
Instrument your application to measure your processing steps for your key performance scenarios. You may need to measure resource utilization, latency, and throughput. Instrumentation helps you identify where bottlenecks exist in your application. Make your instrumentation configurable; be able to control event types and to switch your instrumentation off completely. Options for instrumentation include the following:
- Event Tracing for Windows (ETW). Event Tracing for Windows is the recommended approach because it is the least expensive to use in terms of execution time and resource utilization.
- Trace and Debug classes. The Trace class lets you instrument your release and debug code. You can use the Debug class to output debug information and to check logic for assertions in code. These classes are in the System.Diagnostics namespace. A brief sketch follows this list.
- Custom performance counters. You can use custom counters to time key scenarios within your application. For example, you might use a custom counter to time how long it takes to place an order. For implementation details, see "How To: Use Custom Performance Counters from ASP.NET" in the "How To" section of this guide.
- Windows Management Instrumentation (WMI). WMI is the core instrumentation technology built into the Microsoft Windows® operating system. Logging to a WMI sink is more expensive compared to other sinks.
- Enterprise Instrumentation Framework (EIF). EIF provides a framework for instrumentation. It provides a unified API. You can configure the events that you generate, and you can configure the way the events are logged. For example, you can configure the events to be logged in the Windows event log or in Microsoft SQL Server™. The levels of granularity of tracing are also configurable. EIF is available as a free download at http://www.microsoft.com/downloads/details.aspx?FamilyId=80DF04BC-267D-4919-8BB4-1F84B7EB1368&displaylang=en.
For more information, see "How To: Use EIF" in the "How To" section of this guide.
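As a brief illustration of configurable tracing with the Trace class, consider the following C# sketch; the switch name and messages are illustrative, and the switch level can be set in the application configuration file without recompiling.

```csharp
using System.Diagnostics;

public class InstrumentationExample
{
    // The switch can be configured in the application configuration file,
    // so tracing can be turned down or off without recompiling.
    private static TraceSwitch traceSwitch =
        new TraceSwitch("OrderTrace", "Order processing trace switch");

    public static void PlaceOrder(string orderId)
    {
        Trace.WriteLineIf(traceSwitch.TraceInfo, "Placing order " + orderId);
        // ... order processing ...
        Trace.WriteLineIf(traceSwitch.TraceVerbose, "Order " + orderId + " completed");
    }
}
```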
For more information about instrumentation, see Chapter 15, "Measuring .NET Application Performance."
- How to decide when to use the Native Image Generator (Ngen.exe)
The Native Image Generator (Ngen.exe) allows you to run the just-in-time (JIT) compiler on your assembly's MSIL to generate native machine code that is cached to disk. Ngen.exe for the .NET Framework version 1.0 and version 1.1 was primarily designed for the common language runtime (CLR), where it has produced significant performance improvements. To identify whether or not Ngen.exe provides any benefit for your particular application, you need to measure performance with and without using Ngen.exe. Before you use Ngen.exe, consider the following:
- Ngen.exe is most appropriate for any scenario that benefits from better page sharing and working set reduction. For example, it is most appropriate for client scenarios that require fast startup to be responsive, for shared libraries, and for multiple-instance applications.
- Ngen.exe is not recommended for ASP.NET version 1.0 and 1.1 because the assemblies that Ngen.exe produces cannot be shared between application domains. At the time of this writing, the .NET Framework 2.0 (code-named "Whidbey") includes a version of Ngen.exe that produces images that can be shared between application domains.
If you do decide to use Ngen.exe:
- Measure your performance with and without Ngen.exe.
- Make sure that you regenerate your native image when you ship new versions of your assemblies for bug fixes or for updates, or when something your assembly depends on changes.
For more information, see "Ngen.exe Explained" and "Ngen.exe Guidelines" in Chapter 5, "Improving Managed Code Performance."
Improving Data Access Performance
The solutions in this section show how to improve ADO.NET data access performance. The majority of the solutions are detailed in Chapter 12, "Improving ADO.NET Performance."
- How to improve data access performance
Your goal is to minimize processing on the server and at the client and to minimize the amount of data passed over the network. Use database connection pooling to share connections across requests. Keep transactions as short as possible to minimize lock durations and to improve concurrency. However, do not make transactions so short that access to the database becomes too chatty.
For more information, see Chapter 12, "Improving ADO.NET Performance," and Chapter 14, "Improving SQL Server Performance."
- How to page records
When you deliver large result sets to the user, deliver them one page at a time and allow the user to page through them. When you choose a paging solution, considerations include server-side processing, data volumes and network bandwidth restrictions, and client-side processing.
The built-in paging solutions provided by the ADO.NET DataAdapter and DataGrid are only appropriate for small amounts of data. For larger result sets, you can use the SQL Server SELECT TOP statement to restrict the size of the result set. For tables that do not have a strictly increasing key column, you can use a nested SELECT TOP query. You can also use temporary tables when data is retrieved from complex queries and is too large to be transmitted and stored on the Web layer, and when the data is application-wide and applicable to all users.
For general data paging design considerations, see "Paging Records" in Chapter 12, "Improving ADO.NET Performance." For paging solution implementation details, see "How To: Page Records in .NET Applications" in the "How To" section of this guide.
- How to serialize DataSets efficiently
Default DataSet serialization is not the most efficient. For information about how to improve this, see "How To: Improve Serialization Performance" in the "How To" section of this guide. For alternative approaches to passing data across application tiers, see Chapter 12, "Improving ADO.NET Performance."
- How to manipulate BLOBs
Avoid moving binary large object (BLOB) data repeatedly, and consider storing pointers in the database to BLOB files that are maintained on the file system. Use chunking to reduce the load on the server, particularly where network bandwidth is limited. Use the SequentialAccess command behavior to stream BLOB data. For Microsoft SQL Server 2000, use the READTEXT and UPDATETEXT functions to read and write BLOBs. For Oracle, use the OracleLob class.
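The following C# sketch streams a BLOB from SQL Server in chunks by using the SequentialAccess command behavior and GetBytes. The table, column, and file names are illustrative.

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;

public class BlobExample
{
    // Streams a BLOB column to a file in chunks instead of loading it all at once.
    public static void ReadBlob(string connectionString, int photoId, string path)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand(
                "SELECT Photo FROM ProductPhotos WHERE PhotoID = @id", connection);
            command.Parameters.Add("@id", SqlDbType.Int).Value = photoId;
            connection.Open();

            using (SqlDataReader reader =
                command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                if (reader.Read())
                {
                    byte[] buffer = new byte[8192];
                    long offset = 0;
                    long bytesRead;
                    using (FileStream file = new FileStream(path, FileMode.Create))
                    {
                        // Read the BLOB in 8 KB chunks.
                        while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                        {
                            file.Write(buffer, 0, (int)bytesRead);
                            offset += bytesRead;
                        }
                    }
                }
            }
        }
    }
}
```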
For more information, see "Binary Large Objects" in Chapter 12, "Improving ADO.NET Performance."
- How to choose between dynamic SQL and stored procedures
Stored procedures generally provide improved performance in comparison to dynamic SQL statements. From a security standpoint, you need to consider the potential for SQL injection and authorization. Both approaches are susceptible to SQL injection if they are poorly written. Database authorization is often easier to manage when you use stored procedures because you can restrict your application's service accounts to only run specific stored procedures and to prevent them from accessing tables directly.
If you use stored procedures, follow these guidelines:
- Try to avoid recompiles.
- Use the Parameters collection to help prevent SQL injection.
- Avoid building dynamic SQL within the stored procedure.
- Avoid mixing business logic in your stored procedures.
If you use dynamic SQL, follow these guidelines:
- Use the Parameters collection to help prevent SQL injection, as shown in the sketch after this list.
- Batch statements if possible.
- Consider maintainability. For example, you have to decide if it is easier for you to update resource files or to update compiled statements in code.
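The following C# sketch shows dynamic SQL that uses the Parameters collection rather than concatenating user input into the command text. The query, table, and column names are illustrative.

```csharp
using System.Data;
using System.Data.SqlClient;

public class ParameterExample
{
    // Parameterized dynamic SQL.
    public static int GetProductCount(string connectionString, string category)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            SqlCommand command = new SqlCommand(
                "SELECT COUNT(*) FROM Products WHERE Category = @category", connection);
            // The parameter value is never concatenated into the SQL text,
            // which helps prevent SQL injection.
            command.Parameters.Add("@category", SqlDbType.NVarChar, 50).Value = category;
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```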
For more information, see Chapter 12, "Improving ADO.NET Performance."
- How to choose between a DataSet and a DataReader
Do not use a DataSet object for scenarios where you can use a DataReader object. Use a DataReader if you need forward-only, read-only access to data and if you do not need to cache the data. Do not pass DataReader objects across physical server boundaries because they require open connections. Use the DataSet when you need the added flexibility or when you need to cache data between requests.
For more information, see "DataSet vs. DataReader" in Chapter 12, "Improving ADO.NET Performance."
- How to perform transactions in .NET
You can perform transactions using T-SQL commands, ADO.NET, or Enterprise Services. T-SQL transactions are most efficient for server-controlled transactions on a single data store. If you need to have multiple calls to a single data store participate in a transaction, use ADO.NET manual transactions. Use Enterprise Services declarative transactions for transactions that span multiple data stores.
When you choose a transaction approach, you also have to consider ease of development. Although Enterprise Services transactions are not as quick as manual transactions, they are easier to develop and lead to middle tier solutions that are flexible and easy to maintain.
Regardless of your choice of transaction type, keep transactions as short as possible, consider your isolation level, and keep read operations to a minimum inside a transaction.
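As an illustration of an ADO.NET manual transaction against a single data store, consider the following C# sketch; the table and column names are illustrative.

```csharp
using System.Data;
using System.Data.SqlClient;

public class TransactionExample
{
    // Two commands against a single data store committed or rolled back together.
    public static void TransferFunds(string connectionString, int fromId, int toId, decimal amount)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            SqlTransaction transaction = connection.BeginTransaction();
            try
            {
                SqlCommand debit = new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance - @amount WHERE AccountID = @id",
                    connection, transaction);
                debit.Parameters.Add("@amount", SqlDbType.Money).Value = amount;
                debit.Parameters.Add("@id", SqlDbType.Int).Value = fromId;
                debit.ExecuteNonQuery();

                SqlCommand credit = new SqlCommand(
                    "UPDATE Accounts SET Balance = Balance + @amount WHERE AccountID = @id",
                    connection, transaction);
                credit.Parameters.Add("@amount", SqlDbType.Money).Value = amount;
                credit.Parameters.Add("@id", SqlDbType.Int).Value = toId;
                credit.ExecuteNonQuery();

                transaction.Commit();
            }
            catch
            {
                // Undo both commands if either one fails.
                transaction.Rollback();
                throw;
            }
        }
    }
}
```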
For more information, see "Transactions" in Chapter 12, "Improving ADO.NET Performance."
- How to optimize queries
Start by isolating long-running queries by using SQL Profiler. Next, identify the root cause of the long-running query by using SQL Query Analyzer. By using SQL Query Analyzer, you may identify missing or inefficient indexes. Use the Index Tuning Wizard for help selecting the correct indexes to build. For large databases, defragment your indexes at regular intervals.
For more information, see "How To: Optimize SQL Queries" and "How To: Optimize Indexes" in the "How To" section of this guide.
Improving ASP.NET Performance
The solutions in this section show how to improve ASP.NET performance. The majority of the solutions are detailed in Chapter 6, "Improving ASP.NET Performance."
- How to build efficient Web pages
Start by trimming your page size and by minimizing the number and the size of graphics, particularly in low network bandwidth scenarios. Partition your pages to benefit from improved caching efficiency. Disable view state for pages that do not need it. For example, you should disable view state for pages that do not post back to the server or for pages that do not handle server control events. Ensure pages are batch-compiled. Enable buffering so that ASP.NET batches work on the server and avoids chatty communication with the client. You should also know the cost of using server controls.
For more information, see "Pages" in Chapter 6, "Improving ASP.NET Performance."
- How to tune the ASP.NET thread pool
If your application queues requests while the CPU remains idle, you should tune the thread pool.
- For applications that serve requests quickly, consider the following settings in the Machine.config file:
Set maxconnection to 12 times the number of CPUs.
Set maxIoThreads and maxWorkerThreads to 100.
Set minFreeThreads to 88 times the number of CPUs.
Set minLocalRequestFreeThreads to 76 times the number of CPUs.
- For applications that experience burst loads (unusually high loads) between lengthy periods of idle time, consider testing your application by increasing the minWorkerThreads and minIOThreads settings.
- For applications that make long-running calls, consider the following settings in the Machine.config file:
Set maxconnection to 12 times the number of CPUs.
Set maxIoThreads and maxWorkerThreads to 100.
Now test the application without changing the default setting for minFreeThreads. If you see high CPU utilization and context switching, test by reducing maxWorkerThreads or increasing minFreeThreads.
- For ASP.NET applications that use the ASPCOMPAT flag, you should ensure that the total thread count for the worker process does not exceed the following value:
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2)
For more information and implementation details, see "Formula for Reducing Contention" in Chapter 6, "Improving ASP.NET Performance." Also see "Tuning Options" in the "ASP.NET Tuning" section in Chapter 17, "Tuning .NET Application Performance."
- How to handle long-running calls
Long-running calls from ASP.NET applications block the calling thread. Under load, this may quickly cause thread starvation, where your application uses all available threads and stops responding, and it may also quickly cause queuing and rejected requests. A common example is an ASP.NET application that calls a long-running Web service and blocks the calling thread while it waits. In this scenario, you can call the Web service asynchronously and then display a busy page or a progress page on the client. By retaining the Web service proxy in server-side state and polling from the browser by using the <meta> refresh tag, you can detect when the Web service call completes and then return the data to the client.
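The following C# sketch outlines the submit-and-poll pattern. For brevity it uses an asynchronous delegate in place of the Begin/End methods of a wsdl.exe-generated Web service proxy, condenses both pages into a single class, and uses illustrative page, method, and session key names; see the How To article referenced below for the recommended implementation.

```csharp
using System;
using System.Web.UI;

// Sketch of the "submit and poll" pattern. A delegate's BeginInvoke stands in
// for the asynchronous Begin/End methods of a generated Web service proxy.
public class ReportPage : Page
{
    private delegate string GenerateReportDelegate(int reportId);

    // Placeholder for the long-running back-end or Web service call.
    private static string GenerateReport(int reportId)
    {
        System.Threading.Thread.Sleep(30000);   // simulate a slow call
        return "Report " + reportId;
    }

    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        GenerateReportDelegate worker = new GenerateReportDelegate(GenerateReport);

        // Start the call and keep what is needed to finish it in session state.
        Session["reportWorker"] = worker;
        Session["reportHandle"] = worker.BeginInvoke(42, null, null);

        // Redirect to a busy page that polls by using a <meta> refresh tag.
        Response.Redirect("ReportBusy.aspx");
    }

    protected void BusyPage_Load(object sender, EventArgs e)
    {
        IAsyncResult handle = (IAsyncResult)Session["reportHandle"];
        if (handle != null && handle.IsCompleted)
        {
            GenerateReportDelegate worker = (GenerateReportDelegate)Session["reportWorker"];
            Response.Write(worker.EndInvoke(handle));
        }
        // Otherwise the page's <meta http-equiv="refresh"> tag causes another poll.
    }
}
```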
For implementation details, see "How To: Submit and Poll for Long-Running Tasks" in the "How To" section of this guide. Also see "Formula for Reducing Contention" in Chapter 6, "Improving ASP.NET Performance."
If design changes are not an option, consider tuning the thread pool as described earlier.
- How to cache data
ASP.NET can cache data by using the Cache API, by using output caching, or by using partial page fragment caching. Regardless of the implementation approach, you need to consider an appropriate caching policy that identifies the data you want to cache, the place you want to cache the data in, and how frequently you want to update the cache. For more information, see "Caching Guidelines" in Chapter 6, "Improving ASP.NET Performance."
To use effective fragment caching, separate the static and the dynamic areas of your page, and use user controls.
You must tune the memory limit for optimum cache performance. For more information, see "Configure the Memory Limit" in the "ASP.NET Tuning" section of Chapter 17, "Tuning .NET Application Performance."
- How to call STA components from ASP.NET
STA components must be called by the thread that creates them. This thread affinity can create a significant bottleneck. Rewrite the STA component by using managed code if you can. Otherwise, make sure you use the ASPCOMPAT attribute on the pages that call the component to avoid thread switching overhead. Do not put STA components in session state to avoid limiting access to a single thread. Avoid STA components entirely if you can.
For more information, see "COM Interop" in Chapter 7, "Improving ASP.NET Performance."
- How to handle session state
If you do not need session state, disable it. If you do need session state, you have three options:
- The in-process state store
- The out-of-process state service
- SQL Server
The in-process state store offers the best performance, but it introduces process affinity, which prevents you from scaling out your solution in a Web farm. For Web farm scenarios, you need one of the out-of-process stores. However, the out-of-process stores incur the overhead of serialization and network latency. Be aware that any object that you want to store in out-of-process session state must be serializable.
Other optimizations include using primitive types where you can to minimize serialization overhead and using the ReadOnly attribute on pages that only read session state.
For more information, see "Session State" in Chapter 6, "Improving ASP.NET Performance."
Improving Web Services Performance
The solutions in this section show how to improve Web service performance. The majority of the solutions are detailed in Chapter 10, "Improving Web Services Performance."
- How to improve Web service performance
Start by tuning the thread pool. If you have sufficient CPU and if you have queued work, apply the tuning formula specified in Chapter 10. Make sure that you pool Web service connections. Make sure that you send only the data you need to send, and ensure that you design for chunky interfaces. Also consider using asynchronous server-side processing if your Web service performs extensive I/O operations. Consider caching for reference data and for any internal data that your Web service relies upon.
For more information, see Chapter 10, "Improving Web Services Performance."
- How to handle large data transfer
To perform large data transfers, start by checking that the maxRequestLength parameter in the <httpRuntime> element of your configuration file is large enough. This parameter limits the maximum SOAP message size for a Web service. Next, check your timeout settings. Set an appropriate timeout on the Web service proxy, and make sure that your ASP.NET timeout is larger than your Web service timeout.
You can handle large data transfer in a number of ways:
- Use a byte array parameter. Using a byte array parameter is a simple approach, but if a failure occurs midway through the transfer, the failure forces you to start again from the beginning. When you are uploading data, this approach can also make your Web service subject to denial-of-service attacks.
- Return a URL. Return a URL to a file, and then use HTTP to download the file.
- Use streaming. If you need to transfer large amounts of data (such as several megabytes) from a Web method, consider streaming to avoid having to buffer large amounts of data in memory at the server and client. You can stream data from a Web service either by implementing IList or by implementing IXmlSerializable.
For more information, see "Bulk Data Transfer" in Chapter 10, "Improving Web Services Performance."
- How to handle attachments
You have various options when you are handling attachments by using Web services. When you are choosing an option, consider the following:
- WS-Attachments. Web Services Enhancements (WSE) version 1.0 and 2.0 support Web services attachments (WS-Attachments). WS-Attachments use Direct Internet Message Encapsulation (DIME) as an encoding format. While DIME is a supported part of WSE, Microsoft is not investing in this approach long term. DIME is limited because the attachments are outside the SOAP envelope.
- Base64 encoding. For today, you should use Base64 encoding in place of WS-Attachments when you have advanced Web services requirements such as security. Base64 encoding creates a larger message payload that may be up to two times the original size. For messages that have large attachments, you can implement a WSE filter to compress the message by using tools like GZIP before you send the message over the network. If you cannot afford the message size that Base64 introduces and if you can rely on the transport for security (for example, Secure Sockets Layer (SSL) or Internet Protocol Security (IPSec)), consider the WS-Attachments implementation in WSE. Securing the message is preferred to securing the transport so that messages can be routed securely. Transport security only addresses point-to-point communication.
- SOAP Message Transmission Optimization Mechanism (MTOM). MTOM, which is a derivative work of SOAP Messages with Attachments (SwA), is the likely future interop technology. MTOM is being standardized by the World Wide Web Consortium (W3C) and is easier to compose than SwA.
SwA, also known as WS-I Attachments Profile 1.0, is not supported by Microsoft.
For more information, see "Attachments" in Chapter 10, "Improving Web Services Performance."
Improving .NET Remoting Performance
The solutions in this section show how to improve .NET remoting performance. The majority of the solutions are detailed in Chapter 11, "Improving Remoting Performance."
- How to improve .NET remoting performance
Remoting is for local, in-process, cross-application domain communication or for integration with legacy systems. If you use remoting, reduce round trips by using chunky interfaces. Improve serialization performance by serializing only the data you need. Use the NonSerialized attribute to prevent unnecessary fields from being serialized.
- How to serialize DataSet instances efficiently over remoting
Try to improve serialization efficiency in the following ways:
- Use column name aliasing to reduce the size of column names.
- Avoid serializing the original and new values for DataSet fields if you do not need to.
- Serialize only those DataTable instances in the DataSet that you require. DataSet instances serialize as XML.
To implement binary serialization, see Knowledge Base article 829740, "Improving DataSet Serialization and Remoting Performance," at http://support.microsoft.com/default.aspx?scid=kb;en-us;829740.
Improving Enterprise Services Performance
The solutions in this section show how to improve the performance of your Enterprise Services applications and serviced components. The majority of the solutions are detailed in Chapter 8, "Improving Enterprise Services Performance."
- How to improve Enterprise Services performance
Use Enterprise Services only if you need the services that it provides. If you do need a service, prefer library applications for in-process performance. Use Enterprise Services transactions if you need distributed transactions, but be aware that manual transactions that use ADO.NET or T-SQL offer superior performance for transactions against a single resource manager. Remember to balance performance with ease of development. Declarative Enterprise Services transactions offer the easiest programming model. Also consider your transaction isolation level.
Use object pooling for objects that take a long time to initialize. Make sure that you release objects back to the pool promptly. A good way to do this is to annotate your method with the AutoComplete attribute. Also, clients should call Dispose promptly on the serviced component. Avoid using packet privacy authentication if you call your serviced components over an IPSec encrypted link. Avoid impersonation, and use a single service identity to access your downstream database to benefit from connection pooling.
For more information, see Chapter 8, "Improving Enterprise Services Performance."
- When to call ReleaseComObject
Consider calling ReleaseComObject if you call COM components. You might want to call ReleaseComObject if you create and destroy COM objects under load from managed code. ReleaseComObject helps release the COM object as soon as possible. Under load, garbage collection and finalization might not occur soon enough, and performance might suffer.
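The following C# sketch releases a COM object in a finally block as soon as the work that uses it completes. The ProgID is just an example.

```csharp
using System;
using System.Runtime.InteropServices;

public class ComCleanupExample
{
    public static void UseComObject()
    {
        // "Scripting.FileSystemObject" is only an example ProgID.
        Type comType = Type.GetTypeFromProgID("Scripting.FileSystemObject");
        object comObject = Activator.CreateInstance(comType);
        try
        {
            // ... call the COM object (for example, through an interop assembly) ...
        }
        finally
        {
            // Release the underlying COM object promptly instead of waiting for
            // the garbage collector to finalize the runtime callable wrapper.
            Marshal.ReleaseComObject(comObject);
        }
    }
}
```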
For more information about ReleaseComObject and how it works, see "Marshal.ReleaseComObject" in Chapter 7, "Improving Interop Performance." Also see "Resource Management" in Chapter 8, "Improving Enterprise Services Performance."
Improving Interop Performance
- How to improve interop performance
Carefully consider the amount and the type of data you pass to and from unmanaged code to reduce marshaling costs. Prefer blittable types where possible. Blittable types do not require conversion and avoid ANSI to UNICODE conversions for string data. Avoid unnecessary marshaling by using explicit in and out attributes.
To help minimize managed heap fragmentation, avoid pinning objects for longer than the duration of a P/Invoke call. In heavily loaded server applications, consider calling ReleaseComObject to ensure that COM objects are released promptly.
For more information, see Chapter 7, "Improving Interop Performance."
Testing Solutions
If you are an administrator, this guide provides the following solutions:
- How to measure performance
Start to measure performance as soon as you have a defined set of performance objectives for your application. Measure performance early in the application design phase. Use tools such as System Monitor, network monitoring tools such as Netmon, profiling tools such as CLR Profiler, SQL Profiler, and SQL Query Analyzer, and application instrumentation to collect metrics for measuring performance.
For more information, see Chapter 15, "Measuring .NET Application Performance."
- How to test performance
Use a combination of load testing, stress testing, and capacity testing to verify that your application performs under expected conditions and peak load conditions and to verify that it scales sufficiently to handle increased capacity. Before starting, identify a stress test tool, such as Microsoft Application Center Test (ACT), to run performance tests and to identify your performance-critical scenarios. Next, identify the performance characteristics or the workload that is associated with each scenario. The performance scenario should include the number of users, and the rate and pattern of requests. You also have to identify the relevant metrics to capture. Next, use a set of test cases that are based on your workload to begin to test the application by using a stress test tool. Finally, analyze the results.
For more information about how to determine the appropriate metrics to capture during testing, see Chapter 15, "Measuring .NET Application Performance." For more information about testing and processes for load testing and stress testing, see Chapter 16, "Testing .NET Application Performance."
- How to tune performance
You tune to eliminate bottlenecks and improve performance. You can tune application, platform, system, and network configuration settings. Use an iterative and repeatable process. Start by establishing a baseline, and ensure you have a well-defined set of performance objectives, test plans, and baseline metrics. Next, simulate load to capture metrics, and then analyze the results to identify performance issues and bottlenecks. After you identify the performance issues and bottlenecks in your application, tune your application setup by applying new system, platform, or application configuration settings. Finally, test and measure to verify the impact of your changes and to see whether your changes have moved your application closer to its performance objectives. Continue the process until your application meets its performance objectives or until you decide on an alternate course of action, such as code optimization or design changes.
For more information, see Chapter 17, "Tuning .NET Application Performance."