
Monday, January 28, 2008

SQL Server Integration Services

SQL Server Integration Services (SSIS) is a component of Microsoft SQL Server 2005. It replaces Data Transformation Services, which has been a feature of SQL Server since version 7.0. Unlike DTS, which was included in all editions, SSIS is available only in the "Standard" and "Enterprise" editions.

Integration Services provides a platform to build data integration and workflow applications. The primary use for SSIS is data warehousing, as the product features a fast and flexible tool for data extraction, transformation, and loading (ETL). The tool may also be used to automate maintenance of SQL Server databases, update multidimensional cube data, and perform other functions.

Features

SSIS features a wizard that lets the user readily create a package which moves data from a single data source to a destination with no transformations. This tool is called the Import/Export Wizard. The Wizard is appropriate for quickly moving data into or out of SQL Server, to or from a variety of sources, including text files and other SQL Server instances.

Developers tasked with creating or maintaining SSIS packages use a visual development tool based on Microsoft Visual Studio called the SQL Server Business Intelligence Development Studio (BIDS). It allows users to edit SSIS packages using a drag-and-drop user interface. A scripting environment in which to write programming code is also available in the tool. The package holds a variety of elements that define the workflow for the package. Upon package execution, the tool provides color-coded, real-time monitoring.

Connections

A connection includes the information necessary to connect to a particular data source. Tasks can reference the connection by its name, allowing the details of the connection to be changed or configured at runtime.

Tasks

A task is an atomic work unit that performs some action. There are a couple of dozen tasks that ship in the box, ranging from the file system task (which can copy or move files) to the data transformation task. The data transformation task actually copies data; it implements the ETL features of the product.

Precedence Constraints
Tasks are linked by precedence constraints. The precedence constraint preceding a particular task must be met before that task executes. The runtime supports executing tasks in parallel if their precedence constraints so allow. Constraints may otherwise allow different paths of execution depending on the success or failure of other tasks. Together with the tasks, precedence constraints comprise the workflow of the package.
Event Handlers
A workflow can be designed for a number of events in the different scopes where they might occur. In this way, tasks may be executed in response to happenings within the package, such as cleaning up after errors.
Variables
Tasks may reference variables to store results, make decisions, or affect their configuration. A package may be saved to a file or to a store with a hierarchical namespace within a SQL Server instance. In either case, the package contents are persisted in XML.

Once the package is completed, the designer also allows the user to start its execution. Once started, the package may be readily debugged or monitored.

Data Transformation Services

Data Transformation Services, or DTS, is a set of objects and utilities to allow the automation of extract, transform and load operations to or from a database. The objects are DTS packages and their components, and the utilities are called DTS tools. DTS is included with Microsoft SQL Server, and is almost always used with SQL Server databases, although it can be used independently with other databases.

DTS allows data to be transformed and loaded from heterogeneous sources using OLE DB, ODBC, or text-only files, into any supported database. DTS can also allow automation of data import or transformation on a scheduled basis, and can perform additional functions such as FTPing files and executing external programs. In addition, DTS provides an alternative method of version control and backup for packages when used in conjunction with a version control system, such as Microsoft Visual SourceSafe.

Sunday, January 20, 2008

SQL queries case sensitive

If you installed SQL Server with the default collation options, you might find that the following queries return the same results:

CREATE TABLE mytable
(
mycolumn VARCHAR(10)
)
GO

SET NOCOUNT ON

INSERT mytable VALUES('Case')
GO

SELECT mycolumn FROM mytable WHERE mycolumn='Case'
SELECT mycolumn FROM mytable WHERE mycolumn='caSE'
SELECT mycolumn FROM mytable WHERE mycolumn='case'

You can alter your query by forcing collation at the column level:

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'caSE'

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'case'

SELECT myColumn FROM myTable
WHERE myColumn COLLATE Latin1_General_CS_AS = 'Case'

-- if myColumn has an index, you will likely benefit by adding
-- AND myColumn = 'case'

If you want to do this in a more global way, instead of modifying each individual query, you can force the collation at the database level, or at the column level, using the ALTER DATABASE and ALTER TABLE commands, respectively.
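For instance, a minimal sketch (the database name MyDatabase is hypothetical; the table and column come from the example above):

ALTER DATABASE MyDatabase COLLATE Latin1_General_CS_AS

ALTER TABLE mytable
ALTER COLUMN mycolumn VARCHAR(10) COLLATE Latin1_General_CS_AS

Note that changing a column's collation requires restating its data type, and the database-level change only affects the default collation for new objects; existing columns keep their original collation.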

Minus operator in SQL Server

Minus operator in Oracle: Consider two tables, T1 and T2. T1 MINUS T2 means: return all rows in T1 that are NOT in T2.

Consider the tables as,
T1
Col1.........Col2
1...............A
2...............B
3...............C

T2
Col1..........Col2
1................A
2................B
3................X

As per Oracle,
Query: Select * from T1 minus Select * from T2
Result: 3.........C
Query:Select * from T2 minus Select * from T1
Result: 3.........X

Minus operator in SQL Server: Unfortunately, SQL Server does not support the MINUS operator, but here is a workaround to simulate it (a direct EXCEPT alternative is sketched after the explanation below).
Query:
Select * from T1 where
IsNull(cast(Col1 as varchar(10)), '') +
IsNull(cast(Col2 as varchar(10)), '')
Not in
(Select IsNull(cast(Col1 as varchar(10)), '') +
IsNull(cast(Col2 as varchar(10)), '') from T2)

Result: 3.......C

Explanation: The "In" operator can compare only a single column, so the basic idea is to concatenate all the columns into a single value; the other table's columns are concatenated the same way, and the "In" operator then filters the rows out. The "cast" converts the column values to varchar, and the "IsNull" replaces NULL values with an empty string so that the concatenation itself does not become NULL. This is one way of doing it.
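On SQL Server 2005 and later, the new EXCEPT operator gives the same result directly:

Select * from T1
Except
Select * from T2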

Saturday, January 19, 2008

Import CSV file into SQL Server

I am getting quite a few queries from my friends, and also from various forums, on importing data into SQL Server from a CSV file.

Use the query below. (On SQL Server 2005, the 'Ad Hoc Distributed Queries' option must be enabled via sp_configure before OPENROWSET can be used this way.)

SELECT *
INTO theEMPImportTable
FROM
OPENROWSET('MSDASQL',
'Driver={Microsoft Text Driver (*.txt; *.csv)};
DEFAULTDIR=D:\;Extensions=CSV;',
'SELECT * FROM CSVEmployeeFile.csv')

Friday, January 18, 2008

Identity columns

If you are inserting data from some other source into a table with an identity column and you need to ensure you retain the identity values, you can temporarily allow inserts to the identity column. Without doing so explicitly, you will receive an error if you attempt to insert a value into the identity column.

For example, if I have a table named EMP and I want to allow inserts into its identity column, I can execute the following:


set identity_insert EMP on

Once you execute the command you will be able to insert values into the table's identity column. This will stay in effect until you turn it off by executing the following:


set identity_insert EMP off

Be aware that at any time, only a single table in a session can have identity_insert set to on. If you attempt to enable this for a table while another table already has it enabled, you will receive an error and will not be able to do so until you first turn it off for the other table. Also, if the value inserted into the identity column is larger than the current identity value, the new value will be used as the identity seed for the column.
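For example, a minimal sketch (this assumes EMP has an identity column EmpID plus a Name column):

set identity_insert EMP on

-- an explicit column list is required when supplying a value for the identity column
insert into EMP (EmpID, Name) values (1001, 'Smith')

set identity_insert EMP off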


Reseeding the identity value:

You can reseed the identity value, that is, have the identity values reset or start at a new predefined value, by using DBCC CHECKIDENT. For example, if I have a table named EMP and I want to reseed its identity column to 30, I would execute the following:


dbcc checkident (EMP, reseed, 30)

If you wanted the table to start with an identity of 1 on the next insert, you would reseed the table's identity to 0. The identity seed is the value the identity is currently at, meaning that the next insert will increment the seed and use that. However, one thing to keep in mind is that if you set the identity seed below values that already exist in the table, you will violate the identity column's PRIMARY KEY or UNIQUE constraint (if it has one) as soon as the values start to overlap. The identity value will not just "skip" values that already exist in the table.

EXEC in SQL Server

EXEC() is an inferior solution to sp_executesql and the CLR. The reason for this is that since EXEC() does not take parameters, you have to build a query string with the parameter values interpolated.

The main case for EXEC() is when you are on SQL 7 or SQL 2000 and the query string could exceed 4000 characters. Since you can say
EXEC(@sql1 + @sql2 + @sql3 ...)
there is no practical limit to the length of the query string with EXEC(). On SQL 2005 you can use nvarchar(MAX) with sp_executesql, so this issue does not exist there.
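By contrast, sp_executesql accepts real parameters instead of interpolated values. A minimal sketch (the table and column names are hypothetical):

DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT * FROM dbo.EMP WHERE EmpID = @id'
EXEC sp_executesql @sql, N'@id int', @id = 1001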

Casting a Value

In most cases, a value that you submit to your database is primarily considered a string. This is convenient if that's what you are expecting. If the value that you provide must be treated as something other than a string, for example, if you provide a number, you should first convert it to the appropriate type, that is, from a string to the expected type, before using it.

To assist with conversion, you can use either the CAST() or the CONVERT() function. The syntax of the CAST() function is:

CAST(Expression AS DataType)

The Expression is the value that needs to be cast. The DataType is the type you want to convert the Expression to. The equivalent CONVERT() syntax is CONVERT(DataType, Expression [, Style]).

In the following example, two variables are declared and initialized as strings. Because they must be involved in a multiplication, each is converted to a Decimal type:

DECLARE @StrSalary Varchar(10),
@StrHours Varchar(6),
@WeeklySalary Decimal(6,2)
SET @StrSalary = '22.18';
SET @StrHours = '38.50';

SET @WeeklySalary = CAST(@StrSalary As Decimal(6,2)) *
CAST(@StrHours As Decimal(6,2));
SELECT @WeeklySalary;
GO

Microsoft Unveils SQL Server “Katmai"

SQL Server code-named "Katmai," the next release of Microsoft SQL Server, is designed as the Redmond company's upcoming version of its data management and analysis platform. The first Microsoft Business Intelligence (BI) Conference was the stage where Katmai was unveiled as the successor of SQL Server 2005, and Microsoft revealed to the audience its vision for the next release of SQL Server.

At this time, Microsoft estimates that Katmai will be made available for customers in 2008, but the company failed to mention additional details. "We developed SQL Server with the goal of providing a data management and analysis platform for all companies regardless of size or budget," said Ted Kummert, corporate vice president of the Data and Storage Platform Division at Microsoft. "With the release of ‘Katmai,’ we’ll take the next step on our data platform vision by delivering a comprehensive and integrated business intelligence solution. Expanding the usability of data across businesses will give customers more value for their IT investments."

Microsoft will continue to showcase Katmai through the voice of Ted Kummert. According to the Redmond company, Kummert will offer insight into SQL Server "Katmai" on May 10, 2007, in a keynote address at the conference.

Microsoft's Katmai aims to be adopted in the corporate environment and implemented into data explosion management scenarios. Microsoft looks to outperform the success of SQL Server 2005 with Katmai.

"Kummert will show how SQL Server "Katmai" is part of Microsoft’s vision for pervasive BI, illustrating the company’s unified commitment to offer a complete and integrated BI solution that customers can rely on to meet all critical BI needs. Scheduled to be available in 2008, SQL Server "Katmai" will deliver on Microsoft’s overall data platform vision to meet the needs of the coming data explosion and the next generation of data-driven applications," Microsoft revealed via a press release.

Thursday, January 17, 2008

SQL Server 2005 Tools and Utilities

SQL Server 2005 Best Practices Analyzer

Use this tool to scan your SQL Server systems and verify that common best practices have been implemented.

SQL Server 2005 Performance Dashboard Reports

The Microsoft SQL Server 2005 Performance Dashboard Reports are used to monitor and resolve performance problems on your SQL Server 2005 database server. The SQL Server instance being monitored and the Management Studio client used to run the reports must both be running SP2 or later.

SQL Server 2005 JDBC Driver

Download the SQL Server 2005 JDBC Driver, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in J2EE (Java2 Enterprise Edition).

SQL Server 2005 Upgrade Advisor

Analyze your SQL Server 7.0 and SQL Server 2000 database servers in preparation for upgrading to SQL Server 2005, and get reports that identify deprecated features and necessary configuration changes that will impact your database upgrade process.

SQL Server Migration Assistant for Oracle

SQL Server Migration Assistant (SSMA) for Oracle is a free tool for migrating Oracle databases to Microsoft SQL Server 2005. SSMA for Oracle converts Oracle database objects (including stored procedures) to SQL Server database objects, loads those objects into SQL Server, migrates data from Oracle to SQL Server, and then validates the migration of code and data.

SQL Server Migration Assistant for Access

Microsoft SQL Server Migration Assistant (SSMA) for Access is a tool for migrating databases from Microsoft Access versions 97 through 2003 to Microsoft SQL Server 2005. SSMA for Access converts Access database objects to SQL Server database objects, loads those objects into SQL Server, and then migrates data from Access to SQL Server.

SQL Server Migration Assistant for Sybase

Microsoft SQL Server Migration Assistant (SSMA) for Sybase Adaptive Server Enterprise (ASE) is a tool for migrating Sybase ASE databases to SQL Server 2005. SSMA for Sybase ASE converts Sybase ASE database objects to SQL Server database objects, creates those objects in SQL Server, and then migrates data from Sybase ASE to SQL Server.

Microsoft SQL Web Data Administrator

The SQL Server Web Data Administrator enables you to easily manage your SQL Server data, wherever you are.

SQL Server 2005 Report Packs


These free report packs will simplify your report tasks, improve your business visibility and speed your decision making.

SQL Server Health and History Tool (SQLH2)

The Microsoft SQL Server Health and History Tool (SQLH2) allows you to collect information from instances of SQL Server, store this information, and run reports against the data in order to determine how SQL Server is being used.

SQL Server 2005 System Views Map

Download the Microsoft SQL Server 2005 System Views Map, which shows the key system views and their relationships in SQL Server 2005.

SQL Server Express Utility

This utility enables you to connect to the main instance or a user instance of SQL Server and manage it.

Microsoft SQL Server 2005 Express Edition Toolkit

Microsoft SQL Server 2005 Express Edition Toolkit provides additional tools and resources for SQL Server 2005 Express Edition and SQL Server 2005 Express Edition with Advanced Services.

SSIS Log Provider Reports Using SQL Server Reporting Services

The download file extracts a Visual Studio 2005 Solution that helps you understand the SQL Server 2005 report pack.

SQL Server Management Pack for MOM 2005

The Microsoft SQL Server Management Pack monitors SQL Server 2000 and 2005 for critical conditions indicating potential operational problems.

Microsoft SQL Server Management Studio Express
Microsoft SQL Server Management Studio Express (SSMSE) is a free, easy-to-use graphical management tool for managing SQL Server 2005 Express Edition and SQL Server 2005 Express Edition with Advanced Services.

SQL Server 2005 Mobile Edition Server Tools

SQL Server 2005 Mobile Edition Server Tools updates servers running IIS to support connectivity from a SQL Server Mobile database on a mobile device to database servers running SQL Server 2000 SP3a and above.

SQL Server Mobile Device SDK

Microsoft SQL Server 2005 Mobile Edition (SQL Server Mobile) is the compact database for rapidly developing applications in both native mode and the .NET Compact Framework that extend enterprise data management capabilities to mobile devices.

Find all the foreign keys in a database

I am getting so many mails and messages from newbies on how to find the relationships between tables in order to understand them better. After spending a good amount of time, I finally came up with the query below to get this info.



SELECT
FK_Table = FK.TABLE_NAME,
FK_Column = CU.COLUMN_NAME,
PK_Table = PK.TABLE_NAME,
PK_Column = PT.COLUMN_NAME,
Constraint_Name = C.CONSTRAINT_NAME
FROM
INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS C
INNER JOIN
INFORMATION_SCHEMA.TABLE_CONSTRAINTS FK
ON C.CONSTRAINT_NAME = FK.CONSTRAINT_NAME
INNER JOIN
INFORMATION_SCHEMA.TABLE_CONSTRAINTS PK
ON C.UNIQUE_CONSTRAINT_NAME = PK.CONSTRAINT_NAME
INNER JOIN
INFORMATION_SCHEMA.KEY_COLUMN_USAGE CU
ON C.CONSTRAINT_NAME = CU.CONSTRAINT_NAME
INNER JOIN
(
SELECT
i1.TABLE_NAME, i2.COLUMN_NAME
FROM
INFORMATION_SCHEMA.TABLE_CONSTRAINTS i1
INNER JOIN
INFORMATION_SCHEMA.KEY_COLUMN_USAGE i2
ON i1.CONSTRAINT_NAME = i2.CONSTRAINT_NAME
WHERE i1.CONSTRAINT_TYPE = 'PRIMARY KEY'
) PT
ON PT.TABLE_NAME = PK.TABLE_NAME
-- optional:
ORDER BY
1,2,3,4

Avoid NULLs in database

Joe Celko said it best: "NULLs confuse people..."

My sentiments exactly. Of course, I don't expect to convince you by flashing a few quotes from reputable authors. Let's talk for a minute about what exactly NULLs do that cause this type of reaction. The first problem is that the definition of NULL is "unknown." So, one problem is determining whether one value is (not) equal to another value, when one or both values are NULL. This trickles down to many problems for a database engine and any associated applications. The following list details some of those problems:
- they are interpreted differently depending on compatibility level and ANSI settings;

For example, let's consider two values, x and y, that are both NULL. Since the definition of NULL is unknown, you can't say x = y. However, the ANSI_NULLS setting changes this: when the setting is OFF, x = y evaluates to TRUE; when it is ON, comparing x to y yields neither TRUE nor FALSE but UNKNOWN. Confusing, no?

- the storage engine has to do extra processing for each row to determine whether the NULLable column is in fact NULL -- this extra bit can have performance implications;
- they produce weird results when using calculations, comparisons, sorting and grouping;
- they create problems with aggregates and joins, such as different answers for COUNT(*) vs. COUNT(fieldname) (see the sketch after this list);
- they produce unpredictable results in statistics computations, particularly WITH ROLLUP and WITH CUBE;
- applications must add extra logic to handle inserting and retrieving results, which may or may not include NULL values;
- they cause unpredictable results with NOT EXISTS and NOT IN subqueries (working backwards, SQL determines that NULL columns belong or do not belong to the result set, usually for the wrong reasons);
- no language that supports embedded SQL has native support for NULL SQL values.
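A minimal sketch of the COUNT(*) vs. COUNT(fieldname) surprise, using a throwaway table:

CREATE TABLE t (val INT NULL)
INSERT t VALUES (1)
INSERT t VALUES (NULL)

SELECT COUNT(*) FROM t   -- returns 2: counts all rows
SELECT COUNT(val) FROM t -- returns 1: the NULL is silently eliminated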

Wednesday, January 16, 2008

AndAlso & OrElse Operators in C#

AndAlso(&&)

The C# logical operator && is the equivalent of VB.NET's AndAlso. It is used to connect two logical condition checks, as in if (condition1 && condition2); both conditions must be true for the test to pass. Look at the example below:

int a = 100;
int b = 0;
if (b > 0 && a/b < 100)
{
Console.Write("ok");
}

The point here is that the condition "b > 0" is checked first. Because "b > 0" is false, the second condition "a/b < 100" does not need to be checked, and the "&&" operator will not evaluate it. So we actually save execution time in a logical operation in which multiple conditions are combined using the "&&" operator.

And(&)

The difference of the "&" operator compared to the above is shown below:

int a = 100;
int b = 0;
if (b > 0 & a/b <100)
{
Console.Write("ok");
}

The point here is that the condition "b > 0" is checked first. Because "b > 0" is false, the second condition "a/b < 100" does not need to be checked, but the "&" operator performs the second check anyway. So we actually lose execution time on a useless check. Worse, evaluating "a/b < 100" when b is 0 generates a divide-by-zero error.

Or(|) and OrElse(||)

The differences between Or ("|") and OrElse ("||") are similar to those of the "And" operators: the first one checks all conditions, while the second one skips the remaining logical checks if it finds any of the previous checks to be true, saving time. You can analyze this with the code below.

//Or

if (b > 0 | a/b <100)
{
Console.Write("ok");
}

//OrElse
if (b >= 0 || a/b <100)
{
Console.Write("ok");
}

Microsoft Application Blocks

Application Blocks are C# and VB.NET classes distributed as Visual Studio projects that can be downloaded from Microsoft's Web site and used in any .NET application, including ASP.NET Web applications. They are useful and powerful tools that can make applications more maintainable, scalable and efficient. In 2002, Microsoft released two Application Blocks that encapsulate two of the most common programming tasks: data access and exception management. They are:
1.)Data Access Application Block
2.)Exception Management Application Block
These two Application Blocks contain classes, static methods and interfaces that greatly simplify SQL Server data access and application exception management.
The Data Access Block provides static methods located in the SqlHelper class that encapsulate the most common data access tasks performed with Microsoft SQL Server. Static method means that the class methods can be called without instantiating an instance of the class. For example, the method ExecuteReader() within the SqlHelper class can be called by simply using the statement SqlHelper.ExecuteReader() -- no object instantiation of the SqlHelper class is required. These static methods can be used to return SqlDataReader, DataSet and XmlReader objects containing database information. You can call stored procedures and SQL commands and specify stored procedure parameter details. The SqlHelper class also supports transactions.
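For illustration, here is a minimal sketch of calling SqlHelper.ExecuteReader (the connection string is hypothetical, and CustOrderHist is a stored procedure from the Northwind sample database):

using System;
using System.Data;
using System.Data.SqlClient;
using Microsoft.ApplicationBlocks.Data;

class DaabDemo
{
    static void Main()
    {
        // One static call replaces the usual connection/command plumbing
        using (SqlDataReader reader = SqlHelper.ExecuteReader(
            @"Server=(local);Database=Northwind;Integrated Security=SSPI;",
            CommandType.StoredProcedure,
            "CustOrderHist",
            new SqlParameter("@CustomerID", "ALFKI")))
        {
            while (reader.Read())
                Console.WriteLine("{0}: {1}", reader[0], reader[1]);
        }
    }
}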
To get started with the Data Access Application Block you must first download the installer at http://download.microsoft.com/download/VisualStudioNET/daabref/RTM/NT5/EN-US/DataAccessApplicationBlock.msi.

The second Application Block encapsulates exception management. The Exception Management block provides a framework enabling you to log exception information, isolate exception code from your business logic and manage exceptions efficiently and consistently. The Exception block also provides excellent base classes and interfaces that can be used to create your own exception management framework or extend the Application Block to meet the needs of your particular system. The Exception Management Block can be downloaded at http://download.microsoft.com/download/VisualStudioNET/emabref/RTM/NT5/EN-US/ExceptionManagementApplicationBlock.msi.

The Benefits of Using Application Blocks
Arguably, the biggest benefits to using Application Blocks are faster and more modular development. Because much of the "nuts and bolts" of data access and exception handling is wrapped in the Application Blocks, you don't have to worry about the tedious details required to return a DataReader or write an exception to the System Log. This allows us to concentrate on the business logic in our applications. Application blocks cut down on redundant code and provide a buffer between you and the .NET framework. Additionally, because Application Blocks are distributed as Visual Studio projects, you are free to browse the source code and make changes and/or additions to suit the needs of your own application. Finally, Application Blocks come with very good documentation and quickstart samples to get you up and running quickly.

Monday, January 14, 2008

Globalization and Localization in .NET

Localization means "process of translating resources for a specific culture", and
Globalization means "process of designing applications that can adapt to different cultures".

Proper globalization: your application should be able to accept, verify, and display all global kinds of data; it should also be able to operate on this data accordingly. We will discuss these culture-dependent operations on data in more detail.
Localizability and localization: localizability stands for clearly separating the culture-based components of the user interface, and other culture-dependent operations, from the executable code.

The namespaces involved in creating globalized, localized applications are
System.Globalization
System.Resources
System.Text
The most basic entity in this scenario is the culture.
The culture information (usually called CultureInfo) includes the language, country/region, calendar, date formats, currency, number system, and sorting order.
All this information is defined inside the CultureInfo class, which lives in the System.Globalization namespace.

The invariant culture is culture/language insensitive; we can refer to it by the empty string "" or by its culture identifier 0x007F. Its language is English by default, but it is not associated with any country, although using it may lead to a wrong sort order (this is very rare). Note that when you don't specify a culture explicitly, most formatting APIs use the thread's current culture, so use the invariant culture deliberately wherever culture-independent behavior is needed.

A neutral culture is associated only with a language and not a country, while a specific culture is associated with both a language and a country/region.
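A minimal sketch of culture-dependent formatting with CultureInfo:

using System;
using System.Globalization;

class CultureDemo
{
    static void Main()
    {
        decimal price = 1234.56m;

        // Specific cultures: a language plus a country/region
        Console.WriteLine(price.ToString("C", new CultureInfo("en-US"))); // $1,234.56
        Console.WriteLine(price.ToString("C", new CultureInfo("fr-FR"))); // 1 234,56 €

        // The invariant culture: culture-insensitive, identified by ""
        Console.WriteLine(price.ToString("C", CultureInfo.InvariantCulture)); // ¤1,234.56
    }
}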

.NET Performance Counters

The Performance Counter is a Windows mechanism that allows you to capture and publish performance information about the system and your applications. The .NET Framework provides support for performance counters with a set of easy-to-use types. In addition, the Common Language Runtime (CLR) exposes its own set of predefined counters. A performance counter is a single metric about some performance aspect of the system or your application. Examples include metrics such as the number of active threads in a process or the percentage of elapsed time used by threads of a process in executing instructions.

Performance counters are organized and grouped into performance counter categories. For example, the Processor category contains all counters related to the operation of the processor such as processor time, idle time, interrupt time, etc.
Windows provides myriad predefined performance counters that can be retrieved programmatically or displayed using the Performance Monitor. These counters are used to monitor the usage of operating system resources.
In the .NET world, the CLR exposes its own set of performance counters. These cover every aspect of CLR operation ranging from exception processing to security checking.
The Performance Monitor, PERFMON, is a system performance viewer that comes with Windows NT, Windows 2000, and Windows XP. PERFMON allows you to select counters from a list of available performance counter categories and display them either in graph, histogram, or report format.
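As a minimal sketch, here is how a predefined counter from the Processor category can be read with the System.Diagnostics types:

using System;
using System.Diagnostics;
using System.Threading;

class CounterDemo
{
    static void Main()
    {
        // Total processor time across all CPUs
        using (PerformanceCounter cpu =
            new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            cpu.NextValue();        // the first sample always reads 0; prime the counter
            Thread.Sleep(1000);     // wait one sampling interval
            Console.WriteLine("CPU: {0:0.0}%", cpu.NextValue());
        }
    }
}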

Macro to Add Strong Names to VS.NET Projects

When writing class libraries in Visual Studio .NET that are going to be placed into the GAC, you will need to add a strong name to the assembly. This is normally done by dropping out to a DOS prompt, running the .NET utility sn.exe, and then updating the assembly to reference the newly created .snk file. This looked like a great place for a macro to be able to do all of this within the IDE. Below is a macro that, when run, opens up a standard Windows dialog box allowing you to navigate to the assembly you would like to add the strong name to. Next, the sn.exe utility will be run on this assembly, and the key file will be added to the current project in the IDE.

There are a couple of things to keep in mind when using this macro. First, if you have a solution open with several projects in it, make sure that the project to which you want to add the .snk file has focus. Next, you are required to compile the project before using the macro, due to the fact that you must navigate to the assembly in order to run sn.exe on it. Finally, the sn.exe utility is currently set to run with the -k option; if you want other options then you will need to change this.

To use this macro, navigate to Tools / Macros / Macro IDE. Next, navigate to File / Add New Item and choose a Module. Finally, take all the code listed below, paste it inside the newly created module, and save the project. The macro should now show up in the Macro Explorer and you should be able to run it from right inside the IDE.

Imports EnvDTE
Imports System.Diagnostics
Imports Microsoft.VisualBasic
Imports Microsoft.VisualBasic.ControlChars
Imports System.Windows
Imports System.Windows.Forms
Imports System

Public Module Module1

    Public Class WinWrapper
        Implements System.Windows.Forms.IWin32Window

        Overridable ReadOnly Property Handle() As System.IntPtr _
                Implements System.Windows.Forms.IWin32Window.Handle
            Get
                Dim iptr As New System.IntPtr(DTE.MainWindow.HWnd)
                Return iptr
            End Get
        End Property
    End Class

    Sub AddStrongNameToProject()
        Dim init_dir, outfile_name, outdirectory As String
        Dim stemp, Macroprojname As String
        Dim prjSolution As EnvDTE.Project
        Dim prjVSProject As VSLangProj.VSProject
        Dim openfile As Forms.FileDialog
        Dim result As Forms.DialogResult
        Dim snPath As Microsoft.Win32.RegistryKey
        Dim winptr As WinWrapper
        Dim myProj As EnvDTE.Project
        Try
            winptr = New WinWrapper
            openfile = New Forms.OpenFileDialog
            ' set the initial directory to the system drive
            init_dir = System.Environment.SystemDirectory()
            init_dir = Left(init_dir, InStr(init_dir, "\", CompareMethod.Text))
            openfile.InitialDirectory = init_dir
            If openfile.ShowDialog(winptr) = result.OK Then
                ' create the output file name
                outfile_name = Right(openfile.FileName, Len(openfile.FileName) - Len(System.Environment.CurrentDirectory) - 1)
                outfile_name = Left(outfile_name, InStr(outfile_name, ".", CompareMethod.Text) - 1)
                outfile_name = outfile_name & ".dll"
                ' set the output directory to the PublicAssemblies dir
                outdirectory = Left(DTE.FullName, InStr(DTE.FullName, "devenv.exe", CompareMethod.Text) - 1)
                outdirectory = outdirectory & "PublicAssemblies\"
                ' locate sn.exe via the .NET Framework SDK registry key
                snPath = Microsoft.Win32.Registry.LocalMachine.OpenSubKey("software\microsoft\.NetFramework")
                If Not snPath Is Nothing Then
                    stemp = snPath.GetValue("sdkinstallroot", "") & "bin"
                End If
                If stemp = "bin" Then
                    MsgBox("Unable to get sn.exe location from registry")
                    Exit Sub
                End If
                Dim strsnkFileName As String
                strsnkFileName = openfile.FileName.Replace(".dll", ".snk")
                Microsoft.VisualBasic.ChDir(stemp)
                ' shell out to run sn.exe and generate the key file
                Microsoft.VisualBasic.Shell("cmd /c sn.exe -k """ & strsnkFileName & """", AppWinStyle.NormalFocus, True)
                ' add the snk file to the project
                DTE.ItemOperations.AddExistingItem(strsnkFileName)
                ' navigate to the AssemblyInfo file
                DTE.Windows.Item("AssemblyInfo.cs").Activate()
                ' update the assembly file with the key file location
                Dim ts As TextSelection = DTE.ActiveWindow.Selection
                ts.SelectAll()
                Dim strNewStrongNameLoc As String
                strNewStrongNameLoc = "[assembly: AssemblyKeyFile(@" & Chr(34) & strsnkFileName & Chr(34) & ")]"
                ts.ReplacePattern("[assembly: AssemblyKeyFile("""")]", strNewStrongNameLoc)
            End If
        Catch err As System.Exception
            MsgBox(err.Message)
        End Try
    End Sub
End Module

Sunday, January 13, 2008

A New Member of the T-SQL Family–The MERGE Statement

One revolutionary addition is the new MERGE statement. On par with other core T-SQL CRUD features such as INSERT, SELECT, UPDATE, and DELETE, the MERGE statement is an ISO SQL:2003-compliant command that is primarily intended to handle what many database users refer to as "UPSERT" functionality. For example, say you have an application where you either need to log a new entry for something that hasn't been added to your system, or update it if it has been added previously. Without the MERGE statement you must either run a SELECT statement to see if a row has already been logged and then UPDATE or INSERT depending on whether it's there, or you can try to UPDATE first and then INSERT if the UPDATE doesn't affect any rows. With the MERGE statement you can do this all in one fell swoop. Here's a rather extensive example of the syntax:

MERGE Production.ProductInventory AS pi
USING (SELECT ProductID, SUM(OrderQty) FROM Sales.SalesOrderDetail sod
JOIN Sales.SalesOrderHeader soh
ON sod.SalesOrderID = soh.SalesOrderID
AND soh.OrderDate = GETDATE()
GROUP BY ProductID) AS src (ProductID, OrderQty)
ON (pi.ProductID = src.ProductID)
WHEN MATCHED AND pi.Quantity - src.OrderQty <> 0
THEN UPDATE SET pi.Quantity = pi.Quantity - src.OrderQty
WHEN MATCHED AND pi.Quantity - src.OrderQty = 0
THEN DELETE;

Friday, January 11, 2008

Serialization

Serialization is the process of converting an object into a stream of bytes. We generally perform serialization for persisting an object and saving its state. The persisted location can be a file or a database. In the .NET Framework, when an object is serialized it not only persists the data, but also persists some metadata: a serialized object also carries its type information, assembly information, etc. De-serialization is the reverse process of serialization, where a serialized object is again made into an active object.

Different Types of encoding

A .NET object can be serialized with different types of encodings which are:

Binary Serialization – The encoding used for this type of serialization is binary encoding. With this serialization, when an object is serialized, all the public and private fields are serialized. This means that an exact binary copy of the object is replicated, which brings about the concept of cloning an object (using binary serialization we can clone an object; see the sketch below). One of the major advantages of binary serialization is performance: the serialization and de-serialization cost is minimal. However, binary serialization is not easily portable, especially across platforms.
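As a minimal sketch of the cloning idea (the Account class is hypothetical; the type must be marked [Serializable]):

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Account
{
    public string Owner;
    private decimal balance = 100m;   // private fields are serialized too
    public decimal GetBalance() { return balance; }
}

class BinaryDemo
{
    static void Main()
    {
        Account a = new Account();
        a.Owner = "Alice";

        BinaryFormatter formatter = new BinaryFormatter();
        MemoryStream stream = new MemoryStream();
        formatter.Serialize(stream, a);       // object -> bytes, private state included

        stream.Position = 0;
        Account clone = (Account)formatter.Deserialize(stream); // bytes -> clone
        Console.WriteLine("{0}, {1}", clone.Owner, clone.GetBalance());
    }
}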

XML Serialization – XML serialization serializes only the public properties and fields and does not touch the private members. Note that type fidelity is also not preserved: when using XML serialization we cannot guarantee the original state of the objects. The primary purpose of XML serialization is the conversion of XML documents into .NET objects and of .NET objects into XML documents. In spite of having many issues, the main reason XML serialization is used is its support across platforms. In .NET 2.0, the XmlSerializer class takes care of the serialization. It has two methods, Serialize and Deserialize, for serializing and de-serializing (see the sketch below).
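A minimal sketch of round-tripping an object with XmlSerializer (the Person class is hypothetical):

using System;
using System.IO;
using System.Xml.Serialization;

public class Person
{
    public string Name;               // public members are serialized
    private int age = 30;             // private members are ignored by XmlSerializer
    public int GetAge() { return age; }
}

class XmlDemo
{
    static void Main()
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Person));

        Person p = new Person();
        p.Name = "Alice";

        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, p);                   // object -> XML
        string xml = writer.ToString();
        Console.WriteLine(xml);                            // note: no trace of the private field

        Person copy = (Person)serializer.Deserialize(new StringReader(xml)); // XML -> object
        Console.WriteLine(copy.Name);
    }
}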

SOAP Serialization
– SOAP serialization is a special form of XML serialization, but conforms to the SOAP specification. SOAP is a protocol based on XML designed for transporting method calls over the web. The .NET Framework supports the serialization that conforms to the SOAP standards. We can use the XmlSerializer class that can serialize classes in SOAP standards. SOAP is an example of custom serialization. We will see more about basic and custom serialization in the next sections.

Designer Serialization – Designer serialization is a special form of serialization which involves object persistence usually associated with development tools. Designer serialization is generally used in cases of displaying graphs, designer tools, etc. The designer shown in Visual Studio is a type of designer serialization. The emphasis of designer serialization is on the object's exact state with respect to other objects, visually. It always helps if the designer serialization format is human readable.

COM+ Components

COM+ is a programming model based on the Component Object Model (COM). COM+ does not replace COM; in fact, COM+ technology uses COM, extending its features to provide enterprise-level services.

COM was one of the very first methodologies to facilitate object-oriented software implementation. It is mainly used to encapsulate business tasks and logic into one business object, called a COM component, in a distributed environment where it can be shared and accessed by different applications.

In a real-world scenario, more than one COM component works together to achieve a larger task. To manage integrity and security among the different components, COM uses another Microsoft service, called Microsoft Transaction Server (MTS).

In other words, COM+ is nothing but a combination of both COM and MTS: COM+ = COM + MTS. COM+ was first introduced in Microsoft Windows 2000 Server. COM+ also includes additional features like object pooling, load balancing, component queuing, etc. These services are known as the COM+ Component Services. When multiple components are grouped together, the group is referred to as a COM+ application.

COM+ Component Service using .NET

Microsoft Visual Studio comes with the libraries and classes through which we can create, install and consume COM+ components. In fact, .NET has been designed to offer COM+ services through these classes, which makes the job easier. Once a COM+ component is installed correctly it can be used by a variety of applications (including web applications), and these applications are termed clients of the COM+ application. Components developed in .NET are called managed components.

Building a managed assembly in VB 2005 for COM+ Service

Building a managed component is nothing but developing an assembly using a VS 2005 "Class Library" project. After getting an error-free DLL, we will set it up to make it accessible in the COM+ environment. Let us proceed with building the managed component step by step. I have used VB 2005 for this example.

Create a library project to build a valid serviced component assembly

Open the Visual Studio 2005 IDE. Create a class library project as follows:

File >> New >> Projects

Choose Visual Basic in Project Types and "Class Library" in the project templates.

Give the project name as COMPlusObject and click OK.

It will create the COMPlusObject project containing a default class Class1.vb. Rename this class to COMService.vb.

In this class we need to import the System.EnterpriseServices namespace; before doing that, add a reference to System.EnterpriseServices in the project:

Project >> Add Reference >> .NET >> Choose System.EnterpriseServices >> OK
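As a rough sketch (the method body is hypothetical), COMService.vb might then look like this; a serviced component must inherit from ServicedComponent:

Imports System.EnterpriseServices

Public Class COMService
    Inherits ServicedComponent

    Public Function GetMessage() As String
        Return "Hello from a COM+ serviced component"
    End Function
End Class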

Now it is time to set up this assembly in the COM+ context. To achieve this, we need to do two things.

1. Install the assembly in Global Assembly Cache (GAC).

2. Install the assembly in the Component Services list so it can run as a COM+ component (see the regsvcs note after the GAC steps).

Install the assembly in GAC

If you want to place the assembly in the shared location, the GAC, you can follow these steps.

Step 1: Open the Visual Studio 2005 Command Prompt and go to the bin directory of the project.

C:\COMPlusObject\COMPlusObject\bin>

Step 2: Execute the following command to install the assembly in GAC.

C:\COMPlusObject\COMPlusObject\bin> gacutil /i Release/COMPlusObject.dll
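For the second step, the assembly is typically registered with Component Services using the .NET Services Registration Utility, regsvcs.exe (shown here against the same path):

C:\COMPlusObject\COMPlusObject\bin> regsvcs Release/COMPlusObject.dll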

Common Type System

1. CTS provides the base set of data types responsible for cross-language integration.

2. Most languages use aliases to implement those types (e.g., C# implements int as an alias for Int32).

3. CTS types are categorized into two kinds, called value types and reference types.

4. Each object in the hierarchy shares a common interface.

5. Ensures type-safe code.

Type-safe code means that, given a valid object reference, the code can access memory only at fixed offsets corresponding to actual field members.

6. Defines rules that languages must follow, which helps ensure that objects written in different languages can interact with each other.

7. Set of Base Types is called CTS.

8. Multiple Inheritance is not allowed in .NET and this can be treated as one of the rules.

9. System.Object is the common base type from which all other types are derived.

10. CTS standardizes the conventions that all the languages must follow.

11. CTS is responsible for defining types that can be used across the .NET Languages.

12. Variables that are Value Type have their own copy of the data.

13. Value types and reference types are all derived from the type System.Object.

14. The Common Language Specification is a subset of the Common Type System. (It defines the language rules to be followed.)

15. CTS provides the types that are available to be used by programs in .NET, and the CLS specifies how those types are to be used in a consistent manner to ensure compatibility with other languages.
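A minimal sketch contrasting points 12 and 13, value-type vs. reference-type behavior:

using System;

struct PointValue { public int X; }   // value type: derives from System.ValueType
class PointRef { public int X; }      // reference type: derives from System.Object

class CtsDemo
{
    static void Main()
    {
        PointValue v1; v1.X = 1;
        PointValue v2 = v1;        // copies the data
        v2.X = 99;
        Console.WriteLine(v1.X);   // 1  -- v1 keeps its own copy

        PointRef r1 = new PointRef(); r1.X = 1;
        PointRef r2 = r1;          // copies the reference
        r2.X = 99;
        Console.WriteLine(r1.X);   // 99 -- both variables refer to the same object

        object boxed = v1;         // both kinds ultimately derive from System.Object
        Console.WriteLine(boxed.GetType().Name);
    }
}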

Wednesday, January 9, 2008

Creating a DataTableReader with more than one DataTable

One of the nice features of the DataTableReader is that it can contain more than one DataTable, as read-only, forward-only result sets. When you load more than one DataTable into a DataTableReader, it is really fast to iterate, and it automatically deals with rows that are deleted or modified during the iteration. You can load a bunch of DataTables by constructing the DataTableReader with an array of DataTables. Let us see sample code to create it.

private void FetchTwoDataTablesInDataTableReader()
{
    string sql = "Select * from Customers";
    SqlDataAdapter da = new SqlDataAdapter(sql, "YourConnectionString");
    DataTable dtCus = new DataTable();
    da.Fill(dtCus);

    string sql1 = "Select * from Country";
    SqlDataAdapter da1 = new SqlDataAdapter(sql1, "YourConnectionString");
    DataTable dtCountry = new DataTable();
    da1.Fill(dtCountry);

    // Wrap both tables in a single reader; NextResult() moves between them
    DataTableReader dtr = new DataTableReader(new DataTable[] { dtCus, dtCountry });
    if (dtr.HasRows)
    {
        do
        {
            while (dtr.Read())
            {
                Response.Write(dtr[1].ToString() + "<br/>");
            }
        } while (dtr.NextResult());
    }
    else
        Response.Write("No Data");
}

Binding DataTableReader with GridView control


Another great feature of the DataTableReader class is that you can use it to populate DataSet or DataTable information into an ASP.NET 2.0 GridView control in a very easy way. This is done by using the DataTable's Load method, which accepts a DataTableReader. Let us see the sample code for this.


private void LoadGridViewWithDataTableReader()
{
    DataSet ds = new DataSet();

    string sql = "Select * from Customers";
    SqlDataAdapter da = new SqlDataAdapter(sql, "YourConnectionString");
    DataTable dtCus = new DataTable();
    da.Fill(dtCus);
    ds.Tables.Add(dtCus);

    string sql1 = "Select * from Country";
    SqlDataAdapter da1 = new SqlDataAdapter(sql1, "YourConnectionString");
    DataTable dtCountry = new DataTable();
    da1.Fill(dtCountry);
    ds.Tables.Add(dtCountry);

    // Load the first table of the DataSet into a new DataTable via a DataTableReader
    DataTable dtGrid = new DataTable();
    DataTableReader dtr = new DataTableReader(ds.Tables[0]);
    dtGrid.Load(dtr);

    GridView1.DataSource = dtGrid;
    GridView1.DataBind();
}

DataTableReader in ADO.NET 2.0

The DataTableReader obtains the contents of one or more DataTable objects in the form of one or more read-only, forward-only result sets. As the name suggests, it is a combination of both DataTable and SqlDataReader. In a DataTable, we can store the records of a single database table, with all constraints, in a mode disconnected from the database server, and we can perform all sorts of database manipulations on it. A SqlDataReader can contain a single database table's records, as a read-only and forward-only result set, for which we need an active connection to the database server; we cannot perform other database manipulations through a SqlDataReader. A DataTableReader can contain more than one DataTable, in a disconnected mode, as read-only and forward-only result sets.

Advantages of using DataTableReader:

A SqlDataReader is much faster than a DataSet and consumes less memory. But the major drawback of using a SqlDataReader is that it always requires an open connection to operate; that is, it is connection oriented. Hence we need to explicitly close the database connection when we are done using it.

In ADO.NET 2.0, the DataTableReader class has been developed along similar lines but with one exception – it works in a disconnected mode, so there is no database server connection to open and close at all. The iteration of rows is done from the cache. The cached data can be modified while the DataTableReader is active, and the reader automatically maintains its position.

Creating a simple DataTableReader

A DataTableReader can be created from any DataTable's CreateDataReader method. Let us see the syntax to create a simple DataTableReader and iterate over the records in it.

private void FetchDataTableReader()
{
    string sql = "Select * from Customers";
    SqlDataAdapter da = new SqlDataAdapter(sql, "YourConnectionString");
    DataTable dt = new DataTable();
    da.Fill(dt);
    DataTableReader dtr = dt.CreateDataReader();
    if (dtr.HasRows)
    {
        while (dtr.Read())
        {
            Response.Write(dtr["Cus_Name"].ToString() + "<br/>");
        }
    }
    else
        Response.Write("No Data");
}

Sunday, January 6, 2008

What is a web service

Web services make software functionality available over the Internet so that programs like PHP, ASP, JSP, JavaBeans, the COM object, and all our other favorite widgets can make a request to a program running on another server (a web service) and use that program’s response in a website, WAP service, or other application.

So when I need to get some programming task done, and am too busy (or not crazy enough) to attempt it myself, I can make use of a web service by calling it over the Internet. By passing parameter data with the request, I can expect to receive a response containing the result generated by the web service.

Anyone who has used Hotmail recently has had a brush with web services: the passport authentication system is one of the web services in Microsoft’s .NET initiative, and is available without charge for the moment, so developers can use passport authentication within their own sites.

The principles behind web services are stunningly simple, and are nothing new in the world of distributed computing and the Internet:

- the web service provider defines a format for requests for its service and the response the service will generate
- a computer makes a request for the web services across the network
- the web service performs some action, and sends the response back

This action might be retrieving a stock quote, finding the best price for a particular product on the net, saving a new meeting to a calendar, translating a passage of text to another language, or validating a credit card number.

Saturday, January 5, 2008

Using the Domain Objects Persistence Pattern in .NET

Domain objects in an application represent the core data and business validation rules relating to it. And, domain objects are usually central to the entire application and used by most subsystems. Therefore, their good design is critical for a good application design that is robust, high performing, and yet flexible.

When it comes to developing object-oriented applications that use relational databases, the domain object design has to be consistent with the database design. This makes the domain objects easier to understand, because they represent real-life "entities" and their relationships with each other. Therefore, in many situations, the domain objects are "mapped" to the relational database tables and the relationships between tables. However, it is very easy to get this mapping wrong and end up with an undesirable domain object design. A good design for domain objects requires a solid understanding of object-oriented and relational fundamentals on the part of developers.

The Domain Objects Persistence Pattern attempts to provide a solution for mapping domain objects to relational databases in a way that decouples the domain objects from the persistence logic. The domain objects in this pattern are unaware of the objects that persist them, because the dependency is only one-way (from persistence objects to domain objects). This makes the domain object design much simpler and easier to understand. It also hides the persistence objects from the other subsystems in the application that are using the domain objects. This also works in distributed systems, where only the domain objects are passed around. In this context, an attempt is made to incorporate the Factory Pattern into this pattern to help decouple domain objects and persistence logic.



Problem Definition

Domain objects form the backbone of any application. They capture the core data model from the database and also the business rules that apply to this data. It is very typical for most subsystems of an application to rely on these common domain objects. This means that the closer the domain objects map to the data model in the database, the easier it is for the application developers to understand and use them because they mimic real-life "entities" and "relationships" as represented in the database.

If domain objects are not separated from the rest of the application, we end up with duplication of code everywhere. Similarly, if domain objects are not separated from the persistence code, we face situations where any subsystem using the domain objects also knows and depends on the persistence objects. And, any change in persistence objects affects the entire application, hence a bad design.


Solution
One way to achieve the above mentioned goals is to separate the domain objects into a separate subsystem and let the entire application use them wherever it needs domain data. Additionally, we should separate the domain objects from the persistence code. This double decoupling allows us, on one hand, to avoid code duplication and, on the other, to hide the persistence details from the domain objects and make them more flexible in case they need to change. The domain objects and the rest of the application are totally unaffected whether the data is coming from a relational database or any other source (e.g. XML, flat files, or Active Directory/LDAP).

In separating the persistence logic from domain objects, we ensure that the domain objects have no dependency on the persistence code. This allows the domain objects to become available in environments where we don't even want to expose our persistence code.


Sample Code

In this sample, we will look at a Customer object from the Northwind database mapped to the "Customers" table in the database.

public class Customer {
    // Private data members
    String _customerId;
    String _companyName;
    String _contactName;
    String _contactTitle;

    public Customer() {}

    // Properties for Customer object
    public String CustomerId {
        get { return _customerId; } set { _customerId = value; }
    }

    public String CompanyName {
        get { return _companyName; } set { _companyName = value; }
    }

    public String ContactName {
        get { return _contactName; } set { _contactName = value; }
    }

    public String ContactTitle {
        get { return _contactTitle; } set { _contactTitle = value; }
    }
}

public interface ICustomerFactory
{
// Standard transactional methods for single-row operations
void Load(Customer cust);
void Insert(Customer cust);
void Update(Customer cust);
void Delete(Customer cust);

// Query method to return a collection
ArrayList FindCustomersByState(String state);

}

public class CustomerFactory : ICustomerFactory
{
    // Standard transactional methods for single-row operations
    public void Load(Customer cust) { /* Implement here */ }
    public void Insert(Customer cust) { /* Implement here */ }
    public void Update(Customer cust) { /* Implement here */ }
    public void Delete(Customer cust) { /* Implement here */ }

    // Query method to return a collection
    public ArrayList FindCustomersByState(String state) { /* Implement here */ return null; }
}
Below is an example of how a client application will use this code.

public class NorthwindApp
{
static void Main (string[] args) {
Customer cust = new Customer();
CustomerFactory custFactory = new CustomerFactory();

// Let's load a customer from Northwind database.
cust.CustomerId = "ALFKI";
custFactory.Load(cust);

// Pass on the Customer object
FooBar(cust);

// custList is a collection of Customer objects
ArrayList custList = custFactory.FindCustomersByState("CA");
}
}

As you can see above, the Load method loads the Customer object from the database based on the CustomerId. Once the Customer is loaded, it can be passed on to any subsystem in the application without exposing the persistence code. Similarly, if you get an ArrayList of Customer objects, you can pass on the ArrayList, which has no persistence-code dependency.

Thursday, January 3, 2008

With Check Option

This clause is very important because it prevents changes that do not meet the view's criteria.

Example: Create a view on the pubs database for the table authors that shows the name, phone number and state of all authors from California. This is very simple:

CREATE VIEW dbo.AuthorsCA
AS
SELECT au_id, au_fname, au_lname, phone, state, contract
FROM dbo.authors
WHERE state = 'ca'

This is an updatable view and a user can change any column, even the state column:

UPDATE AuthorsCA SET state='NY'

After this update there will be no authors from California. This might not be the desired behavior.

Example: Same as above but the state column cannot be changed.

CREATE VIEW dbo.AuthorsCA2
AS
SELECT au_id, au_fname, au_lname, phone, state, contract
FROM dbo.authors
WHERE state = 'ca'
With Check Option

The view is still updatable, but any update that would move a row outside the view's criteria, such as changing the state column, is rejected:

UPDATE AuthorsCA2 SET state='NY'

This will cause an error and the state will not be changed.

Encrypting Configuration Information in ASP.NET 2.0 Applications

The .NET Framework 2.0 libraries include the capability to encrypt almost any configuration section within the Web.config or machine.config files. Configuration sections are those XML elements that are children of the <configuration> or <system.web> elements.

Encryption Options
Protecting configuration sections in ASP.NET 2.0 uses the provider model, which allows for any implementation to be seamlessly plugged into the API. The .NET Framework 2.0 ships with two built-in providers for protecting configuration sections:

The Windows Data Protection API (DPAPI) Provider (DataProtectionConfigurationProvider) - this provider uses the built-in cryptography capabilities of Windows to encrypt and decrypt the configuration sections. By default this provider uses the machine's key. You can also use user keys, but that requires a bit more customization. Since the keys are machine- or user-specific, the DPAPI provider does not work in settings where you want to deploy the same encrypted configuration file to multiple servers.
RSA Protected Configuration Provider (RSAProtectedConfigurationProvider) - uses RSA public key encryption to encrypt/decrypt the configuration sections. With this provider you need to create key containers that hold the public and private keys used for encrypting and decrypting the configuration information.

Programmatically Encrypting Configuration Sections
The System.Configuration.SectionInformation class abstractly represents a configuration section. To encrypt a configuration section simply use the SectionInformation class's ProtectSection(provider) method, passing in the name of the provider you want to use to perform the encryption. To access a particular configuration section in your application's Web.config file, use the WebConfigurationManager class (in the System.Web.Configuration namespace) to reference your Web.config file, and then use its GetSection(sectionName) method to return a ConfigurationSection instance. Finally, you can get to a SectionInformation object via the ConfigurationSection instance's SectionInformation property.

private void ProtectSection(string sectionName,
string provider)
{
Configuration config =
WebConfigurationManager.
OpenWebConfiguration(Request.ApplicationPath);

ConfigurationSection section =
config.GetSection(sectionName);

if (section != null &&
!section.SectionInformation.IsProtected)
{
section.SectionInformation.ProtectSection(provider);
config.Save();
}
}

private void UnProtectSection(string sectionName)
{
Configuration config =
WebConfigurationManager.
OpenWebConfiguration(Request.ApplicationPath);

ConfigurationSection section =
config.GetSection(sectionName);

if (section != null &&
section.SectionInformation.IsProtected)
{
section.SectionInformation.UnprotectSection();
config.Save();
}
}
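For example, the two helper methods above could be called like this to encrypt the connectionStrings section with the DPAPI provider and later return it to plain text (provider name as registered in machine.config):

ProtectSection("connectionStrings", "DataProtectionConfigurationProvider");
// ...
UnProtectSection("connectionStrings");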

Partial Classes

Partial classes allow you to have multiple pieces of a class definition. Functionally, partial classes are not at all different from classes written as full classes, which means you can have one single class definition or a class written as a few distinct parts without breaking existing functionality.

How Partial Classes Work

The basic idea of a partial class is very easy to understand. You keep the definition of a class split into pieces, and during compilation the compiler finds all of the pieces of the class and lumps them together. Because the compiler removes the separation, there is functionally no difference between a class made of partial classes and a whole class.

Creating Partial Classes
Partial classes allow you to separate the code for a class into multiple class definitions. So as an example I will write a few class definitions: one will be a whole definition of a class without using partial classes, and the other will be the same class split into two partial classes. My single-definition class is AIPlayer.

Listing 1: A Standard Class

public class AIPlayer
{
public AIPlayer()
{
// Construct your class here.
}
public Move GetMove()
{
// Choose the best move and return it.
}
}


Listing 2: A Class Split Into Two Partial Classes

public partial class AIPlayer
{
public AIPlayer()
{
// Construct your class here.
}
}
public partial class AIPlayer
{
public Move GetMove()
{
// Choose the best move and return it.
}
}

Without having the partial keyword in there I would probably get a compiler error. The reason for this is that I would be declaring two different classes in the same namespace with the same name, and that would generate the error. Since I have the partial keyword there, the compiler knows that I am just extending the existing class. It will take that into account and will combine the code before trying to compile it.

As a note, make sure to use the partial keyword on ALL definitions of the class, even the original one. This is not necessary in VB, but in C# all of the parts must have the partial keyword. I think it is great that this is required, because it is a hint to the programmer that there might be another piece of the class somewhere else. If you ever see the partial keyword on a class you're working with, it is a nice warning that there might be more to the class elsewhere.

When to Use Partial Classes
There are plenty of cases where a partial class can assist you in developing your applications. Most people have already seen them and might not have noticed them in ASP.NET pages, because you don't actually need to know that it is using these partial classes in order to use them. They're also quite popular being used alongside generated code, because it lets people modify the generated code. Sometimes people will also use it when working in larger groups who will not be committing code often. This is because it lets them make large amounts of changes without having painful merges later.

Send emails with dynamic contents through asp.net 2.0.

These two namespaces play a major role in sending emails with dynamic content:

System.Web.Mail Namespace

The System.Web.Mail namespace contains classes that enable you to construct and send messages using the CDOSYS (Collaboration Data Objects for Windows 2000) message component. The mail message is delivered either through the SMTP mail service built into Microsoft Windows 2000 or through an arbitrary SMTP server. The classes in this namespace can be used from ASP.NET or from any managed application. (Note that in .NET 2.0 this namespace is marked obsolete in favor of System.Net.Mail, but it still works.)

System.IO Namespace
The System.IO namespace contains types that allow reading and writing to files and data streams, and types that provide basic file and directory support.
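As a minimal sketch of putting the two namespaces together, the following reads a dynamic body from a template file and sends it (the file path, addresses, and SMTP server are hypothetical):

using System;
using System.IO;
using System.Web.Mail;

class MailDemo
{
    static void Main()
    {
        // Build the dynamic body from a template on disk (System.IO);
        // the template is assumed to contain a {0} placeholder
        string template = File.ReadAllText(@"C:\templates\welcome.txt");
        string body = String.Format(template, "Alice");

        // Construct and send the message (System.Web.Mail)
        MailMessage msg = new MailMessage();
        msg.From = "noreply@example.com";
        msg.To = "alice@example.com";
        msg.Subject = "Welcome";
        msg.BodyFormat = MailFormat.Html;
        msg.Body = body;

        SmtpMail.SmtpServer = "localhost";
        SmtpMail.Send(msg);
    }
}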