
Thursday, November 29, 2007

ASP.NET 2.0 Data Source Controls

ASP.NET 2.0 has the following data source controls:

SqlDataSource
AccessDataSource
ObjectDataSource
DataSetDataSource
XmlDataSource
SiteMapDataSource

SqlDataSource: Designed to work with SQL Server databases. It uses the SQL Server .NET data provider internally; the provider's classes are defined in the System.Data.SqlClient namespace. The data source is SQL Server.

AccessDataSource: Designed to work with Microsoft Access. It uses the OleDb data provider internally; the provider's classes are defined in the System.Data.OleDb namespace. The data source is Microsoft Access.

ObjectDataSource: Designed to work with objects. The data source is an object (usually a class that provides a well-defined structure).

XmlDataSource: Designed to work with XML documents. The data source is an XML document.

SiteMapDataSource: Designed to work with SiteMap objects, which is a new concept in ASP.NET 2.0. The data source is SiteMap.

DataSetDataSource: Designed to work with DataSet objects. The data source is a DataSet.
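
As a quick illustration of the declarative model (the connection string name and query below are placeholders), a SqlDataSource can be wired to a GridView entirely in markup:

<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:NorthwindConnection %>"
    SelectCommand="SELECT ProductID, ProductName FROM Products" />

<asp:GridView ID="GridView1" runat="server" DataSourceID="SqlDataSource1" />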

Tuesday, November 27, 2007

New Caching Features in ASP.NET 2.0

The four main new features are:

1.) SQL Cache Invalidation
2.) Post-Cache Substitution
3.) Fragment Caching API
4.) Cache Configuration

SQL Cache Invalidation:

Let us assume you've got a web page which queries data from a database. If this data doesn't change very often, then there's really no need to query the database each time a user requests the page, so we decide to cache the data. The problem then becomes: how long do we cache the data for? If we cache it for too short a period, we make our web server work harder than it needs to. In the larger scheme of things, this increases our operating costs, since the server won't be able to handle as many requests and we'll need to add another server sooner than we really should. If, on the other hand, we cache the data for too long a period, we risk presenting users with out-of-date information. How big a problem this is depends on the actual application and the data being cached, but it's generally not good to show users stale data.

So finding the optimal length of time to cache a certain piece of data is not an easy task. It depends on many factors, including how quickly the application needs to respond to users, the number of users it needs to support, how frequently the data in the database changes, how quickly those changes must be reflected in the web pages, and the potential consequences of displaying old data.

What if, instead of having to continually check whether there have been any changes, we could simply ask the database to tell us when a change has been made? This is where SQL cache invalidation comes in. Instead of just picking a length of time to cache our data for, with ASP.NET 2.0's SQL cache invalidation we can set things up so that when the data is changed in the database, the cached version of the data is automatically cleared.
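
As a rough sketch of the setup (database, table, and connection string names are invented for illustration): the database is first prepared for notifications with aspnet_regsql.exe, the database is registered under <sqlCacheDependency> in Web.config, and the page's OutputCache directive then names the database:table pair to watch.

aspnet_regsql.exe -S localhost -E -d Northwind -ed
aspnet_regsql.exe -S localhost -E -d Northwind -et -t Products

<!-- Web.config -->
<system.web>
  <caching>
    <sqlCacheDependency enabled="true" pollTime="10000">
      <databases>
        <add name="Northwind" connectionStringName="NorthwindConnection" />
      </databases>
    </sqlCacheDependency>
  </caching>
</system.web>

<%@ OutputCache Duration="3600" VaryByParam="none" SqlDependency="Northwind:Products" %>

With this in place, the cached page is evicted as soon as the Products table changes, rather than only when the hour expires.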

Post-Cache Substitution

Post-cache substitution is for the situation where almost everything on a page can be cached except for one or two little exceptions that must be handled dynamically. In ASP.NET 1.x, the only way to handle this type of scenario was to split the page up into cacheable sections and then turn those into user controls. It worked, but it could be really confusing because you had to reverse your thinking about the problem. It was no longer "let's cache everything but this one little section"; instead it became "let's find everything we need to cache and turn it into a user control so we can cache it." All of a sudden your page is split into ten different user controls and everything gets complicated, simply because we wanted to do something like put a current timestamp at the bottom of the page.
Post-cache substitution is exactly what it says it is. We take something that has already been cached and simply substitute some dynamic data back into it. In effect, we are caching the whole page and then executing just the one little part that we didn't cache.
There are two ways to implement post-cache substitution: you can use either the Response.WriteSubstitution method or the <asp:Substitution> control.
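
A minimal sketch of the declarative form (control and method names are illustrative): the Substitution control points at a static method with the HttpResponseSubstitutionCallback signature, and that method runs on every request even though the rest of the page comes from the cache.

<%@ OutputCache Duration="60" VaryByParam="none" %>
...
<asp:Substitution ID="TimeStamp" runat="server" MethodName="GetCurrentTime" />

// Code-behind: the callback must be static, take an HttpContext, and return a string
public static string GetCurrentTime(HttpContext context)
{
    return DateTime.Now.ToString("F");
}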

Fragment Caching API

The method used most often to cache sections of a page is called fragment caching. Fragment caching is what I described earlier: you move the sections to be cached into user controls and then set the OutputCache directive at the top of each control. This works fine in all versions of ASP.NET, but in ASP.NET 2.0 we now have access to the fragment caching API. This means that we are no longer stuck choosing a fixed number of minutes to cache the control; now we can programmatically adjust the caching options.
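
For instance (a sketch, assuming a user control that already carries an OutputCache directive or PartialCaching attribute), the user control's CachePolicy property exposes a ControlCachePolicy object whose settings can be changed at runtime:

// Inside the user control's code-behind
protected void Page_Init(object sender, EventArgs e)
{
    // Cache this fragment for 10 minutes...
    this.CachePolicy.Duration = TimeSpan.FromMinutes(10);
    // ...and keep a separate cached copy per browser type
    this.CachePolicy.SetVaryByCustom("browser");
}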

Cache Configuration

There have been two main advances in this area: the ability to create and use cache profiles and to set cache properties via your application's configuration files.

Cache Profiles :
In ASP.NET 1.x you needed to set the length of time for which a page should be cached via the OutputCache directive at the top of the file. This made changing caching configuration relatively difficult, because to change the setting you had to modify it in each file that implemented caching.
In ASP.NET 2.0, you can define what are called cache profiles: named sets of cache settings defined in your web.config file. If you find you need to change one of the profiles, all you need to do is edit the profile in the config file, and the change is picked up by every page using that profile.
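
A sketch of what that looks like (the profile name is arbitrary):

<!-- Web.config -->
<system.web>
  <caching>
    <outputCacheSettings>
      <outputCacheProfiles>
        <add name="ShortLived" duration="60" varyByParam="none" />
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>

<%@ OutputCache CacheProfile="ShortLived" %>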

Cache Configuration via Config Files :
You can now modify caching parameters via ASP.NET's configuration files. You can enable or disable output and fragment caching, modify a number of parameters, and even specify how much memory the system should allow caching to use.

Deploying a web part to the virtual server gallery

These steps are specific to Windows SharePoint Server 2003 and a web part developed in Visual Studio .NET 2003:

1.) Open the web part project in Visual Studio.

2.) Choose File -> Add Project -> New Project and add a Setup and Deployment project to the solution.

3.) Select the setup project in Solution Explorer and choose Project -> Add -> Project Output.

4.) Select Primary Output and Content Files from the web part project and choose OK; Visual Studio adds those items to the setup project.

5.) Choose Build -> Rebuild Solution. Visual Studio rebuilds the web part assembly and packages the assembly and content files into a CAB file.

6.) Copy the resulting CAB file to the web parts folder.

7.) Run stsadm.exe to install the CAB file on the server.

You can find stsadm.exe at the following location:

C:\Program Files\Common Files\Microsoft Shared\web server extensions\60\BIN\

You can run the following command from a command prompt:

stsadm -o addwppack -filename "c:\inetpub\wwwroot\Calendar.CAB"

This installs the Calendar web part to the "bin" directory of the virtual server, and it becomes available to drag and drop in SharePoint Server 2003 under the Virtual Server Gallery section.

To remove a web part from the list, run the following command at the command prompt:

stsadm -o deletewppack -name "Calendar.CAB"

Impersonation with ASP.NET 2.0

Impersonation:

Impersonation is the process of executing code in the context of another user identity. By default, all ASP.NET code is executed using a fixed machine-specific account. To execute code under another identity we can use the built-in impersonation capabilities of ASP.NET: either a predefined user account, or the user's own identity if the user has already been authenticated using a Windows account.

We can use impersonation in these two scenarios:

1.) To give each web application different permissions.
2.) To use existing Windows user permissions.

These two scenarios are fundamentally different. In the first, impersonation defines a single, specific account. In this case, no matter which user accesses the application, and no matter what type of user-level security you use, the code runs under the account you've set.
In the second, the user must be authenticated by IIS. The web-page code then executes under the identity of the appropriate user.

Implement Impersonation:

Impersonate the Microsoft IIS Authenticated Account or User: To impersonate the IIS authenticated user on every request for every page in an ASP.NET application, include an <identity> tag in the Web.config file of the application and set the impersonate attribute to true.

Impersonate a Specific User: To impersonate a specific user for all requests on all pages of an ASP.NET application, specify the userName and password attributes in the <identity> tag of the Web.config file for that application.
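
Both variants in Web.config form (the account name and password are placeholders):

<!-- Impersonate the IIS-authenticated user -->
<identity impersonate="true" />

<!-- Impersonate a specific, fixed account -->
<identity impersonate="true" userName="DOMAIN\WebAppUser" password="p@ssw0rd" />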

Monday, November 26, 2007

ASP.NET 2.0's Provider Model

The provider design pattern is used throughout ASP.NET 2.0. The beauty of the provider model is that the customer implementing the solution can specify a custom class that the system should use. This custom class must implement the system's well-defined API, but it allows any custom implementation to be seamlessly plugged in. That is, once this API is defined, the system implementor can create a default concrete implementation - one that uses SQL Server and a Users table - that most customers can use without modification. Those customers with a custom need - those who want to use Oracle, or who have user data stored in some other manner - can create classes that provide the necessary functionality and plug them into the system.

The Benefits of the Provider Model:

1.) There is a clean separation between the code and the backend implementation. Regardless of whether the code to authenticate a user runs against a SQL Server 2000 database's Users table or against an Active Directory store, the code from the page developer's perspective is the same: DataProvider.Instance().AuthenticateUser(username, password). Backend implementation changes are transparent.

2.) Since system architects are strongly encouraged to create a default concrete implementation, the provider model offers the best of both worlds: for those who are content with the default implementation, the system just works as expected; those who need to customize the system can do so without upsetting the existing code or programmatic logic. This design pattern also makes prototyping and agile development a lot easier. For example, in the early iterations of working with the system, it might be easier to just use the default implementation. Later, you may find you need to customize certain aspects in order to integrate the work with your company's existing systems. When that time comes, you can achieve the needed customization through the provider model, meaning your earlier work need not be changed to reflect the backend implementation changes.

3.) Like many good design patterns, the provider model also affords separation of duties among developers. One set of developers can be tasked with mastering the system's API, while others can be tasked with focusing on the backend implementation and customization. These two groups can work on the system without stepping on one another's toes. Furthermore, if the system being worked on is an industry standard - like ASP.NET 2.0 - skills from both tasks can be easily carried over into future jobs.


ASP.NET 2.0 utilizes the provider model throughout its architecture. Many of its subsystems - Membership, Site Navigation, Personalization - utilize the provider model. Each of these subsystems provides a default implementation, but enables customers to tweak the functionality to their own needs. For example, the Site Navigation piece of ASP.NET 2.0 allows a page developer to define the navigational structure of their website. This data can then be used by a variety of Web controls to display site maps, breadcrumbs, treeviews, or menus that highlight the site's navigation and/or show the user's location in the site. In addition to navigation-related Web controls, the site navigation API provides a bevy of methods for interacting with the website's navigation information.
By default, the site's navigational information must be encoded in a properly-formatted XML file. This is the data store that the default site navigation provider is hard-coded to use. However, ASP.NET 2.0's provider model makes it easy for you to use your own data store for site navigation.
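
A bare-bones sketch of what plugging in a custom navigation store can look like (the class and node names here are invented): derive from StaticSiteMapProvider, build the node tree from whatever backend you like, and register the provider in Web.config.

public class HardCodedSiteMapProvider : StaticSiteMapProvider
{
    private SiteMapNode root;

    public override SiteMapNode BuildSiteMap()
    {
        lock (this)
        {
            if (root == null)
            {
                // These nodes could just as easily come from a database or web service
                root = new SiteMapNode(this, "home", "~/Default.aspx", "Home");
                AddNode(root);
                AddNode(new SiteMapNode(this, "about", "~/About.aspx", "About Us"), root);
            }
            return root;
        }
    }

    protected override SiteMapNode GetRootNodeCore()
    {
        return BuildSiteMap();
    }
}

<!-- Web.config: make the custom provider the default -->
<siteMap defaultProvider="HardCodedSiteMapProvider">
  <providers>
    <add name="HardCodedSiteMapProvider" type="HardCodedSiteMapProvider" />
  </providers>
</siteMap>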
The provider model is one of ASP.NET 2.0's greatest features for migration. ASP.NET 2.0 offers a lot of new features that developers had to custom bake in 1.x. If these new features in 2.0 used rigid implementations, they would dissuade migration of 'living' 1.x applications that rely on custom solutions, since many of the new ASP.NET 2.0 Web controls use these new subsystems. With the provider model in place, however, we can upgrade our 1.x apps to 2.0 and create a provider so that 2.0's new subsystems integrate with our custom-baked solutions. That means when moving to 2.0 we can use the new Web controls and have them seamlessly use our existing systems, thanks to the provider model.


For More Information... read Rob Howard's two articles:
Provider Model Design Pattern and Specification, and
The Provider Model in ASP.NET 1.x

Displaying Random Images in an ASP.NET Web Page

Displaying random images can be done using two different approaches:

1.) Randomly Displaying an Image from a Directory of Images:

The easiest way to display a random image is to add an ASP.NET Image control to your page (or Master Page) and to write a little code that gets all of the image files from a particular directory, uses the Random class to randomly pick an image from the list, and assigns the randomly selected image's path to the ASP.NET Image control's ImageUrl property.
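
A sketch of that code-behind (the folder name and control ID are assumptions):

// In Page_Load; RandomImage is an <asp:Image> control on the page
string folder = Server.MapPath("~/Images");
string[] files = System.IO.Directory.GetFiles(folder, "*.jpg");

Random rnd = new Random();
string pick = System.IO.Path.GetFileName(files[rnd.Next(files.Length)]);
RandomImage.ImageUrl = "~/Images/" + pick;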

2.) Displaying Random Images Using the ContentRotator Web Control:

Using the ContentRotator control is easy: just specify which content items are to be considered when choosing the content item to display. The content items can be specified through an XML file, added programmatically to the control, or hard-coded through the control's declarative syntax.
The algorithm used to determine which content item to display uses each content item's impressions attribute:

"The algorithm used to randomly choose a content item works by laying out each applicable content item end-to-end, forming a line. The length of each content item is its impressions value, meaning that the total length of the line is the sum of the applicable content items' impressions. Next, a random number less than the total length is chosen, and the content item to display is the one that lies at the location of the random number."

To complete the implementation of this approach, all we need to do is add the ContentRotator control to the ASP.NET page or Master Page where we want the random image to appear and set its ContentFile property to the path of the XML content file.

Using the ContentRotator control in your application involves the following steps:

1.) Add the skmContentRotator.dll assembly to your application's /Bin directory.

2.) At the top of the ASP.NET page or Master Page where you want to use the ContentRotator control, add the following @Register directive: <%@ Register TagPrefix="skm" Namespace="skmContentRotator" Assembly="skmContentRotator" %>.

3.) Add the ContentRotator control to the page with markup like: <skm:ContentRotator ID="ContentRotator1" runat="server" ContentFile="Content.xml" />.

Alternatively, you can bypass steps 2 and 3 above by adding the ContentRotator control to the Toolbox and then dragging and dropping the control from the Toolbox onto your web page or Master Page.

Sunday, November 25, 2007

SQL SPY is returning

Project SQL SPY 5.2 is out! This version of SQL SPY fully supports SQL Server 2005 and has many new features exclusively for SQL 2005. SQL SPY was designed as a tool to display, monitor and report on valuable SQL Server information. Look around the Internet and find any SQL Server monitoring tool, then compare it to SQL SPY. What you will find is that you are going to pay hundreds, if not thousands, of dollars for the alternatives. Since 1999 I have dedicated all my free time to developing an alternative to those high-priced applications. Will those companies actually listen to you and add the features or functionality that you really want? SQL SPY supports 75 distinct features, and every one of those features has come from you.
Now you can customize SQL SPY yourself: I am offering the complete SQL SPY project for download! SQL SPY's reporting capabilities span the Server, Database, and Connection levels. Generate data object biographies that include dependencies, sizes, indexes, rules, triggers and defaults, stored procedures, user functions and views. Monitor and record user connections over long periods of time with minimal impact, even within the most sensitive production environment.
Important!
1. If you are upgrading or reinstalling SQL SPY, you will need to use the "Clean" button on the SQL registration window and re-register any SQL 2005 instances to be able to use the new features.
2. I will not be supporting SQL SPY as of version 5.2. I am unable to keep up with the demand.

Please download the setup from
"http://www.hybridx.com/_Downloads/SQL SPY Setup.zip"

Using T-SQL MERGE command in SQL Server 2008

The MERGE SQL command is one of the new features introduced in the upcoming SQL Server 2008. Basically, it's used to insert, delete or update records in a target table based on the result of a join with a source table. Instead of using a combination of IFs and SELECTs, MERGE makes it possible to write one query which will:
join the tables
specify matching values and perform requested action
specify non-matching values and also perform requested action
The following example should give you some idea about the MERGE command.

I assume you have already created some database for testing purposes, so the first thing that needs to be done is preparing two tables. One of the tables, TestTable1, is going to be the target table, while the other one, SourceTable1, will be used as the source table for the merge. The following queries create the tables and populate them with some values. Notice the difference both in column naming and table definition.
CREATE TABLE TestTable1 (tableId int PRIMARY KEY, textData varchar(20), intData int)
CREATE TABLE SourceTable1 (tableId int PRIMARY KEY, someText varchar(20), someInt int, someBit bit)

INSERT INTO TestTable1 VALUES(1, 'Test 1', 21)
INSERT INTO TestTable1 VALUES(2, 'Test 2', 21)
INSERT INTO TestTable1 VALUES(7, 'Test 7', 21)
INSERT INTO TestTable1 VALUES(9, 'Test 9', 21)
INSERT INTO SourceTable1 VALUES(1, 'Merge source 1', 21, 0)
INSERT INTO SourceTable1 VALUES(2, 'Merge source 2', 31, 1)
INSERT INTO SourceTable1 VALUES(3, 'Merge source 3', 55, 1)
INSERT INTO SourceTable1 VALUES(4, 'Merge source 4', 1, 0)
INSERT INTO SourceTable1 VALUES(5, 'Merge source 5', 13, 0)
INSERT INTO SourceTable1 VALUES(6, 'Merge source 6', 90, 1)
INSERT INTO SourceTable1 VALUES(8, 'Merge source 8', 97, 1)
INSERT INTO SourceTable1 VALUES(9, 'Merge source 9', 6, 0)
INSERT INTO SourceTable1 VALUES(10, 'Merge source 10', 11, 0)

The tables are ready, so now it's time to write the query to merge SourceTable1 into TestTable1 according to the following rules:
1) If the tableId of a source record does not exist in the target table, copy the record into the target table, skipping the someBit value.
2) If the tableId of a source record exists in the target table, overwrite the target record's textData with the matching source record's someText value. If the source record's someBit is set to 1, also overwrite intData with the source's someInt; otherwise leave it intact.

The following query performs these actions:
MERGE INTO TestTable1 T
USING SourceTable1 S ON S.tableId = T.tableId
WHEN NOT MATCHED THEN INSERT (tableId, textData, intData) VALUES(S.tableId, S.someText, S.someInt)
WHEN MATCHED THEN UPDATE
SET textData = S.someText,
intData = CASE S.someBit WHEN 1 THEN S.someInt
ELSE T.intData
END;
As you can (hopefully) see, the code is simple and easy to read. First I defined the join criteria (ON S.tableId = T.tableId), then the INSERT action for non-matching records, and finally the UPDATE action for matching records.
Warning: Notice the semicolon at the end of the statement - MERGE has to be terminated, or execution will return an error.
Hint: You can use a second WHEN MATCHED clause, but you have to obey some rules. The first clause has to be accompanied by an AND condition. The second WHEN MATCHED is applied only if the first one isn't - so you cannot combine WHEN MATCHED clauses to perform more than one action on the same row. Also, when using two WHEN MATCHED clauses, one of them has to specify the UPDATE action and the other the DELETE action.
Link: There is pre-release documentation for MERGE in SQL Server 2008 Books Online. If you are interested in a more detailed description of the command, it's a good place to start.

Thursday, November 22, 2007

WebParts in ASP.Net 2.0

Webparts are going to be the future of web-based management systems. WebParts give us the option of dragging and dropping objects on a page, as well as changing titles and border-style properties of objects at runtime. Before the introduction of WebParts this used to be a hectic task, because we had to write a lot of JavaScript and had to save the state of objects in a database.

There are two basic things in WebParts:
1.) WebPart manager
2.) WebPart zones

WebPartManager
The WebPartManager is the manager for all webparts; if webparts are used in a web project, a WebPartManager is required. Usually you just drag and drop it onto your webform and you are ready to go.

WebPart Zones
There are four kinds of webpart zones:

1.) WebPart Zone
2.) Editor Zone
3.) Catalog Zone
4.) Connection Zone

The WebPartZone is the basic unit of webparts. By placing different contents in a webpart zone we can allow a user to drag and drop contents on a page.
To try the different zones, add a DropDownList to your webform and add the following items to it (the mode-switching code is sketched after this list).
----Browse
----Design
----Edit
----Catalog
----Connect
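
A minimal sketch of the mode-switching handler (control names are assumed; the DropDownList has AutoPostBack set to true): the WebPartManager exposes the modes the page supports through its SupportedDisplayModes collection, which can be indexed by name.

protected void ddlModes_SelectedIndexChanged(object sender, EventArgs e)
{
    // SelectedValue is one of "Browse", "Design", "Edit", "Catalog", "Connect"
    WebPartDisplayMode mode = WebPartManager1.SupportedDisplayModes[ddlModes.SelectedValue];
    if (mode != null)
        WebPartManager1.DisplayMode = mode;
}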

Browse Mode
The Browse mode is the default mode of webparts. In Browse mode we cannot drag and drop the webparts, but we do see two options, minimize and close. Minimizing a webpart still displays it, in a minimized state. If you close a webpart, it can only be restored while in Catalog mode.

Design mode
In Design mode we can drag and drop objects between webpart zones.

Edit Mode
The edit mode is used to edit webparts at runtime. Editing a webpart is further divided into four types: Appearance, Behavior, Property and Layout.

Catalog mode
The Catalog mode gives us the option to add and remove webparts at runtime. For example, if we have a few modules like weather, news, shopping, and horoscope, and we want to give the user the option to show or hide these modules at runtime, we can accomplish this using Catalog mode.

Connect mode
This mode allows webparts to communicate with each other. We can make static connections once (in our code), or we can allow users to create connections at runtime according to their needs. Connect mode doesn't mean that the webpart is connecting to a database; rather, it connects to other webparts. For example, if a webpart contains a grid used to display some records and we want to filter it on the user's input, we could use a textbox in another webpart, which would send the filter criteria text by using the connect mode.

For more information,check
http://dotnetslackers.com/articles/aspnet/UsingWebPartsInASPNet20.aspx

Difference between Truncate and Delete

Truncate and Delete are both used to delete data from a table. Both commands delete only the data of the specified table; they cannot remove the whole table (data along with structure).
TRUNCATE and DELETE remove the data, not the structure
Both commands remove rows from a table, but the table structure and its columns, constraints, indexes, and so on remain. To remove the table definition in addition to its data, use the DROP TABLE statement.

Conditional deletion of data
Conditional deletion of data means that not all rows are deleted. Suppose I have a table authors, and from this table I want to delete the authors living in Australia. Let's examine our options for doing this with each command.
TRUNCATE - With the TRUNCATE command we can't perform conditional deletion, because no WHERE clause is allowed with this command.
DELETE - The DELETE command provides conditional deletion of data from the table using the WHERE clause.
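
Side by side (a sketch against the hypothetical authors table):

-- DELETE supports a WHERE clause
DELETE FROM authors WHERE country = 'Australia'

-- TRUNCATE does not; it always removes every row
TRUNCATE TABLE authors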

Delete and Truncate are both logged operations:
In most of the articles I have read on the Internet, I have seen this written: "delete is a logged operation and truncate is not a logged operation", meaning that when we run the delete command it logs (records) information about the deleted rows, while the truncate command doesn't log any data. But this is not true; truncate is also a logged operation, just in a different way. It uses fewer system and transaction log resources than delete. The TRUNCATE command uses minimal logging, which is why it is faster than delete. So both delete and truncate are logged operations, but they work differently, as shown below.

DELETE is a logged operation on a per-row basis.
The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. So if you are deleting a huge number of records, it can cause your transaction log to grow, and the deletion will use more server resources since each and every deleted row is logged. Because the delete statement records each deleted row, it is also slow. Some people ask: if this is done for each row, why doesn't Microsoft modify the delete statement to not record each deleted row? The answer is that when you run your databases in full recovery mode, detailed logging is necessary for SQL Server to be able to recover your database to the most recent state.

TRUNCATE logs the deallocation of the data pages in which the data exists. TRUNCATE is faster than DELETE due to the way TRUNCATE "removes" rows from the table. It doesn't log the deletion of each row; instead it logs the deallocation of the table's data pages. The TRUNCATE statement removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log. Strictly speaking, TRUNCATE does not remove data, but rather deallocates whole data pages and removes the pointers to indexes. The data still exists until it is overwritten or the database is shrunk. This action does not require a lot of resources and is therefore very fast. It is a common mistake to think that TRUNCATE is not logged; this is wrong. The deallocation of the data pages is recorded in the log file, which is why Books Online (BOL) refers to TRUNCATE operations as "minimally logged" operations. You can use TRUNCATE within a transaction, and when this transaction is rolled back, the data pages are reallocated and the database is again in its original, consistent state.

Behavior of Delete and Truncate for identity columns
Now consider the case of identity columns. The TRUNCATE and DELETE commands behave differently with identity columns. When we use TRUNCATE, it resets the counter used by the identity column for new rows to the seed value defined for the column. DELETE, in contrast, does not reset the counter; it maintains the same counter for new rows. In both cases, if no seed was defined, the default value 1 is used. Because TRUNCATE resets the identity column counter, use DELETE instead of TRUNCATE in cases where you want to retain the identity counter.
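
A quick demonstration of the difference (a sketch; the table is invented):

CREATE TABLE IdentityTest (id int IDENTITY(1,1), name varchar(20))
INSERT INTO IdentityTest VALUES('a')
INSERT INTO IdentityTest VALUES('b')

DELETE FROM IdentityTest
INSERT INTO IdentityTest VALUES('c')   -- id = 3: the counter was kept

TRUNCATE TABLE IdentityTest
INSERT INTO IdentityTest VALUES('d')   -- id = 1: the counter was reset to the seed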

bcp Utility

There was a time when I avoided the bcp utility like the plague. A few years ago, we needed to set up a once-a-month job to transfer about 100 GB of data from one server to another. Since this involved two SQL Server 2000 instances, the first attempt used DTS, but the performance was just not good enough. We considered setting up a job to copy a backup file from one server to another, do a restore and then import the data. I decided to give bcp a try, exporting the data to files and then importing the data into the required tables. The performance was acceptable, and the bcp solution has been in place without any issues since then.
I think the reason I avoided using bcp for so long is that it has a dizzying number of options. Luckily, I found some bcp scripts written by a vendor using just a handful of arguments that got me started. Since then I have found several opportunities to use bcp and now consider it a very useful tool.
The bcp utility is a command line tool. You can also use it from the Query Window if xp_cmdshell is enabled. One thing to keep in mind is that when it is used from the command window or batch file, the file path is in the context of the box where the command is run. When running from the Query Window or a stored proc, the file path will be in the context of the server.
Below is an example script that I ran from my laptop at home in the Query Window. It is running SQLExpress. There are many, many more options available for bcp, but this minimal number of arguments has worked for just about everything I have wanted to do. There is also a handy “queryout” argument you can use instead of “out” to use a select statement instead of a table or view name.
You will have to modify the server name and file path for your environment. Make sure that the bcp command ends up all on one line.

/*
Export from the AdventureWorks database to a file
out = export to a file
-S = Server\Instance
-T = trusted authentication
-c = use character datatypes, tab delimited
-t = override the delimiter
*/
exec master.dbo.xp_cmdshell 'bcp AdventureWorks.HumanResources.Employee out c:\temp\employee.txt -S localhost\SQLExpress -T -c -t ""'
--Create a blank table
use AdventureWorks
go
if object_id('dbo.test_import') is not null
drop table dbo.test_import
select * into dbo.test_import from HumanResources.Employee where 1 = 2
/*
Import from a file into a table.
in = import from a file
-S = Server\Instance
-T = trusted authentication
-c = character datatypes, tab delimited
-t = override the delimiter
*/
exec master.dbo.xp_cmdshell 'bcp AdventureWorks.dbo.test_import in c:\temp\employee.txt -S localhost\SQLExpress -T -c -t ""'
select * from dbo.test_import

Wednesday, November 21, 2007

ClickOnce Deployment in .NET Framework 2.0

Microsoft has released a new technology named ClickOnce that is designed to solve the deployment issues of Windows Forms applications. This new technology not only provides an easy application installation mechanism but also enables easy deployment of upgrades to existing applications.

This technology works as follows:

1.) You create a Windows Forms application and use the Publish option to deploy the application to any of the following locations: file system, local web server, FTP site, or remote web site.
2.) Once the application is deployed to the target location, users of the application can browse to the publish.htm file and install the application onto their machines. Note that the publish.htm file is the entry point for installing the application.
3.) Once the user has installed the application, a shortcut icon is added to the desktop and the application is also listed in Control Panel/Add Remove Programs.
4.) When the user launches the application again, the manifest contains all the information needed to decide whether the application should go back to the source location and check for updates. If, say, a newer version of the application is available, it is automatically downloaded and made available to the user. Note that the new version is downloaded in a transacted manner, meaning that either the entire update is downloaded or nothing is; this ensures that application integrity is preserved.

The ASP.NET 2.0 TreeView Control

ASP.NET 2.0 introduces a new control named TreeView that provides a seamless way to consume and display information from hierarchical data sources such as an XML file. We can use the TreeView control to display information from a wide variety of data sources: an XML file, a site-map file, a string, or a database. It is available under the Standard tab in the Toolbox.
We can use this control to display a navigation menu, database records from tables in a master/detail relation, the contents of an XML document, or files and folders from the file system. It is also possible for us to programmatically access the TreeView object model to dynamically create trees, populate nodes, set properties and so on. The TreeView control consists of nodes, and there are three types of nodes that you can add to a TreeView control (a markup sketch follows the list).
Root - A root node is a node that has no parent node. It has one or more child nodes.
Parent - A node that has a parent node and one or more child nodes.
Leaf - A node that has no child nodes.
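
A small sketch of the declarative hookup (the data file name is assumed): binding a TreeView to an XmlDataSource is enough to get a working tree.

<asp:XmlDataSource ID="XmlDataSource1" runat="server" DataFile="~/Data.xml" />
<asp:TreeView ID="TreeView1" runat="server" DataSourceID="XmlDataSource1" />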

Customizing the Appearance of the TreeView Control

The TreeView control provides ImageUrl properties such as RootNodeImageUrl, ParentNodeImageUrl, and LeafNodeImageUrl. These custom images are rendered to the left of the node's text, and you can override the default image for a node type using the corresponding ImageUrl property. The TreeView control also provides CollapseImageUrl and ExpandImageUrl properties for the expanded and collapsed indicators, usually represented by plus and minus icons. There is also a property named NoExpandImageUrl that can be used to render an image for nodes which have no children. We can turn off the default expand/collapse images using the ShowExpandCollapse Boolean property.
In addition to custom images, the TreeView control also supports TreeNodeStyle properties for each node type. These style properties override the NodeStyle property, which applies to all node types. A node can also have a different style applied when it is selected: when the Selected property is set to true, the node is selected and the SelectedNodeStyle properties override any corresponding unselected style properties for the selected node. It is also possible for us to render check boxes between the node and its image by setting the ShowCheckBoxes property.

ASP.NET 2.0's Administrative Tools

1.) Internet Services Manager's ASP.NET Tab
Upon installation, ASP.NET 2.0 adds an "ASP.NET" tab to the property pages in the Internet Services Manager. Almost exactly the same page appears regardless of the level at which you select properties; the only exception is that the "Edit machine.config" button is available from site nodes but not from application-level nodes.
Of particular interest to some of us is the "ASP.NET Version" drop-down box. It lets you select which installed version of the .NET Framework your application will run under. This whole side-by-side deployment thing is pretty nice.

2.) ASP.NET Web Site Administration Tool
The other tool is the ASP.NET Web Site Administration Tool. This tool can be used to manage all the security settings for the application. We can set up users and passwords (authentication), create roles (groups of users), and create permissions (rules for controlling access to parts of the application).

For more information, please check the link
http://www.15seconds.com/Issue/041215.htm

SQL Server 2005 Features Continues....

1. HTTP endpoints
You can easily create HTTP endpoints via a simple T-SQL statement exposing an object that can be accessed over the Internet. This allows a simple object to be called across the Internet for the needed data.
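
A sketch of the syntax (server, procedure, and namespace names are placeholders; the exact clauses vary by scenario):

CREATE ENDPOINT HelloEndpoint
STATE = STARTED
AS HTTP (
    PATH = '/sql/hello',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'MyServer'
)
FOR SOAP (
    WEBMETHOD 'HelloWorld' (NAME = 'AdventureWorks.dbo.usp_HelloWorld'),
    WSDL = DEFAULT,
    DATABASE = 'AdventureWorks',
    NAMESPACE = 'http://AdventureWorks/'
);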

2. Multiple Active Result Sets (MARS)
MARS allows a persistent database connection from a single client to have more than one active request per connection. This should be a major performance improvement, allowing developers to give users new capabilities when working with SQL Server. For example, it allows multiple searches, or a search and data entry at the same time. The bottom line is that one client connection can have multiple active processes simultaneously.
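
MARS is enabled per connection through the connection string (a sketch using the ADO.NET 2.0 keyword; server and database names are placeholders):

// C#: with MARS on, two data readers can be open on the same connection at once
string connStr = "Server=myServer;Database=AdventureWorks;" +
                 "Integrated Security=SSPI;MultipleActiveResultSets=True;";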

3. Dedicated administrator connection
If all else fails, stop the SQL Server service or push the power button - that mentality is finished with the dedicated administrator connection. This functionality allows a DBA to make a single diagnostic connection to SQL Server even when the server is having an issue.

4. SQL Server Integration Services (SSIS)
SSIS has replaced DTS (Data Transformation Services) as the primary ETL (Extraction, Transformation and Loading) tool and ships with SQL Server free of charge. This tool, completely rewritten since SQL Server 2000, now has a great deal of flexibility to address complex data movement.

5. Database mirroring
It's not expected to be released with SQL Server 2005 at the RTM in November, but I think this feature has great potential. Database mirroring is an extension of the native high-availability capabilities. So, stay tuned for more details.

URL mapping - a new feature to ASP.NET 2.0

URL mapping enables page developers to map one set of URLs to another. If a request comes in for one of the URLs in the first set, it is automatically re-mapped on the server side.
URL mapping is often used to provide "friendly" URLs - URLs that are more readable and sensible. URL mapping is also useful when restructuring a site.
The URL mapping feature in ASP.NET 2.0 is very simple and works by specifying the mappings directly in Web.config. To specify the mappings, simply add a <urlMappings> element to Web.config. Set its enabled attribute to true and then include an <add> element for each mapping. In the <add> element, specify the incoming URL to look for and the URL to map it to using the url and mappedUrl attributes, respectively.
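
For example (the paths are invented):

<!-- Web.config -->
<system.web>
  <urlMappings enabled="true">
    <add url="~/Products.aspx" mappedUrl="~/Default.aspx?page=products" />
  </urlMappings>
</system.web>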

For more info,check out
http://aspnet.4guysfromrolla.com/articles/011007-1.aspx

AJAX control --- Update Panel

The UpdatePanel is useful in situations where we want only a portion of the page to post back rather than the entire page. Such a limited postback is called a partial postback, and it is easy to implement using the UpdatePanel.
Many ASP.NET controls can cause postbacks: Button controls, when clicked; DropDownLists and CheckBoxes, when their AutoPostBack property is set to True; and so on. Under normal circumstances, when these controls cause a postback, the entire page is posted back. All form field values are sent from the browser to the server. The server then re-renders the entire page and returns the complete HTML, which is then redisplayed by the browser.
When these controls appear in an UpdatePanel, however, a partial page postback is initiated instead. Only the form fields in the UpdatePanel are sent to the server. The server then re-renders the page, but sends back only the markup for the controls in the UpdatePanel. The client-side script that initiated the partial postback receives the partial markup results from the server and seamlessly updates the display in the browser with the returned values. Consequently, the UpdatePanel improves the responsiveness of a page by reducing the amount of data exchanged between the client and the server and by "redrawing" only the portion of the screen that kicked off the partial page postback.
All of the GridView's rich functionality - paging, sorting, editing, and deleting - is accessible when it's placed within an UpdatePanel, without the need for any special code or instructions.
If a GridView is in an UpdatePanel, actions that would normally cause a full postback - moving to the next page of data, sorting, editing, or deleting - instead result in a partial postback.
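
A minimal sketch (control names are assumptions; a ScriptManager is required on the page): clicking the button updates the label without a full-page refresh.

<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <asp:Label ID="TimeLabel" runat="server" />
    <asp:Button ID="RefreshButton" runat="server" Text="Refresh"
                OnClick="RefreshButton_Click" />
  </ContentTemplate>
</asp:UpdatePanel>

// Code-behind
protected void RefreshButton_Click(object sender, EventArgs e)
{
    TimeLabel.Text = DateTime.Now.ToLongTimeString();
}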

Tuesday, November 20, 2007

.NET Building Blocks: Build a Configurable Database Credential Selector

Visual Studio provides convenient design-time wizards to define static database connection strings that you can store in your settings files. Storing connection strings in configuration files means you can update the connection string without having to recompile or redeploy the host program, although it does require restarting the application. This article presents a dynamically configurable user control that lets you provide users with the capability to change the connection parameters themselves. You have complete control—letting them change all or only some portions of the connection string.

Please visit the below link for more information:
http://www.devx.com/dotnet/Article/35374

New web Features of Visual Studio 2008

1.) New Web Design Interface
Visual Studio 2008 has incorporated a new Web designer that uses the design engine from Expression Web. Moving between design and source view is faster than ever and the new split view capability means you can edit the HTML source and simultaneously see the results on the page. Support for style sheets in separate files has been added as well as a CSS properties pane which clarifies the sometimes-complex hierarchy of cascading styles, so that it is easy to understand why an element looks the way it does. In addition Visual Studio 2008 has full WYSIWYG support for building and using ASP.NET Nested Master Pages which greatly improves the ability to build a Web site with a consistent look and feel.

2.) JavaScript Debugging and Intellisense
In Visual Studio 2008, client-side JavaScript has now become a first-class citizen in regards to its debugging and Intellisense support. Not only does the Intellisense give standard JavaScript keyword support, but it will automatically infer variable types and provide method, property and event support from any number of included script files. Similarly, the JavaScript debugging support now allows for the deep Watch and Locals support in JavaScript that you are accustomed to having in other languages in Visual Studio. And despite the dynamic nature of a lot of JavaScript, you will always be able to visualize and step into the JavaScript code, no matter where it is generated from. This is especially convenient when building ASP.NET AJAX applications.

3.) Multi-targeting Support
In previous versions of Visual Studio, you could only build projects that targeted a single version of the .NET Framework. With Visual Studio 2008, we have introduced the concept of Multi-targeting. Through a simple drop-down, you can decide if you want a project to target .NET Framework 2.0, 3.0 or 3.5. The builds, the Intellisense, the toolbox, etc. will all adjust to the feature set of the specific version of the .NET Framework which you choose. This allows you to take advantage of the new features in Visual Studio 2008, like the Web design interface, and the improved JavaScript support, and still build your projects for their current runtime version.

Visual Studio 2008 released

Microsoft Visual Studio 2008 , code-named Orcas, is the successor to Visual Studio 2005. It was released to MSDN subscribers on 19 November 2007. The codename Orcas is, like Whidbey, a reference to an island in Puget Sound, Orcas Island. The successor to Visual Studio 2008 is codenamed Rosario. The source code for the Visual Studio 2008 IDE will be available under a shared-source license to some of Microsoft's partners and ISVs.

Visual Studio 2008 is focused on development of Windows Vista, 2007 Office system, and Web applications. Among other things, it brings a new language feature, LINQ, new versions of C# and Visual Basic languages, a Windows Presentation Foundation visual designer, and improvements to the .NET Framework. It also features a new HTML/CSS editor influenced by Microsoft Expression Web. Visual Studio 2008 requires .NET Framework 3.5 and by default configures compiled assemblies to run on .NET Framework 3.5; but it also supports multi-targeting which lets the developers choose which version of the .NET Framework (out of 2.0, 3.0, 3.5, Silverlight CoreCLR or .NET Compact Framework) the assembly runs on. Visual Studio 2008 also includes new code analysis tools, including the new Code Metrics tool.

Visual Studio 2008 features a XAML based designer (codenamed Cider), workflow designer, LINQ to SQL designer (for defining the type mappings and object encapsulation for SQL data), XSLT debugger, XSD designer, JavaScript Intellisense support, JavaScript Debugging support, support for UAC manifests, a concurrent build system, among others. It also ships with an enhanced set of UI widgets, both for WinForms and WPF. It also includes a multithreaded build engine to compile multiple source files (and build the executable file) in a project across different threads simultaneously.

Monday, November 19, 2007

New features in SQL Server 2005 Continues

3. Service Broker
The Service Broker handles messaging between a sender and receiver in a loosely coupled manner. A message is sent, processed and responded to, completing the transaction. This greatly expands the capabilities of data-driven applications to meet workflow or custom business needs.

4. Data encryption
SQL Server 2000 had no documented or publicly supported functions to encrypt data in a table natively. Organizations had to rely on third-party products to address this need. SQL Server 2005 has native capabilities to support encryption of data stored in user-defined databases.
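
A sketch using the new passphrase functions (the passphrase and data are placeholders):

DECLARE @secret varbinary(8000)
SET @secret = EncryptByPassPhrase('My pass phrase', N'Sensitive value')

-- Returns the original value
SELECT CAST(DecryptByPassPhrase('My pass phrase', @secret) AS nvarchar(100))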

5. SMTP mail
Sending mail directly from SQL Server 2000 is possible, but challenging. With SQL Server 2005, Microsoft incorporates SMTP mail to improve the native mail capabilities. Say "see-ya" to Outlook on SQL Server!
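
SQL Server 2005's SMTP mail is exposed through Database Mail and its sp_send_dbmail procedure (a sketch; the profile name and addresses are placeholders, and a mail profile must already be configured):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',
    @recipients   = 'dba@example.com',
    @subject      = 'Backup complete',
    @body         = 'The nightly backup finished successfully.'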

SQL Server 2005 features continue in the next posts...

Microsoft .NET Framework 3.5

.NET Framework 3.5 builds incrementally on the new features added in .NET Framework 3.0 - for example, the feature sets in Windows Workflow Foundation (WF), Windows Communication Foundation (WCF), Windows Presentation Foundation (WPF) and Windows CardSpace. In addition, .NET Framework 3.5 contains a number of new features in several technology areas, which have been added as new assemblies to avoid breaking changes. They include the following:

1.) Deep integration of Language Integrated Query (LINQ) and data awareness. This new feature will let you write code written in LINQ-enabled languages to filter, enumerate, and create projections of several types of SQL data, collections, XML, and DataSets by using the same syntax.

2.) ASP.NET AJAX lets you create more efficient, more interactive, and highly-personalized Web experiences that work across all the most popular browsers.

3.) New Web protocol support for building WCF services including AJAX, JSON, REST, POX, RSS, ATOM, and several new WS-* standards.

4.) Full tooling support in Visual Studio 2008 for WF, WCF, and WPF, including the new workflow-enabled services technology.

5.) New classes in the .NET Framework 3.5 base class library (BCL) that address many common customer requests.

New features in SQL Server 2005

In the business world, everything is about being "better, faster and cheaper" than the competition -- and SQL Server 2005 offers many new features to save energy, time and money. From programming to administrative capabilities, this version of SQL Server tops all others, and it enhances many existing SQL Server 2000 features.

1. T-SQL (Transaction SQL) enhancements
T-SQL is the native set-based RDBMS programming language offering high-performance data access. It now incorporates many new features, including error handling via the TRY and CATCH paradigm; Common Table Expressions (CTEs), which return a record set in a statement; and the ability to shift columns to rows and vice versa with the PIVOT and UNPIVOT commands.
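
Quick sketches of two of these (the CTE's table is invented):

-- TRY...CATCH error handling
BEGIN TRY
    SELECT 1 / 0   -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage
END CATCH

-- A Common Table Expression
WITH AuthorCounts (authorId, bookCount) AS
(
    SELECT authorId, COUNT(*) FROM Books GROUP BY authorId
)
SELECT * FROM AuthorCounts WHERE bookCount > 5;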

2. CLR (Common Language Runtime)
The next major enhancement in SQL Server 2005 is the integration of a .NET-compliant language such as C# or VB.NET to build objects (stored procedures, triggers, functions, etc.). This enables you to execute .NET code in the DBMS to take advantage of the .NET functionality. It is expected to replace extended stored procedures in the SQL Server 2000 environment, as well as expand the traditional relational engine capabilities.

The rest of the features continue in the next posts...

New Visual Basic LINQ to XML Videos Released

Beyond its Language-Integrated Query capabilities, LINQ to XML represents a new, modernized in-memory XML programming API. LINQ to XML was designed to be a cleaner, more modern API, as well as fast and lightweight. It uses modern language features (e.g., generics and nullable types) and diverges from the DOM programming model with a variety of innovations to simplify programming against XML. Even without the Language-Integrated Query capabilities, LINQ to XML represents a significant stride forward for XML programming.

Because LINQ to XML provides a fully featured in-memory XML programming API, you can do all of the things you would expect when reading and manipulating XML. A few examples include the following (a C# sketch follows the list):
Load XML into memory in a variety of ways (file, XmlReader, and so on).
Create an XML tree from scratch.
Insert new XML Elements into an in-memory XML tree.
Delete XML Elements out of an in-memory XML tree.
Save XML to a variety of output types (file, XmlWriter, and so on).
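
A small C# sketch of the API (element names are invented), touching several of the points above:

using System.Xml.Linq;

// Create an XML tree from scratch
XElement books = new XElement("Books",
    new XElement("Book",
        new XAttribute("Title", "Sample"),
        new XElement("Author", "Someone")));

// Insert a new element, then save to a file
books.Add(new XElement("Book", new XAttribute("Title", "Another")));
books.Save("books.xml");

// Load it back into memory
XElement loaded = XElement.Load("books.xml");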

Sunday, November 18, 2007

Microsoft Announces New Virtualization Offerings, Windows Server 2008 Details, System Center Product Availability at TechEd IT Forum 2007

Today at TechEd IT Forum 2007, Microsoft Corp.’s foremost technology show in Europe for IT professionals, Bob Kelly, corporate vice president in the company’s Server and Tools Business, announced details of next-generation solutions to deliver Dynamic IT for the People-Ready Business.

“Earlier this year, we introduced our vision for Dynamic IT, which aims to make IT a stronger partner to the business by bringing together the capabilities of core infrastructure, application and development platforms,” Kelly said. “At TechEd IT Forum, we are demonstrating how that vision is coming to life for customers as IT is streamlined across the full spectrum of our software and technology offerings, including through exciting new developments with Windows Server, Microsoft System Center, Microsoft SQL Server and the Microsoft Desktop Optimization Pack. The response from customers has been strong as we move toward the worldwide February 2008 launch of Windows Server 2008, SQL Server 2008 and Visual Studio 2008.”

For more information, check out the link below:

http://www.microsoft.com/presspass/press/2007/nov07/11-12ITForumPR.mspx

The family of .NET languages is increasing

F# will be the youngest language in the .NET family. Just a few days before the release of VS 2008, Microsoft developer division Vice President Soma Somasegar announced that his division would work with Microsoft Research to integrate the F# language into Visual Studio. F# is a project that looks to exploit functional programming techniques.

Former Microsoft head James Plamondon commented that moves like this are important steps toward the day when "programmers would choose the best language to write each piece of a coding task, just as a carpenter uses a hammer for one task and a saw for another."