Sunday, December 30, 2007

List Available DBCC Commands

DBCC commands, or Database Consistency Checker commands, have been part of SQL Server since its earliest versions. This article lists the DBCC commands reported by the DBCC HELP command. Many of them are undocumented and rarely used in practice, but they form the basis for debugging complex problems. There are approximately 130 of them.
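By default, DBCC HELP only reports the documented commands; the undocumented ones are exposed after enabling a trace flag. A minimal sketch -- the trace flag numbers (2520 for SQL Server 2000, 2588 for SQL Server 2005) are the commonly cited ones and should be verified on your build:

-- expose help text for undocumented DBCC commands
-- (assumption: 2520 on SQL Server 2000, 2588 on SQL Server 2005)
DBCC TRACEON (2588)
GO
-- list every DBCC command the server knows about
DBCC HELP ('?')
GO
-- show the syntax of a single command
DBCC HELP ('checkdb')
GO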

DBCC activecursors [(spid)]

DBCC addextendedproc (function_name, dll_name)

DBCC addinstance (objectname, instancename)

DBCC adduserobject (name)

DBCC auditevent (eventclass, eventsubclass, success, loginname, rolename, dbusername, loginid)

DBCC autopilot (typeid, dbid, tabid, indid, pages [,flag])

DBCC balancefactor (variance_percent)

DBCC bufcount [(number_of_buffers)]

DBCC buffer ( {'dbname' | dbid} [, objid [, number [, printopt={0|1|2} ][, dirty | io | kept | rlock | ioerr | hashed ]]])

DBCC bytes ( startaddress, length )

DBCC cachestats

DBCC callfulltext

DBCC checkalloc [('database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, ESTIMATEONLY]]

DBCC checkcatalog [('database_name')] [WITH NO_INFOMSGS]

DBCC checkconstraints [( 'tab_name' | tab_id | 'constraint_name' | constraint_id )] [WITH ALL_CONSTRAINTS | ALL_ERRORMSGS]

DBCC checkdb [('database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC checkdbts (dbid, newTimestamp)

DBCC checkfilegroup [( [ {'filegroup_name' | filegroup_id} ] [, NOINDEX] )] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC checkident ('table_name'[, { NORESEED | {RESEED [, new_reseed_value] } } ] )

DBCC checkprimaryfile ( {'FileName'} [, opt={0|1|2|3} ])

DBCC checktable ('table_name'[, {NOINDEX | index_id | REPAIR}]) [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC cleantable ('database_name'|database_id, 'table_name'|table_id, [batch_size])

DBCC cacheprofile [( {actionid} [, bucketid])]

DBCC clearspacecaches ('database_name'|database_id, 'table_name'|table_id, 'index_name'|index_id)

DBCC collectstats (on | off)

DBCC concurrencyviolation (reset | display | startlog | stoplog)

DBCC config

DBCC cursorstats ([spid [,'clear']])

DBCC dbinfo [('dbname')]

DBCC dbrecover (dbname [, IgnoreErrors])

DBCC dbreindex ('table_name' [, index_name [, fillfactor ]]) [WITH NO_INFOMSGS]

DBCC dbreindexall (db_name/db_id, type_bitmap)

DBCC dbrepair ('dbname', DROPDB [, NOINIT])

DBCC dbtable [({'dbname' | dbid})]

DBCC debugbreak

DBCC deleteinstance (objectname, instancename)

DBCC des [( {'dbname' | dbid} [, {'objname' | objid} ])]

DBCC detachdb [( 'dbname' )]

DBCC dropcleanbuffers

DBCC dropextendedproc (function_name)

DBCC dropuserobject ('object_name')

DBCC dumptrigger ({'BREAK', {0 | 1}} | 'DISPLAY' | {'SET', exception_number} | {'CLEAR', exception_number})

DBCC errorlog

DBCC extentinfo [({'database_name'| dbid | 0} [, {'table_name' | table_id} [, {'index_name' | index_id | -1}]])]

DBCC fileheader [( {'dbname' | dbid} [, fileid])]

DBCC fixallocation [({'ADD' | 'REMOVE'}, {'PAGE' | 'SINGLEPAGE' | 'EXTENT' | 'MIXEDEXTENT'}, filenum, pagenum [, objectid, indid])]

DBCC flush ('data' | 'log', dbid)

DBCC flushprocindb (database)

DBCC free dll_name (FREE)

DBCC freeproccache

DBCC freeze_io (db)

DBCC getvalue (name)

DBCC help ('DBCC_command' | '?')

DBCC icecapquery ('dbname', stored_proc_name [, #_times_to_icecap (-1 infinite, 0 turns off)]) Use 'DBCC icecapquery (printlist)' to see list of SP's to profile. Use 'DBCC icecapquery (icecapall)' to profile all SP's.

DBCC incrementinstance (objectname, countername, instancename, value)

DBCC ind ( { 'dbname' | dbid }, { 'objname' | objid }, { indid | 0 | -1 | -2 } )

DBCC indexdefrag ({dbid | dbname | 0}, {tableid | tablename}, {indid | indname})

DBCC inputbuffer (spid)

DBCC invalidate_textptr (textptr)

DBCC invalidate_textptr_objid (objid)

DBCC iotrace ( { 'dbname' | dbid | 0 | -1 }, { fileid | 0 }, bufsize, [ { numIOs | -1 } [, { timeout (sec) | -1 } [, printopt={ 0 | 1 }]]] )

DBCC latch ( address [, 'owners'] [, 'stackdumps'])

DBCC lock ([{'DUMPTABLE' | 'DUMPSTATS' | 'RESETSTATS' | 'HASH'}]|[{'STALLREPORTTHESHOLD', stallthreshold}])

DBCC lockobjectschema ('object_name')

DBCC log ([dbid[,{0|1|2|3|4}[,['lsn','[0x]x:y:z']|['numrecs',num]|['xdesid','x:y']|['extent','x:y']|['pageid','x:y']|['objid',{x,'y'}]|['logrecs',{'lop'|op}...]|['output',x,['filename','x']]...]]])

DBCC loginfo [({'database_name' | dbid})]

DBCC matview ({'PERSIST' | 'ENDPERSIST' | 'FREE' | 'USE' | 'ENDUSE'})

DBCC memobjlist [(memory object)]

DBCC memorymap

DBCC memorystatus

DBCC memospy

DBCC memusage ([IDS | NAMES], [Number of rows to output])

DBCC monitorevents ('sink' [, 'filter-expression'])

DBCC newalloc - please use checkalloc instead

DBCC no_textptr (table_id , max_inline)

DBCC opentran [({'dbname'| dbid})] [WITH TABLERESULTS[,NO_INFOMSGS]]

DBCC outputbuffer (spid)

DBCC page ( {'dbname' | dbid}, filenum, pagenum [, printopt={0|1|2|3} ][, cache={0|1} ])

DBCC perflog

DBCC perfmon

DBCC pglinkage (dbid, startfile, startpg, number, printopt={0|1|2}, targetfile, targetpg, order={1|0})

DBCC pintable (database_id, table_id)

DBCC procbuf [({'dbname' | dbid}[, {'objname' | objid}[, nbufs[, printopt = { 0 | 1 } ]]] )]

DBCC proccache

DBCC prtipage (dbid, objid, indexid [, [{{level, 0} | {filenum, pagenum}}] [,printopt]])

DBCC pss [(uid[, spid[, printopt = { 1 | 0 }]] )]

DBCC readpage ({ dbid, 'dbname' }, fileid, pageid, formatstr [, printopt = { 0 | 1} ])

DBCC rebuild_log (dbname [, filename])

DBCC renamecolumn (object_name, old_name, new_name)

DBCC resource

DBCC row_lock (dbid, tableid, set) - Not Needed

DBCC ruleoff ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC ruleon ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC setcpuweight (weight)

DBCC setinstance (objectname, countername, instancename, value)

DBCC setioweight (weight)

DBCC show_statistics ('table_name', 'target_name')

DBCC showcontig (table_id | table_name [, index_id | index_name] [WITH FAST, ALL_INDEXES, TABLERESULTS [,ALL_LEVELS]])

DBCC showdbaffinity

DBCC showfilestats [(file_num)]

DBCC showoffrules

DBCC showonrules

DBCC showtableaffinity (table)

DBCC showtext ('dbname', {textpointer | {fileid, pageid, slotid [,option]}})

DBCC showweights

DBCC shrinkdatabase ({dbid | 'dbname'}, [freespace_percentage [, {NOTRUNCATE | TRUNCATEONLY}]])

DBCC shrinkdb is no longer supported. Please use shrinkdatabase instead

DBCC shrinkfile ({fileid | 'filename'}, [compress_size [, {NOTRUNCATE | TRUNCATEONLY | EMPTYFILE}]])

DBCC sqlmgrstats

DBCC sqlperf ({IOSTATS | LRUSTATS | NETSTATS | RASTATS [, CLEAR]} | {THREADS} | {LOGSPACE})

DBCC stackdump [( {uid[, spid[, ecid]]} | {threadId, 'THREADID'} )]

DBCC tab ( dbid, objid )

DBCC tape_control {'query' | 'release'}[,('\\.\tape')]

DBCC tec [( uid[, spid[, ecid]] )]

DBCC textall [({'database_name'|database_id}[, 'FULL' | FAST] )]

DBCC textalloc ({'table_name'|table_id}[, 'FULL' | FAST])

DBCC thaw_io (db)

DBCC traceoff [( tracenum [, tracenum ... ] )]

DBCC traceon [( tracenum [, tracenum ... ] )]

DBCC tracestatus (trace# [, ...trace#])

DBCC unpintable (dbid, table_id)

DBCC updateusage ({'database_name'| 0} [, 'table_name' [, index_id]]) [WITH [NO_INFOMSGS] [,] COUNT_ROWS]

DBCC upgradedb (db)

DBCC usagegovernor (command, value)

DBCC useplan [(number_of_plan)]

DBCC useroptions

DBCC wakeup (spid)

DBCC writepage ({ dbid, 'dbname' }, fileid, pageid, offset, length, data)

@Page Directives

@Page directives allow you to set attributes that directly affect your ASP.NET pages that end with the .aspx extension. There are 26 directives available, and you can explore the exhaustive list in the .NET Framework SDK documentation, but some of the more commonly used @Page directives are described below.

Buffer
As seen in the previous section on the Response object, this sets whether response buffering is enabled. It is true by default and must be set explicitly to false to turn it off.

EnableSessionState
This defines Session-State requirements for the page. If this is set to true (default) then Session-State is enabled; if false Session-State is disabled; or if ReadOnly then Session state can be read but not changed.

EnableViewState
This directive controls whether view state (which maintains GUI state) is preserved across page requests. It is true by default.

Explicit
Specifies whether all variables on a page must be declared using a Dim, Private, Public, or ReDim statement in Visual Basic .NET. The default is false.

Inherits
Defines a class from a code-behind page that a page should inherit. (Don't worry, this will be explained in a few short minutes.)

Language
Informs the .NET Framework of the language used for all inline (<% %>) and server-side script blocks within the ASP.NET page.

Src
This contains the name of the source file for an included code-behind page.

Trace
This indicates whether tracing is enabled. Set it to true to enable tracing; false is the default. Tracing is a debugging facility in ASP.NET that allows you to see diagnostic and debug information about an ASP.NET page.

clustered index in SQL Server

What is a clustered index

First off, we'll go through what a clustered index is. SQL Server has two types of indexes, clustered indexes and non-clustered indexes. Both types are organized in the same way, as a b-tree structure. The difference between them lies in what the leaf-level nodes -- the lowest level of the tree -- contain. In a clustered index the leaf level is the data itself, while the leaves of a non-clustered index contain bookmarks to the actual data. This means that for a table that has a clustered index, the data is actually stored in the order of the index.

What the bookmarks of a non-clustered index point to depends on whether the table also has a clustered index. If it does, then the leaves of non-clustered indexes will contain the clustering key -- the specific value(s) of the column(s) that make up the clustered index -- for each row. If the table does not have a clustered index it is known as a heap table, and the bookmarks in non-clustered indexes are in RID format (File#:Page#:Slot#), i.e. direct pointers to the physical location where the row is stored. Later in this article we will see why this difference is important. The indexes discussed below correspond to those of this table:

CREATE TABLE EMPLOYEES
(
empid int NOT NULL CONSTRAINT ix_pkEMPLOYEES PRIMARY KEY NONCLUSTERED
, name varchar(25) NOT NULL
, age tinyint NOT NULL
)


CREATE CLUSTERED INDEX ixcEMPLOYEES ON EMPLOYEES (name)


INSERT INTO EMPLOYEES (empid, name, age) VALUES (1, 'David', 42)
INSERT INTO EMPLOYEES (empid, name, age) VALUES (2, 'Tom', 31)
INSERT INTO EMPLOYEES (empid, name, age) VALUES (3, 'Adam', 27)
INSERT INTO EMPLOYEES (empid, name, age) VALUES (4, 'John', 22)


SELECT * FROM EMPLOYEES WHERE name = 'John'
SELECT * FROM EMPLOYEES WHERE empid = 1

In the real indexes these four rows would fit on the same page, but for this discussion I've put just one row on each page. So, to return results for the first query, containing WHERE name = 'John', SQL Server will traverse the clustered index from the root down through the intermediate node levels until it finds the leaf page containing John, and it will have all the data available to return for the query. But to return results for the second query, it will traverse the non-clustered index until it finds the leaf page containing empid 1, then use the clustering key found there for empid 1 (David) for a lookup in the clustered index to find the remaining data (in this case just the column age is missing). You can see this for yourself by viewing the execution plan for the queries in Query Analyzer (press Ctrl-K to see the plan).
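If you prefer a textual plan to the graphical one, you can also ask SQL Server to return the estimated plan without executing the queries; a minimal sketch:

SET SHOWPLAN_TEXT ON
GO
-- the first query shows a Clustered Index Seek; the second shows an
-- Index Seek on the non-clustered index followed by a clustered index lookup
SELECT * FROM EMPLOYEES WHERE name = 'John'
SELECT * FROM EMPLOYEES WHERE empid = 1
GO
SET SHOWPLAN_TEXT OFF
GO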

Extended Stored Procedures

An extended stored procedure is simply a procedure that is implemented in a dynamic link library (DLL) — a library that is called by an application at runtime. Extended stored procedures can be used in much the same way as database stored procedures, except that extended stored procedures normally perform tasks related to the interaction of SQL Server with its operating environment. Tasks that are either too complicated or just not possible using Transact-SQL can often be performed with extended stored procedures.

Extended stored procedures are written using the Open Data Services (ODS) API. ODS is written and supported by Microsoft and is available on the Workstation version of SQL Server and as a separate product (and in the Back-Office Development Kit). The ODS development package includes both a static-link library (Opends60.lib) and C header files that let you create fully functional database services. ODS applications are dynamically linked with Opends60.dll.

Saturday, December 29, 2007

Using NHibernate with ASP.NET

NHibernate is a port of Hibernate Core for Java to the .NET Framework. It handles persisting plain .NET objects to and from an underlying relational database. Given an XML description of your entities and relationships, NHibernate automatically generates SQL for loading and storing the objects. Optionally, you can describe your mapping metadata with attributes in your source code.

NHibernate supports transparent persistence: your object classes don't have to follow a restrictive programming model. Persistent classes do not need to implement any interface or inherit from a special base class. This makes it possible to design the business logic using plain .NET (CLR) objects and object-oriented idiom.

Originally being a port of Hibernate 2.1, the NHibernate API is very similar to that of Hibernate. All Hibernate knowledge and existing Hibernate documentation is therefore directly applicable to NHibernate. Take a look at the available NHibernate Documentation.

NHibernate key features:
Natural programming model - NHibernate supports natural OO idiom; inheritance, polymorphism, composition and the .NET collections framework, including generic collections.
Native .NET - NHibernate API uses .NET conventions and idioms
Support for fine-grained object models - a rich variety of mappings for collections and dependent objects
No build-time bytecode enhancement - there's no extra code generation or bytecode processing steps in your build procedure
The query options - NHibernate addresses both sides of the problem; not only how to get objects into the database, but also how to get them out again
Custom SQL - specify the exact SQL that NHibernate should use to persist your objects. Stored procedures are supported on Microsoft SQL Server.
Support for "conversations" - NHibernate supports long-lived persistence contexts, detach/reattach of objects, and takes care of optimistic locking automatically
Free/open source - NHibernate is licensed under the LGPL (GNU Lesser General Public License)

T-SQL Coding Standards Part - 2

8. Tables that map many-to-many relationships should be named by concatenating the names of the tables in question, starting with the most central table’s name.
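A minimal illustration, assuming User is the more central of the two tables:
--Correct
CREATE TABLE dbo.UserAddress
(
UserID int NOT NULL,
AddressID int NOT NULL
)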
9. Primary and Foreign key fields are postfixed with ID.
--Correct
CREATE TABLE dbo.[User]
(
UserID int NOT NULL,
AddressID int NOT NULL --Foreign key
)
--Avoid
CREATE TABLE dbo.[User]
(
UserID int NOT NULL,
AddressFK int NOT NULL --Field name indicates its use as a foreign key
)
10. Avoid naming fields in a way that indicates their use as a foreign key.
--Avoid
CREATE TABLE dbo.[UserAddress]
(
UserFK int NOT NULL,
AddressFK int NOT NULL
)

T-SQL Coding Standards Part - 1

1. Use upper case for all T-SQL constructs, except Types:
SELECT MAX(MyField) FROM MyTable
2. Use lower case for all T-SQL Types and usernames:
DECLARE @MyVariable int
3. Use Camel casing for all UDOs:
CREATE TABLE dbo.MyTable
(
MyField int
)
4. Avoid abbreviations and single-character names
--Correct
DECLARE @Counter int
--Avoid
DECLARE @C int
5. UDO naming must conform to the following regular expression ([a-zA-Z][a-zA-Z0-9]+) - in short, don't use any special or language-dependent characters to name objects. Constraints can use the underscore character.
--Avoid
CREATE TABLE dbo.[User Information]
6. Use the following prefixes when naming objects:
•usp - User Stored Procedures
•svf - Scalar Valued Functions
•tvf - Table Valued Functions
•vi - Views
•FK_ - Foreign keys
•DF_ - Default constraints
•IX_ - Indexes
CREATE PROCEDURE dbo.uspMyProcedure AS (...)
CREATE FUNCTION dbo.svfMyFunction
(...)
RETURNS int
AS
(...)
CREATE FUNCTION dbo.tvfMyFunction
(...)
RETURNS TABLE
AS
(...)
CREATE VIEW dbo.viMyView AS (...)
7. Name tables in the singular form:
--Correct
CREATE TABLE dbo.Address
--Avoid
CREATE TABLE dbo.Addresses

SQL Server interview questions...

Check out the links below for SQL Server interview questions:

http://www.t-mug.org/Interview_SQL.aspx
http://dotnetinterviews.spaces.live.com/
http://www.mcse.ms/message2303242.html

Benefits of using stored procedures

Applications that use stored procedures have the following advantages:

---Reduced network usage between clients and servers:

A client application passes control to a stored procedure on the database server. The stored procedure performs intermediate processing on the database server, without transmitting unnecessary data across the network. Only the records that are actually required by the client application are transmitted. Using a stored procedure can result in reduced network usage and better overall performance.

Applications that execute SQL statements one at a time typically cross the network twice for each SQL statement. A stored procedure can group SQL statements together, making it necessary to only cross the network twice for each group of SQL statements. The more SQL statements that you group together in a stored procedure, the more you reduce network usage and the time that database locks are held. Reducing network usage and the length of database locks improves overall network performance and reduces lock contention problems.

Applications that process large amounts of SQL-generated data, but present only a subset of the data to the user, can generate excessive network usage because all of the data is returned to the client before final processing. A stored procedure can do the processing on the server, and transmit only the required data to the client, which reduces network usage.

----Enhanced hardware and software capabilities:

Applications that use stored procedures have access to increased memory and disk space on the server computer. These applications also have access to software that is installed only on the database server. You can distribute the executable business logic across machines that have sufficient memory and processors.

---Improved security:

By including database privileges with stored procedures that use static SQL, the database administrator (DBA) can improve security. The DBA or developer who builds the stored procedure must have the database privileges that the stored procedure requires. Users of the client applications that call the stored procedure do not need such privileges. This can reduce the number of users who require privileges.

---Reduced development cost and increased reliability:

In a database application environment, many tasks are repeated. Repeated tasks might include returning a fixed set of data, or performing the same set of multiple requests to a database. By reusing one common procedure, a stored procedure can provide a highly efficient way to address these recurrent situations.

---Centralized security, administration, and maintenance for common routines:

By managing shared logic in one place at the server, you can simplify security, administration, and maintenance. Client applications can call stored procedures that run SQL queries with little or no additional processing.

Difference between Gridview and Datagrid

The GridView and the DataGrid controls have a different foundation. To enable easy migration from existing pages, the GridView preserves the DataGrid object model as much as possible. However, you shouldn't expect 100 percent compatibility between DataGrid-based code and new GridView-based code.
Another key difference between the DataGrid and GridView controls lies in the adaptive user interface. Unlike the version 1.x DataGrid, the GridView can render on mobile devices, too. In other words, to build reports on mobile devices, you can use the same grid control you would use for desktop pages. The DataGrid in version 2.0 can also render adaptively, but its UI capabilities are not quite as rich as those of the GridView.
In ASP.NET 2.0, the DataGrid control has been enhanced to support common control features such as themes and personalization. In addition, the new DataGrid control can be populated by a data source control. Remember, though, that a DataGrid can only bind to a data source object for the purpose of reading data. To actually modify the underlying data source, some user-defined code is still required. In contrast, the GridView control takes advantage of the capabilities of the underlying data source and automatically deletes or updates records. Note that the GridView control also supports the classic binding mechanism based on the DataSource property and the DataBind method. Although fully supported, this programming practice is discouraged.



The GridView control provides paging, sorting and formatting options with the click of a button, whereas the DataGrid control requires a fair amount of programming for paging and sorting.

The GridView control also has a number of new features and advantages over the DataGrid control, which include:

· Richer design-time capabilities.
· Improved data source binding capabilities.
· Automatic handling of sorting, paging, updates, and deletes.
· Additional column types and design-time column operations.
· A customized pager user interface (UI) with the PagerTemplate property.

Differences between the GridView control and the DataGrid control include:
· Different custom-paging support.
· Different event models.

Thursday, December 27, 2007

System.Runtime.Serialization namespace in .NET 3.0

This is a new namespace that has been added in .NET Framework 3.0. It has rich functionality with classes that are used for serializing and deserializing objects. Let us see some of the new classes that are added in this namespace.

1.DataContractAttribute --- Specifies that the type defines or implements a data contract and is serializable by a serializer such as the DataContractSerializer.

2.DataMemberAttribute --- This attribute is applied to a member of a type. It specifies that the member is part of a data contract and is serializable by the DataContractSerializer.

3.EnumMemberAttribute --- Specifies that the field is an enumeration member and should be serialized.

4.DataContractSerializer --- Serializes and deserializes an instance of a type into an XML stream or document using a supplied data contract.

Comparing Basic serialization with Custom serialization

Using basic serialization we mark a class with the "Serializable" attribute, and it is then up to the .NET Framework to take care of the serialization and de-serialization. The problem with this type of serialization is that we have no control over the serialization algorithm. Another major issue is versioning: a System.Runtime.Serialization.SerializationException can be thrown because of a possible version mismatch. But basic serialization is the easiest way to serialize.

On the other hand, custom serialization gives us more control: we can implement the ISerializable interface, which gives us control over the serialization algorithm. We can also avoid the serialization exceptions with custom serialization. Apart from these, there are four attributes that, when applied to methods, cause those methods to be called during the serialization process.

OnDeserializedAttribute – When applied to a method, the method is called after deserialization is complete.

OnDeserializingAttribute – When applied to a method, the method is called during deserialization of an object.

OnSerializedAttribute – When applied to a method, the method is called after the serialization process has taken place.

OnSerializingAttribute – When applied to a method, the method is called during the serialization process.

These attributes give us maximum control over the serialization and deserialization process.

Wednesday, December 26, 2007

tables which are dependent on a given table

Use the query below to get the list of tables that depend on the input table, i.e. the tables holding foreign keys that reference it.


SELECT
S.[name] AS 'Dependent_Tables'
FROM
sys.objects S INNER JOIN sys.sysreferences R
ON S.object_id = R.fkeyid -- S is the referencing (dependent) table
WHERE
S.[type] = 'U' AND
R.rkeyid = OBJECT_ID('Person.StateProvince') -- the table being depended on

Here, replace Person.StateProvince with your table name.

Returning number of affected rows

By default, SQL Server returns a friendly message to report the total number of rows affected by each query. This is a great setting for debugging your applications or modifying data directly in Query Analyzer. However, your front-end application doesn't need to know the number of affected rows -- it simply needs the data. Sending this message can introduce unnecessary network overhead.

What should you do? Use "SET NOCOUNT ON" in all your stored procedures to reduce network chatter.
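A minimal sketch of the pattern, using hypothetical Northwind-style names:

CREATE PROCEDURE dbo.uspGetCustomerOrders
@CustomerID nchar(5)
AS
-- suppress the "(n row(s) affected)" message for every statement below
SET NOCOUNT ON

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
GO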

Performance issues with views

SQL Server views offer several benefits to a T-SQL programmer: they can be used to obfuscate underlying table structure, restrict sensitive data from certain users and so forth. However, views are not the best tool for every job.

Some Oracle programmers who switch to SQL Server tend to use views extensively; older versions of Oracle didn't allow returning result sets from stored procedures, so one had to create views. This is not the case with SQL Server. But, to be fair to Oracle programmers, they're certainly not the only ones who make this mistake.

Perhaps the most common misconception is that a view with dozens of joins will perform better than a stored procedure containing the same number of joins. This simply isn't true. A view has no performance advantage over the query that has the same number of joins. In fact, views have a major limitation -- they do not accept parameters. So, if you have a view joining 10 tables and one of those tables has a million rows, your view will return at least 1 million rows. Joining such a view to a few other tables within a stored procedure could be disastrous.

What should you do? A better alternative in such cases is a user-defined function (UDF), which accepts parameters and allows you to limit the number of returned rows. Another option is to join all tables within a single stored procedure and limit the output by using parameters.
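A minimal sketch of the UDF alternative, using hypothetical Northwind-style names; unlike a view, the inline table-valued function filters the rows before anything is joined to it:

CREATE FUNCTION dbo.tvfOrdersByCustomer (@CustomerID nchar(5))
RETURNS TABLE
AS
RETURN
(
SELECT OrderID, OrderDate, Freight
FROM dbo.Orders
WHERE CustomerID = @CustomerID
)
GO

-- the parameter limits the result set up front
SELECT * FROM dbo.tvfOrdersByCustomer(N'ALFKI')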

Table variables in SQL Server 2008

Microsoft SQL Server 2008 introduces another important change to table variables. The latest SQL Server edition allows you to pass a table variable as an input parameter to a stored procedure. An array of information can now be passed to a stored procedure without creating a string or XML block and then having to parse it inside the stored procedure. There are some restrictions on this, however: the table-valued parameter must be declared based on a table data type, and the parameter must be read-only. When compared to not having this ability at all, these restrictions are minor inconveniences at best.

In the sample code below, a user-defined table type is created and a stored procedure is then defined using this table type. A local variable is also declared using the table type, data is loaded into the variable, and then the array of data is passed from the local variable to the input parameter, where the records are used within the stored procedure. Here is an example:


create type tt_example AS TABLE
(spid int)
go
create procedure usp_example
@spids tt_example READONLY
AS
SELECT *
FROM @spids
GO
declare @spids tt_example

insert into @spids
select top 10 spid
from sys.sysprocesses

exec usp_example @spids=@spids

Tuesday, December 25, 2007

Changing Stored Procedure and View Names

There are times when you will need to change the name of a view or stored procedure after the object has been created. In the Enterprise Manager:

1. Right-click on the object, to show the available sub-menu options.
2. Select ‘Rename’.
3. Change the name of the object.

A word of caution is needed when performing this action. Renaming a stored procedure or a view does not change the name in the text of the definition (i.e. “Create Procedure ”, “Create View ”). To change the name in the definition text, you must modify the object directly.

If you do not make the proper modification, you will encounter a problem when you script the object. When the object is scripted, the name that appears in the text of the definition becomes the name of the object. In this case, the object name reverts to the original name.
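The same rename can be done in T-SQL with sp_rename, and the same caveat applies; a minimal sketch, reusing the hypothetical procedure name from the SET NOCOUNT ON post above:

-- rename the procedure; the stored definition text still says the old name
EXEC sp_rename 'dbo.uspGetCustomerOrders', 'uspCustomerOrders'
GO
-- safer: drop and re-create so the definition matches the new name
DROP PROCEDURE dbo.uspCustomerOrders
GO
CREATE PROCEDURE dbo.uspCustomerOrders
AS
SET NOCOUNT ON
SELECT OrderID, OrderDate FROM dbo.Orders
GO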

Finding Duplicates

In many situations, users need to have the ability to determine when duplicate data exists in the database. You can use a GROUP BY and COUNT to see how many duplicates exist in each group. However, in most instances, users want to see the primary key identifier of the duplicate rows so that you can display the data or delete the duplicates. You cannot add the primary key to a query that uses a GROUP BY because this would cause all counts to equal one since the primary key by definition is unique.

To get a query that shows duplicates along with the primary key, you must use an IN clause in the WHERE clause.

Using Northwind:

SELECT
Employees.LastName, Employees.FirstName, Employees.Address, Employees.EmployeeID, Employees.Title

FROM
Employees

WHERE
Employees.LastName IN
(
SELECT [LastName]
FROM [Employees] AS Tmp
GROUP BY [LastName], [FirstName], [Address]
HAVING COUNT(*) > 1
AND [FirstName] = [Employees].[FirstName]
AND [Address] = [Employees].[Address]
)

ORDER BY
Employees.LastName, Employees.FirstName, Employees.Address


This query will return the first name, last name, address, employeeid and title of all rows where the last name, first name, and address are duplicated.
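If the goal is to remove the duplicates rather than just list them, the same grouping idea can drive a DELETE. A minimal sketch that keeps the row with the lowest EmployeeID in each group (note that in the real Northwind database, foreign keys referencing Employees would block this delete):

DELETE FROM Employees
WHERE EmployeeID NOT IN
(
SELECT MIN(EmployeeID)
FROM Employees
GROUP BY LastName, FirstName, Address
)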

Avoid Using Text and Ntext Data Types

The text data type has several inherent problems, including:

You cannot directly write or update text data using the INSERT or UPDATE statements. Instead, you have to use special statements like READTEXT, WRITETEXT and UPDATETEXT.
There are several known issues associated with replicating tables that contain text columns.
Because of the way SQL Server stores text columns, it can take more time to retrieve text/ntext values.
Text data types have extra overhead that can negatively impact performance.
If you don't have to store more than 8KB of text, consider using char(8000) or varchar(8000) data types instead of text and ntext.
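A minimal before/after sketch, assuming the column never needs to hold more than 8,000 characters (table and column names are just for illustration):

CREATE TABLE dbo.CustomerNote
(
CustomerNoteID int NOT NULL,
Note varchar(8000) NOT NULL -- instead of: Note text NOT NULL
)

-- plain INSERT and UPDATE now work; no WRITETEXT/UPDATETEXT required
INSERT INTO dbo.CustomerNote (CustomerNoteID, Note) VALUES (1, 'First call made')
UPDATE dbo.CustomerNote SET Note = 'First call made; follow-up booked' WHERE CustomerNoteID = 1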

Declaring and Using Transact-SQL Variables in a Stored Procedure

Most people are familiar with declaring and using Transact-SQL variables in stored procedures. Typically, these variables are used for temporary storage of scalar values such as integers, strings, dates and so forth. However, a variable may also be a table-type variable. You can use a table-type variable where you might otherwise use a temporary table. One reason you might want to use a table variable is to hold onto a small subset of data for use multiple times in the body of a stored procedure.

In this example, a table variable is created to hold a subset of data from the Northwind Orders table for a particular employee:

CREATE PROCEDURE stpTableVariable
(@EmpID int) as

-- create the table variable
declare @EmpOrders table (orderid int not null)

-- populate the table. In this case we take the results of a query on
-- another table, but you could do simple INSERT statements that take
-- literals as well:
insert into @EmpOrders (orderid)
select orderid
from orders where employeeid = @EmpID

-- Now use the results stored in the table variable as part of a where
-- clause. You could also do a join or any other standard SQL action
-- with the table variable
select * from orders
where orderid in (select orderid from @EmpOrders)

Sql Database Table Design

When creating a SQL database you need to ask: what is the information that I want to capture? Do not try to capture everything in one SQL database. Simplify it. It is OK to create several SQL databases with specific types of information you want to capture for an organization. You can always write a SQL query to gather the information from the separate databases.

Afterwards, you can get down to designing the TABLES. Below are three rules about SQL table design, with a small example after them. I know that rules are made and then broken, but in general follow the 3 rules below (if you can).

RULE 1: In a Table, do not repeat data.

RULE 2: Repeated data goes in another Linked table.

RULE 3: Row Data must only rely on the Table's Primary Key.
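A minimal sketch of the three rules, reusing the User/Address naming from the coding-standards posts above:

-- Rules 1 and 2: the repeating address data lives in its own linked table
CREATE TABLE dbo.Address
(
AddressID int NOT NULL PRIMARY KEY,
Street varchar(50) NOT NULL,
City varchar(50) NOT NULL
)

-- Rule 3: every column in User depends only on the primary key UserID
CREATE TABLE dbo.[User]
(
UserID int NOT NULL PRIMARY KEY,
Name varchar(25) NOT NULL,
AddressID int NOT NULL REFERENCES dbo.Address (AddressID)
)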

Microsoft Offers Free Online Course on SQL Server 2008

Microsoft Learning is offering a free, three-hour, self-paced, web-based, online training course on the improvements that will be available with SQL Server 2008. The course is text-based, so you will be spending a lot of time reading, but it is worth it if you want to get a head start on SQL Server 2008.

The course is called "Clinic 6188: What's New in Microsoft SQL Server 2008 for Enterprise Data Platform," and is divided into three categories, so you can focus on what most interests you.

What's New in SQL Server 2008 for Enterprise Data Platform
What's New in SQL Server 2008 for Business Intelligence
What's New in SQL Server 2008 for Database Development

https://www.microsoftelearning.com/eLearning/offerDetail.aspx?offerPriceId=139087

You need a Microsoft Live Account to register for this free course.

SQL Server Join Hints

JOIN hints can be used in a query to specify the type of JOIN the Query Optimizer is to use for the execution plan. The JOIN options are:

Loop
Merge
Hash
The syntax for a JOIN hint is (using an INNER JOIN as an example):

FROM table_one INNER [LOOP | MERGE | HASH] JOIN table_two

Here's an example:

FROM header_table INNER LOOP JOIN detail_table

As you can see, the JOIN hint is between the type of JOIN (in this example, INNER) and the JOIN keyword. Only one JOIN hint can be used per JOIN in a query. In addition, JOIN hints can only be used when the ANSI JOIN syntax is used, not the older Microsoft JOIN syntax.

The syntax above is not the only way to add a JOIN hint to a query; you can also use the OPTION clause. Using the OPTION clause specifies that the hint be used throughout the whole query. While multiple hints can be added to the OPTION clause, each query hint can be used only once. In addition, only one OPTION clause can be used per query.

Here's an example of using the OPTION clause:

OPTION (LOOP JOIN) or OPTION (MERGE JOIN) or OPTION (HASH JOIN)

The Query Optimizer always tries to identify the fastest way to JOIN tables. The fastest JOIN method is the Loop JOIN, followed by the Merge and the Hash JOIN. While the Query Optimizer always tries to perform a Loop JOIN if possible, it is not always possible, and one of the other two types may have to be used.

Before you attempt to use a JOIN hint to specify a JOIN type, you should first take a look at your query, and the indexing of the relevant tables, to see if you can find a way, other than using a hint, to induce the Query Optimizer to use a Loop JOIN. Keep in mind that if you specify a Loop JOIN hint for a JOIN that is currently using a Hash JOIN, you may get worse, not better, performance. So always test JOIN hints to see what their end result in performance really is.

Occasionally, you may find yourself having to perform an INNER JOIN on tables located in databases on different servers. Generally speaking, SQL Server will evaluate whether the JOIN will perform better on the local or the remote server, and run the JOIN there accordingly. But in some cases, such as when the remote table is not a SQL Server table, this is not possible.

SQL Server provides a special JOIN hint called REMOTE that you can use to force a JOIN to occur on the remote server, if this is appropriate. The REMOTE hint specifies that the JOIN operation be performed on the site of the right table of the JOIN. You might consider this if the left table of the JOIN is the local table and the right table is the remote table. Of course, doing this only makes sense if the right table has more rows than the left table. REMOTE cannot be used if the tables use different collations.
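A minimal sketch, where RemoteServer is a hypothetical linked server and the right table is the remote one:

SELECT o.OrderID, d.Quantity
FROM dbo.Orders AS o
INNER REMOTE JOIN RemoteServer.Sales.dbo.OrderDetails AS d
ON o.OrderID = d.OrderID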

SQL Server Table Hints

Index hints (a form of a table hint) are used to specify which index or indexes you want used when a query runs. When you specify one or more indexes, the Query Optimizer will use them when creating the execution plan for your query.

The syntax for a table hint is:

SELECT column_list FROM table_name WITH (INDEX (index_name) [, ...])

Where:

column_list is the list of columns you want SELECTed.

table_name is the name of the table with the relevant indexes.

index_name is the index name or index ID for the index or indexes you want used by the Query Optimizer.

Here's an example:

SELECT memberID, fname, lname FROM members WITH (INDEX(memberID_index))

While you can use an index ID instead of the index name, this is not a good idea. This is because index IDs can change if you drop and recreate the indexes differently than when you first created them.

If you want to specify multiple indexes, to force an index intersection or an index covering as a result of an index join, you can do so by adding each index name or ID, one after another, with each one separated by a comma.

If you want to force a table scan, use the hint: (INDEX(0)).

Index hints should only be used when the Query Optimizer, for whatever reason, does not select the available index, and if using the index provides a performance enhancement over not using an index.

The FASTFIRSTROW table hint is functionally equivalent to the OPTION (FAST number_of_rows) hint, and either one can be used interchangeably, although Microsoft recommends that you use OPTION FAST, as support for FASTFIRSTROW may be removed in future versions of SQL Server.

For queries that return large resultsets, this hint can be handy. What this hint does is to tell the Query Optimizer to retrieve X number of rows of the total resultset as fast as it can, then continue on and return the full resultset as time permits. This acts to get the initial part of your data retrieved very quickly.

On the other hand, while it generally speeds up the retrieval of the first X number of rows, the rest of the rows are often returned more slowly than they would have been if the hint had not been specified.
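A minimal sketch of both forms, using hypothetical Northwind-style names; each asks the optimizer to get the first rows out as quickly as it can:

-- table hint form
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (FASTFIRSTROW)

-- OPTION clause form recommended by Microsoft
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate
OPTION (FAST 10)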

Thursday, December 13, 2007

.NET Assemblies

.NET assemblies are partially compiled code libraries used for deployment, versioning and security.

Characteristics include:
1.)Assemblies are self describing in nature.
2.)They can be executed side by side without versioning problems.
3.)Two or more applications can run independently inside a single process, with the applications separated via application domains.
4.)In case of private assemblies, installation is as simple as copying the files into the application's directory.

Assemblies are of two types:

Private Assembly: Private assemblies are intended for a particular application and are typically stored in the application's directory.
Shared Assembly: Shared assemblies are stored in the global assembly cache (GAC) and are shared by many applications. Because they are global, each assembly must have a distinct or unique name.

Friday, December 7, 2007

Microsoft Indexing Service and ASP.NET

Microsoft Indexing Service is a service that provides a means of quickly searching for files on the machine. The most familiar usage of the service is on web servers, where it provides the functionality behind site searches. It is built into Windows 2000 and 2003. It provides a straightforward way to index and search your web site.

Connecting to the Indexing Service
The Indexing Service exposes itself to the developer as an ADO.NET provider, MSIDXS, with the data source equal to the indexing catalog name. For example, the connection string used in searching this site is

Provider="MSIDXS";Data Source="idunno.org";

As with any other ADO.NET provider, you set the ConnectionString property of the System.Data.OleDb.OleDbConnection object.

using System.Data.OleDb;

OleDbConnection odbSearch = new OleDbConnection();
odbSearch.ConnectionString = "Provider=\"MSIDXS\";Data Source=\"idunno.org\";";
odbSearch.Open();
// run the search query here and process the results
odbSearch.Close();

COM/COM+ in ASP.NET

Microsoft introduced a new concept called managed code. Code that runs within the CLR environment is called managed code. Code that runs outside the boundaries of the CLR environment is called unmanaged code. Some examples of unmanaged code are COM and ActiveX components. .NET provides a mechanism which makes it possible to use existing unmanaged components with an ASP.NET application. This mechanism is called interoperability.

Introduction to COM/COM+
COM is a binary standard. It is an acronym for the Component Object Model. COM is a specification, and components built in any language according to this specification can be integrated with any client application written in any language. The client doesn't have to know the location of the components. When working with COM, the objects are compiled into DLLs, i.e. binary executable files. A DLL can contain more than one class.
COM+ is nothing but an extension of COM. COM+ is a combination of operating system services and an object-oriented programming architecture. Being an extension of COM means adding a few services, such as an event registry, queues of asynchronously received requests for a service (transaction management), security against system requests (authentication), and object pooling.

Invoking COM/COM+ components
In .NET, all code is executed within the Common Language Runtime (CLR) environment. The CLR runs your code and provides services that ease the development process. An ASP.NET application cannot directly interact with COM components.

When an ASP.NET application calls a method defined in a COM component, the CLR creates a wrapper called a Runtime Callable Wrapper (RCW). The RCW is a .NET object and is executed in the CLR. The primary task of the RCW is to marshal calls between the .NET application and the COM component. The CLR creates only one RCW object for a COM component even if it is referenced more than once in an application. The RCW stores the number of references attached to it in an internal cache. The RCW is created on the heap and so is subject to garbage collection. When the reference count of a particular RCW becomes zero, the garbage collector deletes it and the memory allocated to the RCW object is released.

When an ASP.NET application calls a method defined within the COM component, the RCW performs the following tasks:

1.)Creates an instance of the COM object and binds it to the application.
2.)Manages data conversion between managed and unmanaged data types.
3.)Manages the lifetime of the COM component for which it is acting as a wrapper.
4.)Handles exceptions during the data conversion process.

To add a COM component to an ASP.NET application, perform the following steps:

1. Open a new project and select Visual C# Projects from the Project Types pane and ASP.NET Web Service from the Templates pane.

2. Select Project->Add References.

3. Click the COM tab in the Add Reference dialog box. A list of COM components is displayed.

4. Select the COM component that you want to use within your ASP.NET application. For example, select the Microsoft ActiveX Data Objects 2.7 Library option from the list and double-click the Select button. The component is added to the Selected Components pane.

5. Click the OK button. The component that you have added to the project is displayed in the References node in Solution Explorer.

6. You can use the COM component in the application by specifying its library class, as follows:
ADODB.RecordsetClass rs=new ADODB.RecordsetClass();

This COM object rs can be used in the application to call its methods.

Thursday, December 6, 2007

ASP.NET state management

ASP.NET applications are hosted in a web server and are accessed over the stateless HTTP protocol. As such, if an application uses stateful interaction, it has to implement state management on its own. ASP.NET provides various facilities for state management, including Application state, Session state and View state variables.

Application state
Application state is a collection of user-defined variables that are shared by all invocations of an ASP.NET application. These are set and initialized when the Application_OnStart event fires on the loading of the first instance of the application, and they are available until the last instance exits. Application state variables are accessed using the Application collection, which provides a wrapper for the application state variables. Application state variables are identified by their names.

Session state
Session state is a collection of user defined session variables, which are persisted during a user session. These variables are unique to different instances of a user session, and are accessed using the Session collection. Session variables can be set to be automatically destroyed after a defined time of inactivity, even if the session does not end. At the client end, a user session is identified either by a cookie or by encoding the session ID in the URL itself.

ASP.NET supports three modes of persistence for session variables:

1.)InProc
The session variables are maintained within the ASP.NET process. However, in this mode the variables are destroyed when the ASP.NET process is recycled or shut down.

2.)StateServer
In this mode, ASP.NET runs a separate service that maintains the state variables. Because the state management happens outside the ASP.NET process, this has a negative impact on performance, but it allows multiple ASP.NET instances to share the same state server, thus allowing an ASP.NET application to be load-balanced and scaled out on multiple servers. Also, since the state management service runs independent of ASP.NET, variables can persist across ASP.NET process shutdowns.

3.)SqlServer
In this mode, the state variables are stored in a database server, accessible using SQL. Session variables can be persisted across ASP.NET process shutdowns in this mode as well.

View state
View state refers to the page-level state management mechanism, which is utilized by the HTML pages emitted by ASP.NET applications to maintain the state of the web form controls and widgets. The state of the controls is encoded and sent to the server at every form submission in a hidden field known as __VIEWSTATE. The server sends back the variable so that when the page is re-rendered, the controls render in their last state. At the server side, the application may change the view state, if the processing results in updating the state of any control. The states of individual controls are decoded at the server, and are available for use in ASP.NET pages using the ViewState collection.

Directory structure in ASP.NET

In general, the ASP.NET directory structure can be determined by the developer's preferences. Apart from a few reserved directory names, the site can span any number of directories. The structure is typically reflected directly in the URLs. Although ASP.NET provides means for intercepting the request at any point during processing, the developer is not forced to funnel requests through a central application or front controller.

The special directory names are:

App_Browsers
holds site-specific browser definition files.

App_Code
This is the "raw code" directory. The ASP.NET server will automatically compile files (and subdirectories) in this folder into an assembly which is accessible in the code of every page of the site. App_Code will typically be used for data access abstraction code, model code and business code. Also any site-specific http handlers and modules and web service implementation go in this directory. As an alternative to using App_Code the developer may opt to provide a separate assembly with precompiled code.

App_Data
default directory for databases, such as Access mdb files and SQL Server mdf files. This directory is usually the only one with write access for the application.

App_LocalResources
Contains localized resource files for individual pages of the site. E.g. a file called CheckOut.aspx.fr-FR.resx holds localized resources for the French version of the CheckOut.aspx page. When the UI culture is set to French, ASP.NET will automatically find and use this file for localization.

App_GlobalResources
Holds resx files with localized resources available to every page of the site. This is where the ASP.NET developer will typically store localized messages etc. which are used on more than one page.

App_Themes
holds alternative themes of the site.

App_WebReferences
holds discovery files and WSDL files for references to web services to be consumed in the site.

Memory in .NET - what goes where

The memory slot for a variable is stored on either the stack or the heap. It depends on the context in which it is declared:

1.)Each local variable (i.e. one declared in a method) is stored on the stack. That includes reference type variables - the variable itself is on the stack, but remember that the value of a reference type variable is only a reference (or null), not the object itself. Method parameters count as local variables too, but if they are declared with the ref modifier, they don't get their own slot; they share a slot with the variable used in the calling code. There's one exception to this simple rule - captured variables (used in anonymous methods) are local in terms of the C# code, but end up being compiled into instance variables in a type associated with the delegate created by the anonymous method.
2.)Instance variables for a reference type are always on the heap. That's where the object itself "lives".
3.)Instance variables for a value type are stored in the same context as the variable that declares the value type. The memory slot for the instance effectively contains the slots for each field within the instance. That means (given the previous two points) that a struct variable declared within a method will always be on the stack, whereas a struct variable which is an instance field of a class will be on the heap.
4.)Every static variable is stored on the heap, regardless of whether it's declared within a reference type or a value type. There is only one slot in total no matter how many instances are created.

DotSVN

DotSVN is a complete .NET implementation of Subversion. DotSVN accesses a Subversion repository at the repository level itself (instead of at the client side/working copy). It implements the Subversion repository access layer completely in .NET without using any of the Subversion libraries. It exposes APIs which help .NET developers connect to an SVN repository and manipulate it. DotSVN is developed as an open source project hosted on Google Code. Recently, the DotSVN team released the first iteration of the DotSVN library, which supports reading the contents of an FSFS repository hosted in Subversion. It can read the directory structure, the properties of a revision, and the contents of individual files from the repository. This release of DotSVN also includes a sample application which demonstrates the usage of the DotSVN APIs. The sample application is a WinForms-based repository browser very similar to the repository browser of TortoiseSVN. It displays the contents of an FSFS repository in a hierarchical fashion, showing the Windows file type icon beside each directory or file.

DotSVN APIs are very simple to use. Accessing an SVN repository using DotSVN involves 3 steps:

1.) Open the repository.
2.) Do the relevant operations (for eg: get the directory contents of the repository).
3.) Close the repository.

For further information, please check
http://www.dotsvn.net/articles/accessing-subversion-repository-from-net-using-dotsvn.html

Tuesday, December 4, 2007

ASP.NET Page Life Cycle

A page in an ASP.NET application consists of several server controls. These are the fundamental building blocks of an ASP.NET application.

The Life Cycle of a page when requested for the first time:

1.)Initializing: During this phase, the server creates an instance of the server control.
2.)Loading: During this phase, the instance of the control is loaded onto the page object in which it is defined.
3.)PreRendering: During this phase, the control is updated with the changes made to it. This prepares the control for rendering.
4.)Saving: During this phase, the state information of the control is saved. For example, if a value is set for the control during the Load event, it is embedded in the HTML tag that will be returned to the browser.
5.)Rendering: During this phase, the server creates the corresponding HTML tag for the control.
6.)Disposing: During this phase, all cleanup tasks, such as closing files and database connections opened by the control are performed.
7.)Unloading: During this phase, all cleanup tasks, such as destroying the instances of the server control, are performed. This is the final event in the life cycle of a server control.

Windows Authentication in .NET

Securing a web application consists of two steps:
1.)Authenticating the user accessing the page
2.)Authorizing the user to access the page

Authentication is a process of determining whether a user is the one who he claims to be. Typically this will be determined with the help of user id and password.
Authorization is a process in which you decide whether an authenticated user is allowed to access certain page or resource. For example, operators may not be allowed to view certain confidential financial information that managers can view.

ASP.NET offers various ways to authenticate and authorize users of your web site. They are :

Windows authentication
Forms authentication (cookie authentication)
Passport authentication

Windows Authentication
The Windows authentication scheme uses the traditional mechanisms of Basic, NTLM/Kerberos and Digest authentication. Here the credentials of the logged-in user are used by IIS to authenticate web requests. In case integrated Windows authentication is turned off, a typical gray-colored dialog pops up asking for a user id and password.

Steps involved in implementing windows authentication and authorization
1.)Create an ASP.NET web application
2.)Modify web.config to set authentication mode to windows
3.)Modify web.config to deny access to anonymous users
4.)Authorize users based on their NT user groups (roles)

Monitoring File System Events in .NET

Introduction
In some applications, like file system explorers and file managers, it becomes necessary to keep an eye on file system changes. In traditional tools like VB6 this task usually required a third-party component or Windows APIs.
In .NET, the FileSystemWatcher class from the System.IO namespace provides an easy way to do the same thing. A Windows file system has many events associated with files and folders, including creation, renaming, deletion and change. In this article we will see how to use the FileSystemWatcher class.

Steps involved in file system monitoring
The following steps are involved in a typical application that uses the FileSystemWatcher class:

1.) Import required namespace
The FileSystemWatcher class resides in the System.IO namespace, so you need to import that namespace. In addition to System.IO you will also need some threading support, since you will be continuously monitoring for changes; hence the System.Threading namespace as well.
Imports System.IO
Imports System.Threading

2.) Create object of FileSystemWatcher class and set its properties
Dim fw As New FileSystemWatcher()
fw.Path = "c:\bipin"
fw.Filter = "*.*"
fw.IncludeSubdirectories = True

The Path property sets the location that is to be monitored.
The Filter property sets the type of files to be monitored.
The IncludeSubdirectories property decides whether changes in sub-folders are monitored as well.

3.) Attach event handlers for various events
Next, we will attach event handlers for the various events, as shown below:
AddHandler fw.Renamed, AddressOf MyRenamedHandler
AddHandler fw.Created, AddressOf MyCreatedHandler
AddHandler fw.Changed, AddressOf MyChangedHandler
AddHandler fw.Deleted, AddressOf MyDeletedHandler
Here, we have added event handlers to the Renamed, Created, Changed and Deleted events using the AddHandler statement.

4.)Watch for any changes
Now that we have configured our FileSystemWatcher object it is time to start monitoring the file system. For our example I have put monitoring on a separate thread.
Dim t As New Thread(AddressOf WatchChange)
t.Start() ' the thread must be started explicitly, otherwise nothing is monitored

Public Sub WatchChange()
    While True
        fw.WaitForChanged(WatcherChangeTypes.All)
    End While
End Sub

The WaitForChanged method is invoked inside an endless While loop. The argument to the method tells it that we are interested in all types of changes.

5.)Write event handlers
The final thing is to write the event handlers for the events mentioned above.
Public Sub MyRenamedHandler(ByVal s As Object, ByVal e As RenamedEventArgs)
    Dim strMsg As String
    strMsg = "Old Path :" & e.OldFullPath & vbCrLf
    strMsg += "New Path :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Public Sub MyCreatedHandler(ByVal s As Object, ByVal e As FileSystemEventArgs)
    Dim strMsg As String
    strMsg = "File/Folder Created :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Public Sub MyChangedHandler(ByVal s As Object, ByVal e As FileSystemEventArgs)
    Dim strMsg As String
    strMsg = "File/Folder Changed :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Public Sub MyDeletedHandler(ByVal s As Object, ByVal e As FileSystemEventArgs)
    Dim strMsg As String
    strMsg = "File/Folder Deleted :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Monday, December 3, 2007

Custom Error Handling in ASP.NET

There are three error modes in which an ASP.Net application can work:

1.) Off Mode
2.) On Mode
3.) RemoteOnly Mode

The mode attribute determines whether or not a detailed ASP.Net error message is displayed. By default, the mode value is set to "RemoteOnly".

Off Mode
When the error attribute is set to "Off", ASP.Net uses its default error page for both local and remote users in case of an error.

On Mode
In case of "On" Mode, ASP.Net uses user-defined custom error page instead of its default error page for both local and remote users. If a custom error page is not specified, ASP.Net shows the error page describing how to enable remote viewing of errors.

RemoteOnly
The detailed ASP.Net error page is shown only to local users. Remote requests are shown the custom error page if one is configured in the settings, or an IIS error otherwise.

Customization of the error page is implemented by adding a value for the defaultRedirect attribute in the <customErrors> tag of the configuration file "web.config". This file determines configuration settings for the underlying application.
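
For example, a minimal customErrors section might look like this:

<customErrors mode="RemoteOnly" defaultRedirect="error.htm" />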

Notification of Error to the Administrator

In a practical web application, customization of error pages is not the only requirement. The error, if encountered, should be reported to the administrator so that it can be rectified, thus enabling subsequent requests to work properly without any error.
Notification of the error can be sent to the administrator in one of the following two ways:

1.) Error can be registered as a log entry in the Windows Event Log on the administrator's machine.
2.) An Email can be sent to the administrator with a suitable error message.

Writing to the Event Log

In ASP.Net, errors can be handled programmatically by writing appropriate code in the page-level error event (for errors on an individual page) or in the application-level error event (for handling errors that may occur in any page of the application).

Therefore, code for writing in the Event Log should be written in either of the events, depending on the requirement of the application. To illustrate this example, I have written the code in the application-level event with the error mode set to "RemoteOnly" and the "defaultRedirect" attribute to "error.htm". The application-level error event should be included in the global file "global.asax" within the same application folder.

Sending Email To the Administrator

Imports System.Web
Imports System.Web.SessionState
Imports System.Web.Mail

Public Class Global
    Inherits System.Web.HttpApplication

    Sub Application_Error(ByVal sender As Object, ByVal e As EventArgs)
        Dim mail As New MailMessage
        ' build the message body from the last server error
        Dim ErrorMessage As String = "The error description is as follows : " & _
            Server.GetLastError().ToString()
        mail.To = "administrator@domain.com"
        mail.Subject = "Error in the Site"
        mail.Priority = MailPriority.High
        mail.BodyFormat = MailFormat.Text
        mail.Body = ErrorMessage
        ' note: SmtpMail.SmtpServer may need to be set for your mail server
        SmtpMail.Send(mail)
    End Sub
End Class

How to give strong name to DLL and how to install DLL in GAC

Steps for Giving strong name to DLL

1.)Open .net command prompt.

2.)Go to the folder containing the DLL.

3.)Type sn -k test.snk; you can use any file name instead of test. This will create a test.snk file in that folder.

4.)Open the AssemblyInfo.cs file of the project.

5.)Point the AssemblyKeyFile attribute at the key file, for example: [assembly: AssemblyKeyFile(@"E:\hemant\practice\HP\bin\Debug\HP.snk")]

6.)Build the application; the strong name is now created for your DLL.

Steps for Installing in GAC

1.)After giving the strong name, type gacutil -i followed by the DLL name at the .NET command prompt, from the folder containing the DLL. This will install the DLL into the Global Assembly Cache.

2.)Copy the DLL file to the C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322 folder.

3.)Add a reference to it from another project using the .NET tab.
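
As a quick reference, the two command-line steps look like this (the key file name follows the example above, and HP.dll assumes the project named HP from the path shown earlier):

sn -k test.snk
gacutil -i HP.dll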

Thursday, November 29, 2007

ADO.NET 2.0 Data Source Controls

ASP.NET 2.0 has following data source controls :

SqlDataSource
AccessDataSource
ObjectDataSource
DataSetDataSource
XmlDataSource
SiteMapDataSource

SqlDataSource: Designed to work with SQL Server databases. It uses the SQL Server .NET data provider internally; the provider classes are defined in the System.Data.SqlClient namespace.
The data source is SQL Server.

AccessDataSource: Designed to work with Microsoft Access. It uses the OleDb data provider internally; the provider classes are defined in the System.Data.OleDb namespace. The data source is Microsoft Access.

ObjectDataSource: Designed to work with objects.
The data source is an object (usually a class that provides a well-defined structure).

XmlDataSource: Designed to work with XML documents. The data source is an XML document.

SiteMapDataSource: Designed to work with SiteMap objects, a new concept in ASP.NET 2.0. The data source is a SiteMap.

DataSetDataSource: Designed to work with DataSet objects. The data source is a DataSet.
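
As an illustration, here is a declarative SqlDataSource bound to a GridView; the connection string name and the query are placeholders you would replace with your own:

<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:MyConnection %>"
    SelectCommand="SELECT CustomerID, CompanyName FROM Customers" />
<asp:GridView ID="GridView1" runat="server" DataSourceID="SqlDataSource1" />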

Tuesday, November 27, 2007

New Caching Features in ASP.NET 2.0

The four main new features are:

1.)SQL Cache Invalidation
2.)Post-Cache Substitution
3.)Fragment Caching API
4.)Cache Configuration

SQL Cache Invalidation:

Let us assume you've got a web page which queries data from a database. If this is data that doesn't change very often, then there's really no need to query the database for the data each time a user requests the web page, so we decide to cache the data. The problem then becomes, how long do we cache the data for? If we cache it for too short a period then we make our web server work harder than it needs to. In the larger scheme of things, this will increase our operating costs since the server won't be able to handle as many requests and we'll need to add another server sooner than we really should. If, on the other hand, we cache the data for too long a period then we risk users being presented with out-of-date information. How big a problem this is really depends on the actual application and the data being cached, but it's generally not good to be showing users out-of-date information.

So finding the optimal length of time for which to cache a certain piece of data is not an easy task. It depends on a lot of factors, including how quickly the application needs to respond to users, the number of users it needs to support, how frequently the data in the database changes, how quickly those changes must be reflected in the web pages, and the potential consequences of displaying old data.

What if, instead of having to continually check for changes, we could simply ask the database to tell us when a change has been made? This is where SQL cache invalidation comes in. Instead of just picking a length of time to cache our data for, with ASP.NET 2.0's SQL cache invalidation we can set things up so that when the data is changed in the database, the cached version of the data is automatically cleared.
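
As a rough sketch, a page can tie its output cache to a table like this; it assumes a database entry named "Northwind" has been registered under the sqlCacheDependency section of web.config and that the Products table has been enabled for notifications (for example with aspnet_regsql):

<%@ OutputCache Duration="3600" VaryByParam="None" SqlDependency="Northwind:Products" %>

With this directive in place, the cached output is dropped as soon as a change is detected in the table, rather than after a fixed interval.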

Post-Cache Substitution

Post-cache substitution is for that situation where most everything on a page can be cached except for one or two little exceptions that must be handled dynamically. In ASP.NET 1.x, the only way to handle this type of scenario was to split the page up into sections that could be cached and then make those into user controls. It worked, but it could be really confusing because you had to sort of reverse your thinking about the problem. It was no longer "let's cache everything but this one little section", but instead became "let's find everything we need to cache and turn it into a user control so we can cache it." All of a sudden your page is split up into ten different user controls and everything got complicated simply because we wanted to do something like put a current timestamp at the bottom of the page.
Post-cache substitution is exactly what it says it is. We take something that has already been cached and simply substitute some dynamic data back into it. In effect, we are caching the whole page and then just executing the one little part that we didn't cache.
There are two ways to implement post-cache substitution: you can either use the Response.WriteSubstitution method or the <asp:Substitution> control.
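
A minimal sketch of the control-based approach, on a page that is otherwise fully output-cached:

<%@ OutputCache Duration="60" VaryByParam="None" %>
<asp:Substitution ID="TimeStamp" runat="server" MethodName="GetTime" />

The callback named by MethodName must be a Shared method that accepts an HttpContext and returns a String:

Public Shared Function GetTime(ByVal context As HttpContext) As String
    ' runs on every request, even though the rest of the page comes from the cache
    Return DateTime.Now.ToString()
End Function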

Fragment Caching API

The method used most often to cache sections of a page is called fragment caching. Fragment caching is what I described earlier where you move the sections to be cached into user controls and then set the OutputCache directive at the top of the control to cache it. This works fine in all versions of ASP.NET, but in ASP.NET 2.0, we now have access to the fragment caching API. This means that we are no longer stuck just choosing a finite number of minutes to cache the control. Now we can programmatically adjust the caching options.
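
For example, inside the code-behind of a user control that already carries an OutputCache directive, the control's CachePolicy property can be used to adjust the caching at runtime - a small sketch:

' lengthen the cache duration and switch to sliding expiration
Me.CachePolicy.Duration = TimeSpan.FromMinutes(10)
Me.CachePolicy.SetSlidingExpiration(True)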

Cache Configuration

There have been two main advances in this area: the ability to create and use cache profiles and to set cache properties via your application's configuration files.

Cache Profiles :
In ASP.NET 1.x you needed to set the length of time for which a file should be cached via the OutputCache directive at the top of the file. This made changing caching configuration relatively difficult because in order to change the setting you had to modify the setting in each file which implemented caching.
In ASP.NET 2.0, you can define what are called cache profiles. This allows you to create named sets of settings which are defined in your web.config file. Then if you find you need to make a change to one of the profiles, all you need to do is edit the profile in the config file and the change is picked up by all the scripts using that profile.
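
A sketch of such a profile in web.config:

<system.web>
  <caching>
    <outputCacheSettings>
      <outputCacheProfiles>
        <add name="ShortLived" duration="60" varyByParam="None" />
      </outputCacheProfiles>
    </outputCacheSettings>
  </caching>
</system.web>

A page then refers to the profile by name: <%@ OutputCache CacheProfile="ShortLived" %>.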

Cache Configuration via Config Files :
You can now modify caching parameters via ASP.NET's configuration files. You can enable or disable output and fragment caching, modify a number of parameters, and even specify how much memory the system should allow caching to use.

Deploying web part on virtual server gallery

These steps are specific to Windows SharePoint Server 2003 and a web part developed in ASP.NET with Visual Studio 2003:

1.) Open the web part project in visual studio

2.) Choose File -> Add Project -> New Project -> Setup and Deployment Projects, and add a setup project to the solution.

3.) Select the setup project in the Solution Explorer and choose Project -> Add -> Project Output.

4.) Select Primary Output and Content Files from the web part project and choose OK; Visual Studio adds those items to the setup project.

5.) Choose Build -> Rebuild Solution. (By doing this Visual Studio rebuilds the web part assembly and packages the assembly and content files into the CAB file.)

6.) Copy the resulting CAB file to the webparts folder

7.) Run stsadm.exe to install the CAB file on the server.

You can locate stsadm.exe file at following location:

C:\program files\common files\Microsoft shared\web server extensions\60\bin\

You can run the following command at the command prompt:

stsadm -o addwppack -filename "c:\inetpub\wwwroot\Calendar.CAB"
This will install the Calendar web part to the "bin" directory of the virtual server, and it will be available to drag and drop in SharePoint Server 2003 under the Virtual Server Gallery section.

For removing a web part from the list, we have to use the following command at the command prompt:

stsadm -o deletewppack -name "Calendar.CAB"

Impersonation with ASP.NET 2.0

Impersonation:

Impersonation is the process of executing code in the context of another user identity. By default, all ASP.NET code is executed using a fixed machine-specific account. To execute code using another identity we can use the built-in impersonation capabilities of ASP.NET. We can use a predefined user account or user's identity, if the user has already been authenticated using a windows account.

We can use impersonation in these two scenarios:

1.) To give each web application different permissions.
2.) To use existing Windows user permissions.

These two scenarios are fundamentally different. In the first one, impersonation defines a single, specific account. In this case, no matter which user accesses the application, and no matter what type of user-level security you use, the code runs under the account you've set.
In the second one, the user must be authenticated by IIS. The web-page code will then execute under the identity of the appropriate user.

Implement Impersonation:

Impersonate the Microsoft IIS Authenticated Account or User: To impersonate the IIS authenticating user on every request for every page in an ASP.NET application, we must include an <identity> tag in the Web.config file of this application and set the impersonate attribute to true.

Impersonate a Specific User: To impersonate a specific user for all the requests on all pages of an ASP.NET application, you can specify the userName and password attributes in the <identity> tag of the Web.config file for that application.
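
The two variants look like this in web.config (the user name and password here are placeholders):

<identity impersonate="true" />

<identity impersonate="true" userName="MYDOMAIN\someuser" password="secret" />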

Monday, November 26, 2007

ASP.NET 2.0's Provider Model

The provider design pattern is used throughout ASP.NET 2.0. The beauty of the provider model is that the customer implementing the solution can specify a custom class that the system should use. This custom class must implement the system's well-defined API, but it allows for any custom implementation to be seamlessly plugged in. That is, once this API is defined, the system implementor can create a default concrete implementation - one that uses SQL Server and a Users table - that most customers can use without modification. Those customers that have a custom need - those that want to use Oracle or have user data stored in some other manner - can create classes that provide the necessary functionality and plug them into the system.

The Benefits of the Provider Model:

1.) There is a clean separation between the code and the backend implementation. Regardless of whether the code to authenticate a user runs against a SQL Server 2000 database's Users table or against an Active Directory store, the code from the page developer's perspective is the same: DataProvider.Instance().AuthenticateUser(username, password). The backend implementation changes are transparent.

2.) Since system architects are strongly encouraged to create a default concrete implementation, the provider model offers the best of both worlds: those who are content with the default implementation find that the system just works as expected, while those who need to customize the system can do so without upsetting the existing code or programmatic logic. This design pattern also makes prototyping and agile development a lot easier. For example, in the early iterations of working with the system, it might be easier to just use the default implementation. Later, however, you may suspect you'll need to customize certain aspects in order to integrate the work with your company's existing systems. When that time comes, you can achieve the needed customization through the provider model, meaning your earlier work need not be changed to reflect the backend implementation changes.

3.) Like many good design patterns, the provider model also affords separation of duties among developers. One set of developers can be tasked with mastering the system's API, while others can be tasked with focusing on the backend implementation and customization. These two groups can work on the system without stepping on one another's toes. Furthermore, if the system being worked on is an industry standard - like ASP.NET 2.0 - skills from both tasks can be easily carried over into future jobs.


ASP.NET 2.0 utilizes the provider model throughout its architecture; many of its subsystems - Membership, Site Navigation, Personalization - are built on it. Each of these subsystems provides a default implementation, but enables customers to tweak the functionality to their own needs. For example, the Site Navigation piece of ASP.NET 2.0 allows a page developer to define the navigational structure of their website. This data can then be used by a variety of Web controls to display site maps, breadcrumbs, treeviews, or menus that highlight the site's navigation and/or show the user's location in the site. In addition to navigation-related Web controls, the site navigation API provides a bevy of methods for interacting with the website's navigation information.
By default the site's navigational information must be encoded in a properly-formatted XML file. This is the data store that the default site navigation is hard-coded to use. However, ASP.NET 2.0's provider model makes it easy for you to use your own data store for site navigation.
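For instance, plugging your own site navigation provider into the system is just a matter of configuration; the type and assembly names below are hypothetical:

<siteMap defaultProvider="CustomSiteMapProvider">
  <providers>
    <add name="CustomSiteMapProvider"
         type="MyCompany.Web.CustomSiteMapProvider, MyCompany.Web" />
  </providers>
</siteMap>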
The provider model is one of ASP.NET 2.0's greatest features for migration. ASP.NET 2.0 offers a lot of new features that developers had to custom bake in 1.x. If these new features in 2.0 used rigid implementations, it would dissuade migration from 'living' 1.x applications that used custom solutions since many of the new ASP.NET 2.0 Web controls use these new subsystems. With the provider model in place, however, we can upgrade our 1.x apps to 2.0 and create a provider to have 2.0's new subsystems integrate with our custom baked solutions. That means when moving to 2.0 we can use the new Web controls and have them seamlessly use our existing systems thanks to the provider model.


For More Information... read Rob Howard's two articles:
Provider Model Design Pattern and Specification, and
The Provider Model in ASP.NET 1.x

Displaying Random Images in an ASP.NET Web Page

Displaying random images can be done using two different approaches. They are:

1.) Randomly Displaying an Image from a Directory of Images:

The easiest way to display a random image is to add an ASP.NET Image control to your page (or Master Page) and to write a little code that gets all of the image files from a particular directory, uses the Random class to randomly pick an image from the list, and assigns the randomly selected image's path to the ASP.NET Image control's ImageUrl property.
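
A minimal sketch of this approach, assuming an Image control named imgRandom and a folder of .jpg files under ~/Images (you will need Imports System.IO):

Dim files() As String = Directory.GetFiles(Server.MapPath("~/Images"), "*.jpg")
Dim rnd As New Random()
' pick one file at random and point the Image control at it
Dim pick As String = Path.GetFileName(files(rnd.Next(files.Length)))
imgRandom.ImageUrl = "~/Images/" & pick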

2.)Displaying Random Images Using the ContentRotator Web Control:

Using the ContentRotator control is easy: just specify what content items are to be considered when choosing the content item to display. The content items can be specified through an XML file, added programmatically to the control, or hard-coded through the control's declarative syntax.
The algorithm used to determine what content item to display uses each content item's impressions attribute.

"The algorithm used to randomly choose a content item works by laying out each applicable content item end-to-end, forming a line. The length of each content item is its impressions value, meaning that the total length of the line is the sum of the applicable content items' impressions. Next, a random number less than the total length is chosen, and the content item to display is the one that lies at the location of the random number."

To complete the implementation of this approach, all we need to do is add the ContentRotator control to the ASP.NET page or Master Page where we want the random image to appear and set its ContentFile property to the path of the XML content file.

Using the ContentRotator control in your application involves the following steps:

1.) Add the skmContentRotator.dll assembly to your application's /Bin directory.

2.) At the top of the ASP.NET page or Master Page where you want to use the ContentRotator control, add the following @Register directive: <%@ Register TagPrefix="skm" Namespace="skmContentRotator" Assembly="skmContentRotator" %>.

3.) Add the ContentRotator control to a page by adding the following markup: <skm:ContentRotator ID="ContentRotator1" runat="server" ContentFile="~/content.xml" />.

Alternatively, you can bypass steps 2 and 3 above by adding the ContentRotator control to the Toolbox and then dragging and dropping the control from the Toolbox onto your web page or Master Page.

Sunday, November 25, 2007

SQL SPY is returning

Project SQL SPY 5.2 is out! This version of SQL SPY fully supports SQL Server 2005 and has many new features exclusively for SQL 2005. SQL SPY was designed as a tool to display, monitor and report on valuable SQL Server information. Look around the internet and find any SQL Server monitoring tool. Now compare them to SQL SPY. What you will find is that you are going to pay hundreds, if not thousands of dollars for them. Since 1999 I have dedicated all my free time to developing an alternative to those high priced applications. Will those companies actually listen to you and add the features or functionality that you really want? SQL SPY supports 75 distinct features, and every one of those features has come from you.
Now you can customize SQL SPY yourself. I am offering the complete SQL SPY project for download!
SQL SPY's reporting capabilities include the Server, Database, and Connection levels. Generate data object biographies that include dependencies, sizes, indexes, rules, triggers and defaults, stored procedures, user functions and views. Monitor and record user connections over long periods of time with minimal impact even within the most sensitive production environment.
Important!
1. If you are upgrading or reinstalling SQL SPY, you will need to use the "Clean" button on the SQL registration window and re-register any SQL 2005 instances to be able to use the new features.
2. I will not be supporting SQL SPY as of version 5.2. I am unable to keep up with the demand.

Please download the setup from
"http://www.hybridx.com/_Downloads/SQL SPY Setup.zip"

Using T-SQL MERGE command in SQL Server 2008

The MERGE SQL command is one of the new features introduced in the upcoming SQL Server 2008. Basically, it's used to insert, delete or update records in a target table based on the result of a join with a source table. Instead of using a combination of IFs and SELECTs, MERGE makes it possible to write one query which will:
join the tables
specify matching values and perform requested action
specify non-matching values and also perform requested action
The following example should give you some idea about MERGE command.

I assume you have already created some database for testing purposes, so the first thing that needs to be done is preparing two tables. One of the tables, TestTable1, is going to be the target table, while the other one, SourceTable1, will be used as a source table for the merge. The following queries create the tables and populate them with some values. Notice the difference both in column naming and table definition.
CREATE TABLE TestTable1(tableId int PRIMARY KEY, textData varchar(20), intData int)
CREATE TABLE SourceTable1(tableId int PRIMARY KEY, someText varchar(20), someInt int, someBit bit)

INSERT INTO TestTable1 VALUES(1, 'Test 1', 21)
INSERT INTO TestTable1 VALUES(2, 'Test 2', 21)
INSERT INTO TestTable1 VALUES(7, 'Test 7', 21)
INSERT INTO TestTable1 VALUES(9, 'Test 9', 21)
INSERT INTO SourceTable1 VALUES(1, 'Merge source 1', 21, 0)
INSERT INTO SourceTable1 VALUES(2, 'Merge source 2', 31, 1)
INSERT INTO SourceTable1 VALUES(3, 'Merge source 3', 55, 1)
INSERT INTO SourceTable1 VALUES(4, 'Merge source 4', 1, 0)
INSERT INTO SourceTable1 VALUES(5, 'Merge source 5', 13, 0)
INSERT INTO SourceTable1 VALUES(6, 'Merge source 6', 90, 1)
INSERT INTO SourceTable1 VALUES(8, 'Merge source 8', 97, 1)
INSERT INTO SourceTable1 VALUES(9, 'Merge source 9', 6, 0)
INSERT INTO SourceTable1 VALUES(10, 'Merge source 10', 11, 0)

The tables are ready, so now it's time to write the query to merge SourceTable1 into TestTable1 according to the following rules:
1) If the tableId of the source record does not exist in the target table, copy the record into the target table, skipping the someBit value
2) If the tableId of the source record exists in the target table, overwrite the target record's textData with the matching source record's someText value. If the source record's someBit is set to 1, also overwrite intData with the source's someInt; otherwise leave it intact

The following query performs these actions:
MERGE INTO TestTable1 T
USING SourceTable1 S ON S.tableId = T.tableId
WHEN NOT MATCHED THEN INSERT (tableId, textData, intData) VALUES(S.tableId, S.someText, S.someInt)
WHEN MATCHED THEN UPDATE
SET textData = S.someText,
intData = CASE S.someBit WHEN 1 THEN S.someInt
ELSE T.intData
END;
As you can (hopefully) see, the code is simple and easy to read. First, I defined the join criteria (ON S.tableId = T.tableId), then the INSERT action for non-matching records, and finally the UPDATE action for matching records.
Warning: Notice the semicolon at the end of the statement - MERGE has to be terminated, otherwise execution will return an error.
Hint: You can use a second WHEN MATCHED clause, but you have to obey some rules. The first clause has to be accompanied by an AND condition. The second WHEN MATCHED is applied only if the first one isn't - so you cannot combine WHEN MATCHED clauses to perform more than one action on the same row. Also, when using two WHEN MATCHED clauses, one of them has to specify the UPDATE action and the other DELETE, as in the sketch below.
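A sketch of what two WHEN MATCHED clauses could look like against the tables above; the business rule is invented purely for illustration:

MERGE INTO TestTable1 T
USING SourceTable1 S ON S.tableId = T.tableId
WHEN MATCHED AND S.someBit = 1 THEN
    UPDATE SET textData = S.someText, intData = S.someInt
WHEN MATCHED THEN
    DELETE;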
Link: There is pre-release documentation for MERGE which you can read here. If you are interested in a more detailed description of the command, it's a good place to start.

Thursday, November 22, 2007

WebParts in ASP.Net 2.0

WebParts are going to be the future of web-based management systems. WebParts give us the option of dragging and dropping objects on a page, as well as changing title and border style properties of objects at runtime. Before the introduction of WebParts this used to be a hectic task, because we had to write a lot of JavaScript and had to save the state of objects in a database.

There are two basic things in WebParts:
1.) WebPart manager
2.) WebPart zones

WebPartManager
The WebPartManager is the manager for all webparts. If webparts are used in a web project, then a WebPartManager is required. Usually you just drag and drop this onto your webform and are ready to go.

WebPart Zones
There are four kinds of WebPart zones:

1.) WebPart Zone
2.) Editor Zone
3.) Catalog Zone
4.) Connection Zone

The webpart Zone is the basic unit for webparts. By placing different contents in a webpart zone we can allow a user to drag and drop contents on a page.
To use the different zones, add a dropdownlist to your webform, add the following items to it, and switch the display mode in the dropdown's SelectedIndexChanged event (see the sketch after this list).
----Browse
----Design
----Edit
----Catalog
----Connect
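
A rough sketch of the mode-switching code, assuming a WebPartManager named WebPartManager1 and a dropdown named DropDownList1 with AutoPostBack set to True:

Protected Sub DropDownList1_SelectedIndexChanged(ByVal sender As Object, _
        ByVal e As EventArgs) Handles DropDownList1.SelectedIndexChanged
    ' look up the display mode by the name selected in the dropdown
    Dim mode As WebPartDisplayMode = _
        WebPartManager1.SupportedDisplayModes(DropDownList1.SelectedItem.Text)
    If mode IsNot Nothing Then
        WebPartManager1.DisplayMode = mode
    End If
End Sub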

Browse Mode
The Browse mode is the default mode of webparts. In Browse mode we cannot drag and drop the webparts, but we do see two options: minimize and close. Minimizing a webpart still displays it, in a minimized state. If you choose close, the webpart can only be restored while in catalog mode.

Design mode
In Design mode we can drag and drop objects between webpart zones.

Edit Mode
The edit mode is used to edit webparts at runtime. Editing a webpart is further divided into four types: Appearance, Behavior, Property and Layout.

Catalog mode
The Catalog mode gives us the option to add or remove webparts at runtime. For example, if we have a few modules like weather, news, shopping, horoscope etc. and want to give the user an option to show or hide these modules at runtime, we can accomplish this task using the catalog mode.

Connect mode
This mode allows webparts to communicate with each other. We can make static connections once (in our code) or we can allow users to create connections at runtime according to their needs. Connect mode doesn't mean that the webpart is connecting to a database; rather it means it is connected to other webparts. For example, if a webpart contains a grid used to display some records, and we want to filter it on the user's input, we could use a textbox in another webpart, which would send the filter criteria text by using the connect mode.

For more information, check
http://dotnetslackers.com/articles/aspnet/UsingWebPartsInASPNet20.aspx

Difference between Truncate and Delete

Truncate and Delete are both used to delete data from a table. Both of these commands will only delete the data of the specified table; they cannot remove the whole table (data along with structure).
TRUNCATE and DELETE remove the data, not the structure
Both commands remove rows from a table, but the table structure and its columns, constraints, indexes, and so on remain. To remove the table definition in addition to its data, use the DROP TABLE statement.

Conditional-based deletion of data
Conditional-based deletion of data means that not all rows are deleted. Let's suppose I have a table authors, and from this table I want to delete the authors that are living in Australia. Let's examine our options for doing this with each command.
TRUNCATE - In the case of the TRUNCATE command we can't perform conditional-based deletion, because no WHERE clause is allowed with this command.
DELETE - The DELETE command provides conditional-based deletion of data from the table using the WHERE clause.

Delete and Truncate both are logged operations:
In most of the articles I have read on the Internet, I have seen this written: "delete is a logged operation and truncate is not a logged operation", which means when we run the delete command it logs (records) the information about the deleted rows, and when we run the truncate command it doesn't log any data. But this is not true; truncate is also a logged operation, but in a different way. It uses fewer system and transaction log resources than delete. The TRUNCATE command uses minimal logging resources, which is why it is faster than delete. So both delete and truncate are logged operations, but they work differently, as shown below.

DELETE is a logged operation on a per-row basis.
The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. So, if you are deleting a huge number of records, it can cause your transaction log to grow. This means the deletion of a huge number of records will use more server resources, as it logs each and every row that is deleted; that is why your transaction log grows very rapidly, and it is also why the delete statement is slow. Some people ask: if this is done for each row, why doesn't Microsoft modify the delete statement to not record each deleted row? The answer is that when you run your databases in full recovery mode, detailed logging is necessary for SQL Server to be able to recover your database to the most recent state.

TRUNCATE logs the deallocation of the data pages in which the data exists. TRUNCATE is faster than DELETE due to the way TRUNCATE "removes" rows from the table. It won't log the deletion of each row; instead it logs the deallocation of the data pages of the table. The TRUNCATE statement removes the data by deallocating the data pages used to store the table data and records only the page deallocation in the transaction log. Actually, TRUNCATE does not remove data, but rather deallocates whole data pages and removes pointers to indexes. The data still exists until it is overwritten or the database is shrunk. This action does not require a lot of resources and is therefore very fast. It is a common mistake to think that TRUNCATE is not logged. This is wrong. The deallocation of the data pages is recorded in the log file. Therefore, "Books Online (BOL)" refers to TRUNCATE operations as "minimally logged" operations. You can use TRUNCATE within a transaction, and when this transaction is rolled-back, the data pages are reallocated again and the database is again in its original, consistent state.

Behavior of Delete and Truncate for identity columns
Now consider the case of identity columns. The TRUNCATE and DELETE commands behave differently against identity columns. When we use TRUNCATE, it resets the counter used by an identity column for new rows to the seed value defined for the column. DELETE, on the other hand, does not reset the counter of the identity column; it maintains the same counter for new rows. In both cases, if no seed was defined, the default value 1 is used. Because TRUNCATE resets the identity column counter, use DELETE instead of TRUNCATE when you want to retain the identity counter. A small sketch of this difference follows.
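
A small T-SQL sketch of the identity behavior described above:

CREATE TABLE t (id INT IDENTITY(1,1), val CHAR(1))
INSERT INTO t VALUES ('a')
INSERT INTO t VALUES ('b')
DELETE FROM t      -- the next insert gets id = 3
TRUNCATE TABLE t   -- the next insert starts over at the seed, id = 1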

bcp Utility

There was a time when I avoided the bcp utility like the plague. A few years ago, we needed to set up a once-a-month job to transfer about 100 GB of data from one server to another. Since this involved two SQL Server 2000 instances, the first attempt was using DTS, but the performance was just not good enough. We considered setting up a job to copy a backup file from one server to another, do a restore and then import the data. I decided to give bcp a try, exporting the data to files and then importing the data into the required tables. The performance was acceptable, and the bcp solution has been in place without any issues since then.
I think that the reason I avoided using bcp for so long is that it has a dizzying number of options. Luckily, I found some bcp scripts written by a vendor using just a handful of arguments that got me started. Since then I have found several opportunities to use bcp and now consider it a very useful tool.
The bcp utility is a command line tool. You can also use it from the Query Window if xp_cmdshell is enabled. One thing to keep in mind is that when it is used from the command window or batch file, the file path is in the context of the box where the command is run. When running from the Query Window or a stored proc, the file path will be in the context of the server.
Below is an example script that I ran from my laptop at home in the Query Window. It is running SQLExpress. There are many, many more options available for bcp, but this minimal set of arguments has worked for just about everything I have wanted to do. There is also a handy "queryout" argument you can use instead of "out" to run a select statement instead of naming a table or view; a sketch of it appears at the end of this post.
You will have to modify the server name and file path for your environment. Make sure that the bcp command ends up all on one line.

/*
Export from the AdventureWorks database to a file
out = export to a file
-S = Server\Instance
-T = trusted authentication
-c = use character datatypes, tab delimited
-t = override the delimiter
*/
exec master.dbo.xp_cmdshell 'bcp AdventureWorks.HumanResources.Employee out c:\temp\employee.txt -S localhost\SQLExpress -T -c -t ""'
--Create a blank table
use AdventureWorks
go
if object_id('dbo.test_import') is not null
drop table dbo.test_import
select * into dbo.test_import from HumanResources.Employee where 1 = 2
/*
Import from a file into a table.
in = import from a file
-S = Server\Instance
-T = trusted authentication
-c = character datatypes, tab delimited
-t = override the delimiter
*/
exec master.dbo.xp_cmdshell 'bcp AdventureWorks.dbo.test_import in c:\temp\employee.txt -S localhost\SQLExpress -T -c -t ""'
select * from dbo.test_import
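
And here is the "queryout" variant mentioned earlier, again only a sketch to adapt to your own server and paths:

exec master.dbo.xp_cmdshell 'bcp "select LoginID, HireDate from AdventureWorks.HumanResources.Employee where SalariedFlag = 1" queryout c:\temp\salaried.txt -S localhost\SQLExpress -T -c'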

Wednesday, November 21, 2007

ClickOnce Deployment in .NET Framework 2.0

Microsoft has released a new technology named ClickOnce that is designed to solve the deployment issues for a windows forms application. This new technology not only provides an easy application installation mechanism but also enables easy deployment of upgrades to existing applications.

This technology works as follows:

1.) You create a Windows forms application and use the Publish option to deploy the application onto any of the following locations: File System, Local Web Server, FTP Site, or a Remote Web Site.
2.) Once the application is deployed onto the target location, the users of the application can browse to the publish.htm file and install the application onto their machines. Note that the publish.htm file is the entry point for installing the application; this will be discussed later in this article.
3.) Once the user has installed the application, a shortcut icon will be added to the desktop and the application will also be listed in the Control Panel/Add Remove Programs.
4.) When the user launches the application again, the manifest will contain all the information to decide if the application should go to the source location and check for updates to the original application. Let us say, for instance, a newer version of the application is available, it will be automatically downloaded and made available to the user. Note that when the new version is downloaded, it is performed in a transacted manner meaning that either the entire update is downloaded or nothing is downloaded. This will ensure that the application integrity is preserved.