Sunday, December 30, 2007

List Available DBCC Commands

DBCC (Database Consistency Checker) commands have been part of SQL Server since its earliest versions. This article lists the available DBCC commands as reported by the DBCC HELP command. Many of them are undocumented and rarely used in practice, but they form the basis for debugging complex problems. There are approximately 130 of them.

DBCC activecursors [(spid)]

DBCC addextendedproc (function_name, dll_name)

DBCC addinstance (objectname, instancename)

DBCC adduserobject (name)

DBCC auditevent (eventclass, eventsubclass, success, loginname, rolename, dbusername, loginid)

DBCC autopilot (typeid, dbid, tabid, indid, pages [,flag])

DBCC balancefactor (variance_percent)

DBCC bufcount [(number_of_buffers)]

DBCC buffer ( {'dbname' | dbid} [, objid [, number [, printopt={0|1|2} ][, dirty | io | kept | rlock | ioerr | hashed ]]])

DBCC bytes ( startaddress, length )

DBCC cachestats

DBCC callfulltext

DBCC checkalloc [('database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, ESTIMATEONLY]]

DBCC checkcatalog [('database_name')] [WITH NO_INFOMSGS]

DBCC checkconstraints [( 'tab_name' | tab_id | 'constraint_name' | constraint_id )] [WITH ALL_CONSTRAINTS | ALL_ERRORMSGS]

DBCC checkdb [('database_name'[, NOINDEX | REPAIR])] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, DBCC TABLOCK]

DBCC checkdbts (dbid, newTimestamp)]

DBCC checkfilegroup [( [ {'filegroup_name' | filegroup_id} ] [, NOINDEX] )] [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC checkident ('table_name'[, { NORESEED | {RESEED [, new_reseed_value] } } ] )

DBCC checkprimaryfile ( {'FileName'} [, opt={0|1|2|3} ])

DBCC checktable ('table_name'[, {NOINDEX | index_id | REPAIR}]) [WITH NO_INFOMSGS[, ALL_ERRORMSGS][, PHYSICAL_ONLY][, ESTIMATEONLY][, TABLOCK]]

DBCC cleantable ('database_name'|database_id, 'table_name'|table_id, [batch_size])

DBCC cacheprofile [( {actionid} [, bucketid])

DBCC clearspacecaches ('database_name'|database_id, 'table_name'|table_id, 'index_name'|index_id)

DBCC collectstats (on | off)

DBCC concurrencyviolation (reset | display | startlog | stoplog)

DBCC config

DBCC cursorstats ([spid [,'clear']])

DBCC dbinfo [('dbname')]

DBCC dbrecover (dbname [, IgnoreErrors])

DBCC dbreindex ('table_name' [, index_name [, fillfactor ]]) [WITH NO_INFOMSGS]

DBCC dbreindexall (db_name/db_id, type_bitmap)

DBCC dbrepair ('dbname', DROPDB [, NOINIT])

DBCC dbtable [({'dbname' | dbid})]

DBCC debugbreak

DBCC deleteinstance (objectname, instancename)

DBCC des [( {'dbname' | dbid} [, {'objname' | objid} ])]

DBCC detachdb [( 'dbname' )]

DBCC dropcleanbuffers

DBCC dropextendedproc (function_name)

DBCC dropuserobject ('object_name')

DBCC dumptrigger ({'BREAK', {0 | 1}} | 'DISPLAY' | {'SET', exception_number} | {'CLEAR', exception_number})

DBCC errorlog

DBCC extentinfo [({'database_name'| dbid | 0} [, {'table_name' | table_id} [, {'index_name' | index_id | -1}]])]

DBCC fileheader [( {'dbname' | dbid} [, fileid])

DBCC fixallocation [({'ADD' | 'REMOVE'}, {'PAGE' | 'SINGLEPAGE' | 'EXTENT' | 'MIXEDEXTENT'}, filenum, pagenum [, objectid, indid])

DBCC flush ('data' | 'log', dbid)

DBCC flushprocindb (database)

DBCC free dll_name (FREE)

DBCC freeproccache

DBCC freeze_io (db)

DBCC getvalue (name)

DBCC help ('DBCC_command' | '?')

DBCC icecapquery ('dbname', stored_proc_name [, #_times_to_icecap (-1 infinite, 0 turns off)]) Use 'DBCC icecapquery (printlist)' to see list of SP's to profile. Use 'DBCC icecapquery (icecapall)' to profile all SP's.

DBCC incrementinstance (objectname, countername, instancename, value)

DBCC ind ( { 'dbname' | dbid }, { 'objname' | objid }, { indid | 0 | -1 | -2 } )

DBCC indexdefrag ({dbid | dbname | 0}, {tableid | tablename}, {indid | indname})

DBCC inputbuffer (spid)

DBCC invalidate_textptr (textptr)

DBCC invalidate_textptr_objid (objid)

DBCC iotrace ( { 'dbname' | dbid | 0 | -1 }, { fileid | 0 }, bufsize, [ { numIOs | -1 } [, { timeout (sec) | -1 } [, printopt={ 0 | 1 }]]] )

DBCC latch ( address [, 'owners'] [, 'stackdumps'])

DBCC lock ([{'DUMPTABLE' | 'DUMPSTATS' | 'RESETSTATS' | 'HASH'}]|[{'STALLREPORTTHESHOLD', stallthreshold}])

DBCC lockobjectschema ('object_name')

DBCC log ([dbid[,{0|1|2|3|4}[,['lsn','[0x]x:y:z']|['numrecs',num]|['xdesid','x:y']|['extent','x:y']|['pageid','x:y']|['objid',{x,'y'}]|['logrecs',{'lop'|op}...]|['output',x,['filename','x']]...]]])

DBCC loginfo [({'database_name' | dbid})]

DBCC matview ({'PERSIST' | 'ENDPERSIST' | 'FREE' | 'USE' | 'ENDUSE'})

DBCC memobjlist [(memory object)]

DBCC memorymap

DBCC memorystatus

DBCC memospy

DBCC memusage ([IDS | NAMES], [Number of rows to output])

DBCC monitorevents ('sink' [, 'filter-expression'])

DBCC newalloc - please use checkalloc instead

DBCC no_textptr (table_id , max_inline)

DBCC opentran [({'dbname'| dbid})] [WITH TABLERESULTS[,NO_INFOMSGS]]

DBCC outputbuffer (spid)

DBCC page ( {'dbname' | dbid}, filenum, pagenum [, printopt={0|1|2|3} ][, cache={0|1} ])

DBCC perflog

DBCC perfmon

DBCC pglinkage (dbid, startfile, startpg, number, printopt={0|1|2}, targetfile, targetpg, order={1|0})

DBCC pintable (database_id, table_id)

DBCC procbuf [({'dbname' | dbid}[, {'objname' | objid}[, nbufs[, printopt = { 0 | 1 } ]]] )]

DBCC proccache

DBCC prtipage (dbid, objid, indexid [, [{{level, 0} | {filenum, pagenum}}] [,printopt]])

DBCC pss [(uid[, spid[, printopt = { 1 | 0 }]] )]

DBCC readpage ({ dbid, 'dbname' }, fileid, pageid, formatstr [, printopt = { 0 | 1} ])

DBCC rebuild_log (dbname [, filename])

DBCC renamecolumn (object_name, old_name, new_name)

DBCC resource

DBCC row_lock (dbid, tableid, set) - Not Needed

DBCC ruleoff ({ rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC ruleon ( rulenum | rulestring } [, { rulenum | rulestring } ]+)

DBCC setcpuweight (weight)

DBCC setinstance (objectname, countername, instancename, value)

DBCC setioweight (weight)

DBCC show_statistics ('table_name', 'target_name')

DBCC showcontig (table_id | table_name [, index_id | index_name] [WITH FAST, ALL_INDEXES, TABLERESULTS [,ALL_LEVELS]])

DBCC showdbaffinity

DBCC showfilestats [(file_num)]

DBCC showoffrules

DBCC showonrules

DBCC showtableaffinity (table)

DBCC showtext ('dbname', {textpointer | {fileid, pageid, slotid [,option]}})

DBCC showweights

DBCC shrinkdatabase ({dbid | 'dbname'}, [freespace_percentage [, {NOTRUNCATE | TRUNCATEONLY}]])

DBCC shrinkdb is no longer supported. Please use shrinkdatabase instead

DBCC shrinkfile ({fileid | 'filename'}, [compress_size [, {NOTRUNCATE | TRUNCATEONLY | EMPTYFILE}]])

DBCC sqlmgrstats

DBCC sqlperf (LOGSPACE)({IOSTATS | LRUSTATS | NETSTATS | RASTATS [, CLEAR]} | {THREADS} | {LOGSPACE})

DBCC stackdump [( {uid[, spid[, ecid]} | {threadId, 'THREADID'}] )]

DBCC tab ( dbid, objid )

DBCC tape_control {'query' | 'release'}[,('\\.\tape')]

DBCC tec [( uid[, spid[, ecid]] )]

DBCC textall [({'database_name'|database_id}[, 'FULL' | FAST] )]

DBCC textalloc ({'table_name'|table_id}[, 'FULL' | FAST])

DBCC thaw_io (db)

DBCC traceoff [( tracenum [, tracenum ... ] )]

DBCC traceon [( tracenum [, tracenum ... ] )]

DBCC tracestatus (trace# [, ...trace#])

DBCC unpintable (dbid, table_id)

DBCC updateusage ({'database_name'| 0} [, 'table_name' [, index_id]]) [WITH [NO_INFOMSGS] [,] COUNT_ROWS]

DBCC upgradedb (db)

DBCC usagegovernor (command, value)

DBCC useplan [(number_of_plan)]

DBCC useroptions

DBCC wakeup (spid)

DBCC writepage ({ dbid, 'dbname' }, fileid, pageid, offset, length, data)

@Page Directives

@Page directives allow you to set attributes that directly affect your ASP.NET pages that end with the .aspx extension. There are many attributes available in the SDK's exhaustive list, but the entries below cover some of the more commonly used ones.

Buffer
Sets whether response buffering is enabled. This is true by default and would need to be set explicitly to false.

EnableSessionState
This defines Session-State requirements for the page. If this is set to true (default) then Session-State is enabled; if false Session-State is disabled; or if ReadOnly then Session state can be read but not changed.

EnableViewState
This controls whether view state (which maintains GUI state) is preserved across page requests. It is true by default.

Explicit
Specifies whether all variables on a page must be declared using a Dim, Private, Public, or ReDim statement in Visual Basic .NET. The default is false.

Inherits
Defines the code-behind class that the page should inherit.

Language
Informs the .NET Framework of the language used for all inline (<% %>) and server-side script blocks within the ASP.NET page.

Src
This contains the name of the source file for an included code-behind page.

Trace
Indicates whether tracing is enabled; the default is false. Tracing is a debugging feature in ASP.NET that allows you to see diagnostic and debug information about an ASP.NET page.

clustered index in SQL Server

What is a clustered index

First off, we'll go through what a clustered index is. SQL Server has two types of indexes: clustered indexes and non-clustered indexes. Both types are organized in the same way, with a b-tree structure. The difference between them lies in what the leaf-level nodes (the lowest level of the tree) contain. In a clustered index the leaf level is the data itself, while the leaves of a non-clustered index contain bookmarks to the actual data. This means that for a table that has a clustered index, the data is actually stored in the order of the index.

What the bookmarks of a non-clustered index point to depends on whether the table also has a clustered index. If it does, the leaves of non-clustered indexes contain the clustering key (the specific value(s) of the column(s) that make up the clustered index) for each row. If the table does not have a clustered index it is known as a heap table, and the bookmarks in non-clustered indexes are in RID format (File#:Page#:Slot#), i.e. direct pointers to the physical location where the row is stored. Later in this article we will see why this difference is important. To make sure that everyone understands the difference between a clustered index and a non-clustered index, I have visualized them in two images (clustered | non-clustered). The indexes correspond to those of this table:

CREATE TABLE EMPLOYEES
(
empid int NOT NULL CONSTRAINT ix_pkEMPLOYEES PRIMARY KEY NONCLUSTERED
, name varchar(25) NOT NULL
, age tinyint NOT NULL
)


CREATE CLUSTERED INDEX ixcEMPLOYEES ON EMPLOYEES (name)


INSERT INTO EMPLOYEES (empid, name, age) VALUES (1, 'David', 42)
INSERT INTO EMPLOYEES (empid, name, age) VALUES (2, 'Tom', 31)
INSERT INTO EMPLOYEES (empid, name, age) VALUES (3, 'Adam', 27)
INSERT INTO EMPLOYEES (empid, name, age) VALUES (4, 'John', 22)


SELECT * FROM EMPLOYEES WHERE name = 'John'
SELECT * FROM EMPLOYEES WHERE empid = 1

In the real indexes these four rows would fit on the same page, but for this discussion I've just put one row on each page. So, to return results for the first query containing WHERE name = 'John', SQL Server will traverse the clustered index from the root down through the intermediate node levels until it finds the leaf page containing John, and it will then have all the data available to return for the query. But to return results for the second query, it will traverse the non-clustered index until it finds the leaf page containing empid 1, then use the clustering key found there (David) for a lookup in the clustered index to find the remaining data (in this case just the age column is missing). You can see this for yourself by viewing the execution plan for the queries in Query Analyzer (press Ctrl+K to see the plan).

Extended Stored Procedures

An extended stored procedure is simply a procedure that is implemented in a dynamic link library (DLL) — a library that is called by an application at runtime. Extended stored procedures can be used in much the same way as database stored procedures, except that extended stored procedures normally perform tasks related to the interaction of SQL Server with its operating environment. Tasks that are either too complicated or just not possible using Transact-SQL can often be performed with extended stored procedures.

Extended stored procedures are written using the Open Data Services (ODS) API. ODS is written and supported by Microsoft and is available on the Workstation version of SQL Server and as a separate product (and in the Back-Office Development Kit). The ODS development package includes both a static-link library (Opends60.lib) and C header files that let you create fully functional database services. ODS applications are dynamically linked with Opends60.dll.

Saturday, December 29, 2007

Using NHibernate with ASP.NET

NHibernate is a port of Hibernate Core for Java to the .NET Framework. It handles persisting plain .NET objects to and from an underlying relational database. Given an XML description of your entities and relationships, NHibernate automatically generates SQL for loading and storing the objects. Optionally, you can describe your mapping metadata with attributes in your source code.

NHibernate supports transparent persistence: your object classes don't have to follow a restrictive programming model. Persistent classes do not need to implement any interface or inherit from a special base class. This makes it possible to design the business logic using plain .NET (CLR) objects and object-oriented idiom.

Originally being a port of Hibernate 2.1, the NHibernate API is very similar to that of Hibernate. All Hibernate knowledge and existing Hibernate documentation is therefore directly applicable to NHibernate. Take a look at the available NHibernate Documentation.

NHibernate key features:
Natural programming model - NHibernate supports natural OO idiom; inheritance, polymorphism, composition and the .NET collections framework, including generic collections.
Native .NET - NHibernate API uses .NET conventions and idioms
Support for fine-grained object models - a rich variety of mappings for collections and dependent objects
No build-time bytecode enhancement - there's no extra code generation or bytecode processing steps in your build procedure
The query options - NHibernate addresses both sides of the problem; not only how to get objects into the database, but also how to get them out again
Custom SQL - specify the exact SQL that NHibernate should use to persist your objects. Stored procedures are supported on Microsoft SQL Server.
Support for "conversations" - NHibernate supports long-lived persistence contexts, detach/reattach of objects, and takes care of optimistic locking automatically
Free/open source - NHibernate is licensed under the LGPL (GNU Lesser General Public License)

T-SQL Coding Standards Part - 2

8. Tables that map many-to-many relationships should be named by concatenating the names of the tables in question, starting with the most central table’s name.
9. Primary and Foreign key fields are postfixed with ID.
--Correct
CREATE TABLE dbo.[User]
(
UserID int NOT NULL,
AddressID int NOT NULL --Foreign key
)
--Avoid
CREATE TABLE dbo.[User]
(
UserID int NOT NULL,
AddressFK int NOT NULL --Field name indicates its use as a foreign key
)
10. Avoid naming fields in a way that indicates their use as foreign keys.
--Avoid
CREATE TABLE dbo.[UserAddress]
(
UserFK int NOT NULL,
AddressFK int NOT NULL
)

T-SQL Coding Standards Part - 1

1. Use upper case for all T-SQL constructs, except types:
SELECT MAX(MyField) FROM MyTable
2. Use lower case for all T-SQL types and user names:
DECLARE @MyVariable int
3. Use camel casing for all UDOs:
CREATE TABLE dbo.MyTable
(
MyField int
)
4. Avoid abbreviations and single-character names
--Correct
DECLARE @Counter int
--Avoid
DECLARE @C int
5. UDO naming must conform to the following regular expression ([a-zA-Z][a-zA-Z0-9]+) - in short, don't use any special or language-dependent characters to name objects. Constraints can use the underscore character.
--Avoid
CREATE TABLE dbo.[User Information]
6. Use the following prefixes when naming objects:
•usp - User Stored Procedures
•svf - Scalar Valued Functions
•tvf - Table Valued Functions
•vi - Views
•FK_ - Foreign keys
•DF_ - Default constraints
•IX_ - Indexes
CREATE PROCEDURE dbo.uspMyProcedure AS (...)
CREATE FUNCTION dbo.svfMyFunction
(...)
RETURNS int
AS
(...)
CREATE FUNCTION dbo.tvfMyFunction
(...)
RETURNS TABLE
AS
(...)
CREATE VIEW dbo.viMyView AS (...)
7. Name tables in the singular form:
--Correct
CREATE TABLE dbo.Address
--Avoid
CREATE TABLE dbo.Addresses

SQL Server interview questions...

Check out the links below for interview questions:

http://www.t-mug.org/Interview_SQL.aspx
http://dotnetinterviews.spaces.live.com/
http://www.mcse.ms/message2303242.html

Benefits of using stored procedures

Applications that use stored procedures have the following advantages:

---Reduced network usage between clients and servers:

A client application passes control to a stored procedure on the database server. The stored procedure performs intermediate processing on the database server, without transmitting unnecessary data across the network. Only the records that are actually required by the client application are transmitted. Using a stored procedure can result in reduced network usage and better overall performance.

Applications that execute SQL statements one at a time typically cross the network twice for each SQL statement. A stored procedure can group SQL statements together, making it necessary to only cross the network twice for each group of SQL statements. The more SQL statements that you group together in a stored procedure, the more you reduce network usage and the time that database locks are held. Reducing network usage and the length of database locks improves overall network performance and reduces lock contention problems.

Applications that process large amounts of SQL-generated data, but present only a subset of the data to the user, can generate excessive network usage because all of the data is returned to the client before final processing. A stored procedure can do the processing on the server, and transmit only the required data to the client, which reduces network usage.
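
As a rough sketch of the grouping idea above (the table, column and procedure names here are hypothetical, not from the original article), a single stored procedure call can replace several separate statements, so the client crosses the network once instead of once per statement:

-- Hypothetical example: one call returns header rows, detail rows and a count.
CREATE PROCEDURE dbo.uspGetCustomerSummary
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the extra "rows affected" messages

    -- Header information for the customer
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;

    -- Detail rows for those orders
    SELECT oi.OrderID, oi.ProductID, oi.Quantity
    FROM dbo.OrderItems oi
    INNER JOIN dbo.Orders o ON o.OrderID = oi.OrderID
    WHERE o.CustomerID = @CustomerID;

    -- An aggregate the client would otherwise compute itself
    SELECT COUNT(*) AS OrderCount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END
GO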

---Enhanced hardware and software capabilities:

Applications that use stored procedures have access to increased memory and disk space on the server computer. These applications also have access to software that is installed only on the database server. You can distribute the executable business logic across machines that have sufficient memory and processors.

---Improved security:

By including database privileges with stored procedures that use static SQL, the database administrator (DBA) can improve security. The DBA or developer who builds the stored procedure must have the database privileges that the stored procedure requires. Users of the client applications that call the stored procedure do not need such privileges. This can reduce the number of users who require privileges.

---Reduced development cost and increased reliability:

In a database application environment, many tasks are repeated. Repeated tasks might include returning a fixed set of data, or performing the same set of multiple requests to a database. By reusing one common procedure, a stored procedure can provide a highly efficient way to address these recurrent situations.

---Centralized security, administration, and maintenance for common routines:

By managing shared logic in one place at the server, you can simplify security, administration, and maintenance. Client applications can call stored procedures that run SQL queries with little or no additional processing.

Difference between Gridview and Datagrid

The GridView and the DataGrid controls have a different foundation. To enable easy migration from existing pages, the GridView preserves the DataGrid object model as much as possible. However, you shouldn't expect 100 percent compatibility between DataGrid-based code and new GridView-based code.
Another key difference between the DataGrid and GridView controls lies in the adaptive user interface. Unlike the version 1.x DataGrid, the GridView can render on mobile devices, too. In other words, to build reports on mobile devices, you can use the same grid control you would use for desktop pages. The DataGrid in version 2.0 can also render adaptively, but its UI capabilities are not quite as rich as those of the GridView.
In ASP.NET 2.0, the DataGrid control has been enhanced to support common control features such as themes and personalization. In addition, the new DataGrid control can be populated by a data source control. Remember, though, that a DataGrid can only bind to a data source object for the purpose of reading data. To actually modify the underlying data source, some user-defined code is still required. In contrast, the GridView control takes advantage of the capabilities of the underlying data source and automatically deletes or updates records. Note that the GridView control also supports the classic binding mechanism based on the DataSource property and the DataBind method. Although fully supported, this programming practice is discouraged.



The GridView control provides options for paging, sorting, and formatting with the click of a button, whereas the DataGrid control requires a bit of programming for paging and sorting.

The GridView control also has a number of new features and advantages over the DataGrid control, which include:

· Richer design-time capabilities.
· Improved data source binding capabilities.
· Automatic handling of sorting, paging, updates, and deletes.
· Additional column types and design-time column operations.
· A customized pager user interface (UI) with the PagerTemplate property.

Differences between the GridView control and the DataGrid control include:
· Different custom-paging support.
· Different event models.

Thursday, December 27, 2007

System.Runtime.Serialization namespace in .NET 3.0

This is a new namespace that has been added in .NET Framework 3.0. It has rich functionality with classes that are used for serializing and deserializing objects. Let us see some of the new classes that are added in this namespace.

1. DataContractAttribute --- Specifies that the type defines or implements a data contract and is serializable by the DataContractSerializer.

2. DataMemberAttribute --- This attribute is applied to a member of a type. It specifies that the member is part of a data contract and is serializable by the DataContractSerializer.

3. EnumMemberAttribute --- Specifies that the field is an enumeration member and should be serialized.

4. DataContractSerializer --- Serializes and deserializes an instance of a type into an XML stream or document using a supplied data contract.

Comparing Basic serialization with Custom serialization

With basic serialization we simply mark the class with the Serializable attribute and leave it to the .NET Framework to take care of serialization and deserialization. The problem with this type of serialization is that we have no control over the serialization algorithm. Another major issue with basic serialization is that we can run into versioning problems, such as a System.Runtime.Serialization.SerializationException caused by a possible version mismatch. But basic serialization is the easiest way to serialize.

On the other hand, custom serialization gives us more control. We can implement the ISerializable interface, which gives us control over the serialization algorithm, and we can also avoid serialization exceptions this way. Apart from this, there are four attributes that, when applied to methods, cause those methods to be called during the serialization process.

OnDeserializedAttribute – When applied to a method, the method is called after deserialization has completed.

OnDeserializingAttribute – When applied to a method, the method is called during deserialization of an object.

OnSerializedAttribute – When applied to a method, the method is called after the serialization process has completed.

OnSerializingAttribute – When applied to a method, the method is called during the serialization process.

We can get maximum control over the serialization and deserialization process using these attributes.

Wednesday, December 26, 2007

tables which are dependent on a given table

Use the query below to get the list of tables that depend on the input table.


Select
S.[name] as 'Dependent_Tables'
From
sys.objects S inner join sys.sysreferences R
on S.object_id = R.rkeyid
Where
S.[type] = 'U' AND
R.fkeyid = OBJECT_ID('Person.StateProvince')

Here, replace Person.StateProvince with your table name.

Returning number of affected rows

By default, SQL Server returns a friendly message to report the total number of rows affected by each query. This is a great setting for debugging your applications or modifying data directly in Query Analyzer. However, your front-end application doesn't need to know the number of affected rows -- it simply needs the data. Sending this message can introduce unnecessary network overhead.

What should you do? Use "SET NOCOUNT ON" in all your stored procedures to reduce network chatter.
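
As a minimal sketch (the procedure, table and column names below are hypothetical), SET NOCOUNT ON simply goes at the top of the procedure body:

-- Hypothetical procedure: SET NOCOUNT ON suppresses the
-- "(n row(s) affected)" message for every statement that follows.
CREATE PROCEDURE dbo.uspGetActiveUsers
AS
BEGIN
    SET NOCOUNT ON;

    SELECT UserID, UserName
    FROM dbo.[User]
    WHERE IsActive = 1;
END
GO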

Performance issues with views

SQL Server views offer several benefits to a T-SQL programmer: they can be used to obfuscate underlying table structure, restrict sensitive data from certain users and so forth. However, views are not the best tool for every job.

Some Oracle programmers who switch to SQL Server tend to use views extensively; older versions of Oracle didn't allow returning result sets from stored procedures, so one would have to create views. This is not the case with SQL Server. But, to be fair to Oracle programmers, they're certainly not the only ones who make this mistake.

Perhaps the most common misconception is that a view with dozens of joins will perform better than a stored procedure containing the same number of joins. This simply isn't true. A view has no performance advantage over the query that has the same number of joins. In fact, views have a major limitation -- they do not accept parameters. So, if you have a view joining 10 tables and one of those tables has a million rows, your view will return at least 1 million rows. Joining such a view to a few other tables within a stored procedure could be disastrous.

What should you do? A better alternative in such cases would be a user-defined function (UDF), which accepts parameters and allows you to limit the number of returned rows. Another option is to join all tables within a single stored procedure and limit the output by using parameters.
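
A minimal sketch of the UDF alternative, assuming hypothetical Orders and Customers tables: the inline table-valued function accepts a parameter, so rows are restricted before the result is joined elsewhere.

-- Hypothetical inline table-valued function (following the tvf prefix convention above).
CREATE FUNCTION dbo.tvfOrdersByCustomer (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, CustomerID
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO

-- Usage: only the matching rows take part in the join.
SELECT c.CustomerName, f.OrderID, f.OrderDate
FROM dbo.Customers c
INNER JOIN dbo.tvfOrdersByCustomer(42) f ON f.CustomerID = c.CustomerID;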

Table variables in SQL Server 2008

Microsoft SQL Server 2008 introduces another important change to table variables. The latest SQL Server edition allows you to pass a table variable as an input parameter to a stored procedure. An array of information can now be passed to a stored procedure without creating a string or XML block and then having to parse it out after executing the stored procedure. There are some restrictions on this, however: the table variable parameter must be declared based on a table data type, and the variable must be read-only. When compared to not having this ability at all, these restrictions are minor inconveniences at best.

In the sample code below, a user-defined table type is created and a stored procedure is then defined using this table type. A local variable is also declared using the table type, data is loaded into the variable, and then the array of data is passed from the local variable to the input parameter, where the records are used within the stored procedure. Here is an example:


create type tt_example AS TABLE
(spid int)
go
create procedure usp_example
@spids tt_example READONLY
AS
SELECT *
FROM @spids
GO
declare @spids tt_example

insert into @spids
select top 10 spid
from sys.sysprocesses

exec usp_example @spids=@spids

Tuesday, December 25, 2007

Changing Stored Procedure and View Names

There are times when you will need to change the name of a view or stored procedure after the object has been created. In the Enterprise Manager:

1. Right-click on the object, to show the available sub-menu options.
2. Select ‘Rename’.
3. Change the name of the object.

A word of caution is needed when performing this action. Renaming a stored procedure or a view does not change the name in the text of its definition (i.e. the CREATE PROCEDURE or CREATE VIEW statement). To change the name in the text, you must modify the object directly.

If you do not make the proper modification, you will encounter a problem when you script the object: the name that appears in the text of the definition becomes the name of the scripted object, so the object name reverts back to the original name.
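
The same rename can also be done from Transact-SQL with sp_rename; here is a small sketch (the procedure names are hypothetical) that also shows how to inspect the definition text afterwards:

-- Rename the procedure (equivalent to the Enterprise Manager steps above).
EXEC sp_rename 'dbo.uspGetCustomerSummary', 'uspGetCustomerOverview';

-- The definition text still contains the old name, which is what causes
-- the scripting problem described above.
EXEC sp_helptext 'dbo.uspGetCustomerOverview';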

Finding Duplicates

In many situations, users need the ability to determine when duplicate data exists in the database. You can use a GROUP BY and COUNT to see how many duplicates exist in each group. However, in most instances, users want to see the primary key identifier of the duplicate rows so that they can display the data or delete the duplicates. You cannot add the primary key to a query that uses a GROUP BY, because this would cause all counts to equal one, since the primary key is by definition unique.

To get a query that shows duplicates along with the primary key, you must use an IN clause in the WHERE clause.

Using Northwind:

SELECT
Employees.LastName, Employees.FirstName, Employees.Address, Employees.EmployeeID, Employees.Title

FROM
Employees

WHERE
(
(
(Employees.LastName) In
(
SELECT [LastName]
FROM [Employees] As Tmp
GROUP BY [LastName],[FirstName],[Address]
HAVING Count(*)>1 And [FirstName] = [Employees].[FirstName] And [Address] = [Employees].[Address]
)
)
)

ORDER BY
Employees.LastName, Employees.FirstName, Employees.Address


This query will return the first name, last name, address, employeeid and title of all rows where the last name, first name, and address are duplicated.

Avoid Using Text and Ntext Data Types

The text data type has several inherent problems, including:

You cannot directly write or update text data using the INSERT or UPDATE statements. Instead, you have to use special statements like READTEXT, WRITETEXT and UPDATETEXT.
There are several known issues associated with replicating tables that contain text columns.
Because of the way SQL Server stores text columns, it can take more time to retrieve text/ntext values.
Text data types have extra overhead that can negatively impact performance.
If you don't have to store more than 8KB of text, consider using char(8000) or varchar(8000) data types instead of text and ntext.
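
A minimal sketch of the suggested alternative (the table and column names are hypothetical):

-- Prefer varchar(8000) over text when 8 KB of storage is enough.
CREATE TABLE dbo.Note
(
    NoteID   int           NOT NULL PRIMARY KEY,
    NoteText varchar(8000) NOT NULL  -- instead of: NoteText text
);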

Declaring and Using Transact-SQL Variables in a Stored Procedure

Most people are familiar with declaring and using Transact-SQL variables in stored procedures. Typically, these variables are used for temporary storage of scalar values such as integers, strings, dates and so forth. However, a variable may also be a table-type variable. You can use a table-type variable where you might otherwise use a temporary table. One reason you might want to use a table variable is to hold onto a small subset of data for use multiple times in the body of a stored procedure.

In this example, a table variable is created to hold a subset of data from the Northwind Orders table for a particular employee:

CREATE PROCEDURE stpTableVariable
(@EmpID int) as

-- create the table variable
declare @EmpOrders table (orderid int not null)

-- populate the table. In this case we take the results of a query on
-- another table, but you could do simple INSERT statements that take
-- literals as well:
insert into @EmpOrders (orderid)
select orderid
from orders where employeeid = @EmpID

-- Now use the results stored in the table variable as part of a where
-- clause. You could also do a join or any other standard SQL action
-- with the table variable
select * from orders
where orderid in (select orderid from @EmpOrders)

Sql Database Table Design

When creating a SQL database you need to ask: what is the information that I want to capture? Do not try to capture everything in one SQL database. Simplify it. It is OK to create several SQL databases with the specific types of information you want to capture for an organization. You can always write a SQL query to gather the information from the separate databases.

Afterwards, you can get down to designing the TABLES. Below are three rules about SQL Table design. I know that rules are made and then broken, but in general follow the 3 Rules below (if you can).

RULE 1: In a Table, do not repeat data.

RULE 2: Repeated data goes in another Linked table.

RULE 3: Row Data must only rely on the Table's Primary Key.
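
As a rough sketch of these rules (the table and column names are hypothetical), repeated address data is moved out of the main table into a linked table, and every non-key column depends only on its own table's primary key:

-- RULES 1 and 2: don't repeat address data in Employee; keep it in a linked table.
CREATE TABLE dbo.Address
(
    AddressID int          NOT NULL PRIMARY KEY,
    Street    varchar(100) NOT NULL,
    City      varchar(50)  NOT NULL
);

-- RULE 3: every non-key column here relies only on EmployeeID.
CREATE TABLE dbo.Employee
(
    EmployeeID int         NOT NULL PRIMARY KEY,
    Name       varchar(50) NOT NULL,
    AddressID  int         NOT NULL REFERENCES dbo.Address (AddressID)
);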

Microsoft Offers Free Online Course on SQL Server 2008

Microsoft Learning is offering a free, three-hour, self-paced, web-based, online training course on the improvements that will be available with SQL Server 2008. The course is text-based, so you will be spending a lot of time reading, but it is worth it if you want to get a head start on SQL Server 2008.

The course is called "Clinic 6188: What's New in Microsoft SQL Server 2008 for Enterprise Data Platform," and is divided into three categories, so you can focus on what most interests you.

What's New in SQL Server 2008 for Enterprise Data Platform
What's New in SQL Server 2008 for Business Intelligence
What's New in SQL Server 2008 for Database Development

https://www.microsoftelearning.com/eLearning/offerDetail.aspx?offerPriceId=139087

You need a Microsoft Live Account to register for this free course.

SQL Server Join Hints

JOIN hints can be used in a query to specify the type of JOIN the Query Optimizer is to use for the execution plan. The JOIN options are:

Loop
Merge
Hash
The syntax for a JOIN hint is (using an INNER JOIN as an example):

FROM table_one INNER [LOOP | MERGE | HASH] JOIN table_two

Here's an example:

FROM header_table INNER LOOP JOIN detail_table

As you can see, the JOIN hint is between the type of JOIN (in this example, INNER) and the JOIN keyword. Only one JOIN hint can be used per JOIN in a query. In addition, JOIN hints can only be used when the ANSI JOIN syntax is used, not the older Microsoft JOIN syntax.

The syntax above is not the only way to add a JOIN hint to a query. You can also use the OPTION clause. Using the OPTION clause specifies that the hint be used throughout the whole query. While multiple hints can be added to the OPTION clause, each query hint can be used only once. In addition, only one OPTION clause can be used per query.

Here's an example of using the OPTION clause:

OPTION (LOOP JOIN) or OPTION (MERGE JOIN) or OPTION (HASH JOIN)

The Query Optimizer always tries to identify the fastest way to JOIN tables. The fastest JOIN method is the Loop JOIN, followed by the Merge and the Hash JOIN. While the Query Optimizer always tries to perform a Loop JOIN if possible, it is not always possible, and one of the other two types may have to be used.

Before you attempt to use a JOIN hint to specify a JOIN type, you should first take a look at your query, and at the indexing of the relevant tables, to see if you can find a way, other than using a hint, to induce the Query Optimizer to use a Loop JOIN. Keep in mind that if you specify a Loop JOIN hint for a JOIN that is currently using a Hash JOIN, you may get worse, not better, performance. So always test JOIN hints to see what their end result in performance really is.
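
A minimal sketch (the header/detail table names are hypothetical) showing the two ways of writing the same hint:

-- Inline form: the hint sits between the join type and the JOIN keyword.
SELECT h.OrderID, d.ProductID
FROM dbo.OrderHeader h
INNER LOOP JOIN dbo.OrderDetail d ON d.OrderID = h.OrderID;

-- OPTION clause form: the hint applies to every join in the query.
SELECT h.OrderID, d.ProductID
FROM dbo.OrderHeader h
INNER JOIN dbo.OrderDetail d ON d.OrderID = h.OrderID
OPTION (LOOP JOIN);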

Occasionally, you may find yourself having to perform an INNER JOIN on tables located in databases on different servers. Generally speaking, SQL Server will evaluate whether the JOIN will perform better on the local or the remote server, and run the JOIN there accordingly. But in some cases, such as when the remote table is not a SQL Server table, this is not possible.

SQL Server provides a special JOIN hint, called REMOTE, that you can use to force a JOIN to occur on the remote server when this is appropriate. The REMOTE hint specifies that the JOIN operation be performed on the site of the right table of the JOIN. You might consider this if the left table of the JOIN is the local table and the right table is the remote table. Of course, doing this only makes sense if the right table has more rows than the left table. REMOTE cannot be used if the tables being joined use different collations.
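
A sketch of the REMOTE hint, assuming a linked server named RemoteSrv and hypothetical table names; the join is forced to run on the site of the right-hand (remote) table:

-- Perform the join on the server that owns the larger, remote table.
SELECT l.CustomerID, r.OrderID
FROM dbo.Customers AS l
INNER REMOTE JOIN RemoteSrv.Sales.dbo.Orders AS r
    ON r.CustomerID = l.CustomerID;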

SQL Server Table Hints

Index hints (a form of a table hint) are used to specify which index or indexes you want used when a query runs. When you specify one or more indexes, the Query Optimizer will use them when creating the execution plan for your query.

The syntax for a table hint is:

SELECT column_list FROM table_name WITH (INDEX (index_name) [, ...])

Where:

column_list is the list of columns you want SELECTed.

table_name is the name of the table with the relevant indexes.

index_name is the index name or index ID for the index or indexes you want used by the Query Optimizer.

Here's an example:

SELECT memberID, fname, lname FROM members WITH (INDEX(memberID_index))

While you can use an index ID instead of the index name, this is not a good idea. This is because index IDs can change if you drop and recreate the indexes differently than when you first created them.

If you want to specify multiple indexes, to force an index intersection or an index covering as a result of an index join, you can do so by adding each index name or ID, one after another, with each one separated by a comma.

If you want to force a table scan, use the hint: (INDEX(0)).

Index hints should only be used when the Query Optimizer, for whatever reason, does not select the available index, and if using the index provides a performance enhancement over not using an index.

The FASTFIRSTROW table hint is functionally equivalent to the OPTION (FAST number_of_rows) hint, and either one can be used interchangeably, although Microsoft recommends that you use OPTION (FAST), as support for FASTFIRSTROW may be removed in future versions of SQL Server.

For queries that return large resultsets, this hint can be handy. What this hint does is to tell the Query Optimizer to retrieve X number of rows of the total resultset as fast as it can, then continue on and return the full resultset as time permits. This acts to get the initial part of your data retrieved very quickly.

On the other hand, while it generally speeds up the retrieval of the first X number of rows, the rest of the rows are often returned more slowly than they would have been if the hint had not been specified.
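
A minimal sketch of the OPTION (FAST n) form recommended above (the table is hypothetical):

-- Ask the optimizer to return the first 10 rows as quickly as possible,
-- then stream the remainder of the result set as time permits.
SELECT OrderID, OrderDate, CustomerID
FROM dbo.Orders
ORDER BY OrderDate DESC
OPTION (FAST 10);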

Thursday, December 13, 2007

.NET Assemblies

.NET assemblies are partially compiled (to intermediate language) units of code that form the basis for deployment, versioning and security in the .NET Framework.

Characteristics include:
1.)Assemblies are self-describing in nature.
2.)They can be executed side by side without versioning problems.
3.)Two or more applications can run independently inside a single process, isolated from each other via application domains.
4.)In the case of private assemblies, installation is as simple as copying the files into the application's directory.

Assemblies are of two types:

Private Assembly: Private assemblies are intended for a particular application and are typically stored in the application's own directory.
Shared Assembly: Shared assemblies are stored in the global assembly cache (GAC) and are shared by many applications. Because they are global, each assembly must have a distinct or unique name.

Friday, December 7, 2007

Microsoft Indexing Service and ASP.NET

Microsoft Indexing Service is a service that provides a means of quickly searching for files on the machine. The most familiar usage of the service is on web servers, where it provides the functionality behind site searches. It is built into Windows 2000 and 2003. It provides a straightforward way to index and search your web site.

Connecting to the Indexing Service
The Indexing Service exposes itself to the developer as an ADO.NET provider, MSIDXS, with the data source equal to the indexing catalog name. For example, the connection string used in searching this site is

Provider="MSIDXS";Data Source="idunno.org";

As with any other ADO.NET provider, you use the ConnectionString property of the System.Data.OleDb.OleDbConnection object.

using System.Data.OleDb;

protected OleDbConnection odbSearch = new OleDbConnection();

// typically inside the search handler:
odbSearch.ConnectionString = "Provider=\"MSIDXS\";Data Source=\"idunno.org\";";
odbSearch.Open();
// execute an OleDbCommand here and process the results
odbSearch.Close();

COM/COM+ in ASP.NET

Microsoft introduced a new concept called managed code. Code that runs within the CLR environment is called managed code; code that runs outside the boundaries of the CLR environment is called unmanaged code. Some examples of unmanaged code are COM and ActiveX components. .NET provides a mechanism which makes it possible to use existing unmanaged components with an ASP.NET application. This mechanism is called interoperability.

Introduction to COM/COM+
COM is a binary standard. It is an acronym for the Component Object Model. COM is a specification, and components built in any language according to this specification can be integrated with any client application written in any language. The client doesn't have to know the location of the components. When working with COM, the objects are compiled into DLLs, i.e. binary executable files. A DLL can contain more than one class.
COM+ is nothing but an extension of COM. COM+ is a combination of operating system services and an object-oriented programming architecture. By extending COM, it adds a few services, like an event registry, queues of asynchronously received requests for a service (transaction management), ensuring security against system requests (authentication), and object pooling.

Invoking COM/COM+ components
In .NET, all code is executed within the Common Language Runtime (CLR) environment. The CLR runs your code and provides the services required for the development process. An ASP.NET application cannot directly interact with COM components.

When an ASP.NET application calls a method defined in a COM component, the CLR creates a wrapper called a Runtime Callable Wrapper (RCW). The RCW is a .NET object and is executed in the CLR. The primary task of the RCW is to marshal calls between the .NET application and the COM component. The CLR creates only one RCW object for a COM component even if it is referenced more than once in an application. The RCW stores the number of references attached to it in an internal cache. The RCW is created on the heap and so is subject to garbage collection. When the reference count of a particular RCW becomes zero, the garbage collector will delete it and the memory allocated to the RCW object will be released.

When an ASP.NET application calls a method defined within the COM component, RCW performs the following tasks:

1.)Creates an instance of the COM object and binds it to the application.
2.)Manages data conversion between managed and unmanaged data types.
3.)Manages the lifetime of the COM component for which it is acting as a wrapper.
4.)Handles exceptions during the data conversion process.

To add a COM component to an ASP.NET application, perform the following steps:

1. Open new project and select Visual C# projects from the Project types pane and ASP.NET Web Service from the templates pane.

2. Select Project->Add References.

3. Click the COM tab in the Add Reference dialog box. A list of COM components is displayed.

4. Select the COM component that you want to use within your ASP.NET application. For example, select the Microsoft ActiveX Data Objects 2.7 Library option from the list and double-click the Select button. The component is added to the Selected Components pane.

5. Click the OK button. The component that you have added to the project is displayed in the Reference node in Solution Explorer.

6. You can use the COM component in the application by specifying its library class, as follows:
ADODB.RecordsetClass rs=new ADODB.RecordsetClass();

This COM object rs can then be used in the application to call its methods.

Thursday, December 6, 2007

ASP.NET state management

ASP.NET applications are hosted in a web server and are accessed over the stateless HTTP protocol. As such, if an application uses stateful interaction, it has to implement state management on its own. ASP.NET provides various facilities for state management in ASP.NET applications, including Application state, Session state and View state variables.

Application state
Application state is a collection of user-defined variables that are shared by all invocations of an ASP.NET application. These are set and initialized when the Application_OnStart event fires on the loading of the first instance of the application and are available until the last instance exits. Application state variables are accessed using the Application collection, which provides a wrapper for the application state variables. Application state variables are identified by their names.

Session state
Session state is a collection of user defined session variables, which are persisted during a user session. These variables are unique to different instances of a user session, and are accessed using the Session collection. Session variables can be set to be automatically destroyed after a defined time of inactivity, even if the session does not end. At the client end, a user session is identified either by a cookie or by encoding the session ID in the URL itself.

ASP.NET supports three modes of persistence for session variables:

1.)InProc
The session variables are maintained within the ASP.NET process. However, in this mode the variables are destroyed when the ASP.NET process is recycled or shut down.

2.)StateServer
In this mode, ASP.NET runs a separate service that maintains the state variables. Because the state management happens outside the ASP.NET process, this has a negative impact on performance, but it allows multiple ASP.NET instances to share the same state server, thus allowing an ASP.NET application to be load-balanced and scaled out on multiple servers. Also, since the state management service runs independent of ASP.NET, variables can persist across ASP.NET process shutdowns.

3.)SqlServer
In this mode, the state variables are stored in a database server, accessible using SQL. Session variables can be persisted across ASP.NET process shutdowns in this mode as well.

View state
View state refers to the page-level state management mechanism, which is used by the HTML pages emitted by ASP.NET applications to maintain the state of the web form controls and widgets. The state of the controls is encoded and sent to the server at every form submission in a hidden field known as __VIEWSTATE. The server sends the variable back so that, when the page is re-rendered, the controls render in their last state. On the server side, the application might change the view state if the processing results in updating the state of any control. The states of individual controls are decoded at the server and are available for use in ASP.NET pages using the ViewState collection.

Directory structure in ASP.NET

In general, the ASP.NET directory structure can be determined by the developer's preferences. Apart from a few reserved directory names, the site can span any number of directories. The structure is typically reflected directly in the URLs. Although ASP.NET provides means for intercepting the request at any point during processing, the developer is not forced to funnel requests through a central application or front controller.

The special directory names are:

App_Browsers
holds site-specific browser definition files.

App_Code
This is the "raw code" directory. The ASP.NET server will automatically compile files (and subdirectories) in this folder into an assembly which is accessible in the code of every page of the site. App_Code will typically be used for data access abstraction code, model code and business code. Also any site-specific http handlers and modules and web service implementation go in this directory. As an alternative to using App_Code the developer may opt to provide a separate assembly with precompiled code.

App_Data
default directory for databases, such as Access mdb files and SQL Server mdf files. This directory is usually the only one with write access for the application.

App_LocalResources
Contains localized resource files for individual pages of the site. E.g. a file called CheckOut.aspx.fr-FR.resx holds localized resources for the French version of the CheckOut.aspx page. When the UI culture is set to French, ASP.NET will automatically find and use this file for localization.

App_GlobalResources
Holds resx files with localized resources available to every page of the site. This is where the ASP.NET developer will typically store localized messages etc. which are used on more than one page.

App_Themes
holds alternative themes of the site.

App_WebReferences
holds discovery files and WSDL files for references to web services to be consumed in the site.

Memory in .NET - what goes where

The memory slot for a variable is stored on either the stack or the heap. It depends on the context in which it is declared:

1.)Each local variable (i.e. one declared in a method) is stored on the stack. That includes reference type variables - the variable itself is on the stack, but remember that the value of a reference type variable is only a reference (or null), not the object itself. Method parameters count as local variables too, but if they are declared with the ref modifier, they don't get their own slot; they share a slot with the variable used in the calling code. There's one exception to this simple rule - captured variables (used in anonymous methods) are local in terms of the C# code, but end up being compiled into instance variables in a type associated with the delegate created by the anonymous method.
2.)Instance variables for a reference type are always on the heap. That's where the object itself "lives".
3.)Instance variables for a value type are stored in the same context as the variable that declares the value type. The memory slot for the instance effectively contains the slots for each field within the instance. That means (given the previous two points) that a struct variable declared within a method will always be on the stack, whereas a struct variable which is an instance field of a class will be on the heap.
4.)Every static variable is stored on the heap, regardless of whether it's declared within a reference type or a value type. There is only one slot in total no matter how many instances are created.

DotSVN

DotSVN is a complete .NET implementation of Subversion. DotSVN accesses a Subversion repository at the repository level itself (instead of at the client side/working copy). It implements the Subversion repository access layer completely in .NET without using any of the Subversion libraries. It exposes APIs which help .NET developers connect to an SVN repository and manipulate it. DotSVN is developed as an open source project hosted on Google Code. Recently, the DotSVN team released the first iteration of the DotSVN library, which supports reading the contents of an FSFS repository hosted in Subversion. It can read the directory structure, the properties of a revision, and the contents of individual files from the repository. This release of DotSVN also includes a sample application which demonstrates the usage of the DotSVN APIs. The sample application is a WinForms-based repository browser very similar to the repository browser of TortoiseSVN. It displays the contents of an FSFS repository in a hierarchical fashion, showing the Windows file type icon beside each directory or file.

DotSVN APIs are very simple to use. Accessing an SVN repository using DotSVN involves 3 steps:

1.) Open the repository.
2.) Do the relevant operations (for eg: get the directory contents of the repository).
3.) Close the repository.

For further information, please check
http://www.dotsvn.net/articles/accessing-subversion-repository-from-net-using-dotsvn.html

Tuesday, December 4, 2007

ASP.NET Page Life Cycle

A page in an ASP.NET application consists of several server controls. These are the fundamental building blocks of an ASP.NET application.

The Life Cycle of a page when requested for the first time:

1.)Initializing: During this phase, the server creates an instance of the server control.
2.)Loading: During this phase, the instance of the control is loaded onto the page object in which it is defined.
3.)PreRendering: During this phase, the control is updated with the changes made to it. This prepares the control for rendering.
4.)Saving: During this phase, the state information of the control is saved. For example, if a value is set for the control during the Load event, it is embedded in the HTML tag that will be returned to the browser.
5.)Rendering: During this phase, the server creates the corresponding HTML tag for the control.
6.)Disposing: During this phase, all cleanup tasks, such as closing files and database connections opened by the control, are performed.
7.)Unloading: During this phase, all cleanup tasks, such as destroying the instances of the server control, are performed. This is the final event in the life cycle of a server control.

Windows Authentication in .NET

Securing a web application consists of two steps:
1.)Authenticating the user accessing the page
2.)Authorizing the user to access the page

Authentication is the process of determining whether a user is who he claims to be. Typically this is determined with the help of a user ID and password.
Authorization is the process in which you decide whether an authenticated user is allowed to access a certain page or resource. For example, operators may not be allowed to view certain confidential financial information that managers can view.

ASP.NET offers various ways to authenticate and authorize users of your web site. They are:

Windows authentication
Forms authentication (cookie authentication)
Passport authentication

Windows Authentication
The Windows authentication scheme uses the traditional mechanisms of Basic, NTLM/Kerberos and Digest authentication. Here, IIS uses the credentials of the logged-in user to authenticate web requests. If integrated Windows authentication is turned off, a typical gray dialog pops up asking for a user ID and password.

Steps involved in implementing Windows authentication and authorization
1.)Create an ASP.NET web application
2.)Modify web.config to set the authentication mode to Windows
3.)Modify web.config to deny access to anonymous users
4.)Authorize users based on their NT user groups (roles)

Monitoring File System Events in .NET

Introduction
In some applications, like file system explorers and file managers, it becomes necessary to keep an eye on file system changes. In traditional tools like VB6 this task usually required a third-party component or Windows APIs.
In .NET, the FileSystemWatcher class from the System.IO namespace provides an easy way to do the same thing. A Windows file system has many events associated with files and folders, including creation, renaming, deletion, change etc. In this article we will see how to use the FileSystemWatcher class.

Steps involved in file system monitoring
The following steps are involved in a typical application that uses the FileSystemWatcher class:

1.) Import the required namespaces
The FileSystemWatcher class resides in the System.IO namespace, so you need to import this namespace. In addition to System.IO you will also need some threading support, since you will be continuously monitoring for changes. So, you will also need the System.Threading namespace.
Imports System.IO
Imports System.Threading

2.) Create an object of the FileSystemWatcher class and set its properties
Dim fw As New FileSystemWatcher()
fw.Path = "c:\bipin"
fw.Filter = "*.*"
fw.IncludeSubdirectories = True

The Path property sets the location that is to be monitored.
The Filter property sets the type of files to be monitored.
The IncludeSubdirectories property decides whether changes in sub-folders are to be considered for monitoring or not.

3.) Attach event handlers for various events
Next, we will attach event handlers for various events as shown below:
AddHandler fw.Renamed, AddressOf MyRenamedHandler
AddHandler fw.Created, AddressOf MyCreatedHandler
AddHandler fw.Changed, AddressOf MyChangedHandler
AddHandler fw.Deleted, AddressOf MyDeletedHandler
Here, we have added event handlers to Renamed, Created, Changed and Deleted events using AddHandler statement.

4.)Watch for any changes
Now that we have configured our FileSystemWatcher object it is time to start monitoring the file system. For our example I have put monitoring on a separate thread.
Dim t As New Thread(AddressOf WatchChange)
t.Start()   ' start monitoring on the new thread

Public Sub WatchChange()
    While True
        fw.WaitForChanged(WatcherChangeTypes.All)
    End While
End Sub
The WaitForChanged method is invoked inside an endless While loop. The argument to the method tells it that we are interested in all types of changes.

5.)Write event handlers
The final thing is to write event handlers for the various events mentioned above.
Public Sub MyRenamedHandler(ByVal s As Object, ByVal e As RenamedEventArgs)
    Dim strMsg As String
    strMsg = "Old Path :" & e.OldFullPath() & vbCrLf
    strMsg += "New Path :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Public Sub MyCreatedHandler(ByVal s As Object, ByVal e As FileSystemEventArgs)
    Dim strMsg As String
    strMsg = "File/Folder Created :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Public Sub MyChangedHandler(ByVal s As Object, ByVal e As FileSystemEventArgs)
    Dim strMsg As String
    strMsg = "File/Folder Changed :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Public Sub MyDeletedHandler(ByVal s As Object, ByVal e As FileSystemEventArgs)
    Dim strMsg As String
    strMsg = "File/Folder Deleted :" & e.FullPath
    MessageBox.Show(strMsg)
End Sub

Monday, December 3, 2007

Custom Error Handling in ASP.NET

There are three error modes in which an ASP.Net application can work:

1.) Off Mode
2.) On Mode
3.) RemoteOnly Mode

The Error mode attribute determines whether or not an ASP.Net error message is displayed. By default, the mode value is set to "RemoteOnly".

Off Mode
When the error attribute is set to "Off", ASP.Net uses its default error page for both local and remote users in case of an error.

On Mode
In case of "On" Mode, ASP.Net uses user-defined custom error page instead of its default error page for both local and remote users. If a custom error page is not specified, ASP.Net shows the error page describing how to enable remote viewing of errors.

RemoteOnly
ASP.Net error page is shown only to local users. Remote requests will first check the configuration settings for the custom error page or finally show an IIS error.

Customization of the error page can be implemented by adding a value for the "defaultRedirect" attribute in the <customErrors> tag of the configuration file "web.config". This file determines configuration settings for the underlying application.

Notification of Error to the Administrator

In a practical web application, customization of error pages is not the only requirement. The error, if encountered, should be reported to the administrator so that it can be rectified, thus enabling subsequent requests to work properly without any error.
Notification of the error can be sent to the administrator in one of the following two ways:

1.) Error can be registered as a log entry in the Windows Event Log on the administrator's machine.
2.) An Email can be sent to the administrator with a suitable error message.

Writing to the Event Log

In ASP.Net, errors can be handled programmatically by writing appropriate code in the page-level error event (for errors on an individual page) or in the application-level error event (for handling errors that may occur in any page of the application).

Therefore, code for writing to the Event Log should be written in either of these events, depending on the requirement of the application. To illustrate this example, I have written the code in the application-level event with the error mode set to "RemoteOnly" and the "defaultRedirect" attribute set to "error.htm". The application-level error event should be included in the global file "global.asax" within the same application folder.

Sending Email To the Administrator

Imports System.Web
Imports System.Web.SessionState
Imports System.Web.Mail

Public Class Global
    Inherits System.Web.HttpApplication

    Sub Application_Error(ByVal sender As Object, ByVal e As EventArgs)
        ' Build the error message from the last server error
        Dim ErrorMessage As String = "The error description is as follows : " & Server.GetLastError.ToString()

        ' Mail the error report to the administrator
        Dim mail As New MailMessage
        mail.To = "administrator@domain.com"
        mail.Subject = "Error in the Site"
        mail.Priority = MailPriority.High
        mail.BodyFormat = MailFormat.Text
        mail.Body = ErrorMessage
        SmtpMail.Send(mail)
    End Sub
End Class

How to give strong name to DLL and how to install DLL in GAC

Steps for Giving strong name to DLL

1.)Open the .NET command prompt.

2.)Go to the folder containing the DLL.

3.)Type sn -k test.snk (you can use any file name instead of test). This will create the test.snk file in that folder.

4.)Open the AssemblyInfo.cs file of the project.

5.)Add the key file path in the AssemblyKeyFile attribute: [assembly: AssemblyKeyFile(@"E:\hemant\practice\HP\bin\Debug\HP.snk")]

6.)Build the application; your DLL now has a strong name.

Steps for Installing in GAC

1.)After giving the DLL a strong name, run gacutil /i <path to your DLL> from the .NET command prompt; this installs the assembly into the GAC.

2.)Copy the DLL file to the C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322 folder.

3.)Add a reference to it from another project using the .NET tab.