Saturday, June 28, 2008

Add Column

Adding column
You can add new columns to an existing table:

ALTER TABLE [dbo].[phone]
ADD inactive_date DATETIME NULL
GO

Alter column
You can modify an existing column:

ALTER TABLE [dbo].[person]
ALTER COLUMN [lastname] VARCHAR(35) NULL
GO

Considerations for altering a column
1. Reducing precision (for example, going from CHAR(20) to CHAR(15)) can cause data truncation and should be avoided unless you are absolutely sure there will be no impact to the data.
2. Changing data types should typically be avoided. There are exceptions to this. For example, changing a CHAR(20) to a VARCHAR(20) on columns where the average storage length is 10 can save disk space.

Alter columns - No Can Do
You cannot directly alter a column that is part of the primary key.
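
The usual workaround is to drop the primary key constraint, alter the column, and recreate the constraint. A sketch only; the table, column, and constraint names below are assumptions:

```sql
-- Hypothetical names: table [dbo].[person], key PK_person on [person_id]
ALTER TABLE [dbo].[person] DROP CONSTRAINT [PK_person]
GO
ALTER TABLE [dbo].[person] ALTER COLUMN [person_id] BIGINT NOT NULL
GO
ALTER TABLE [dbo].[person] ADD CONSTRAINT [PK_person] PRIMARY KEY ([person_id])
GO
```

Remember that foreign keys referencing the primary key must be dropped and recreated as well.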

Thursday, June 26, 2008

MERGE

The MERGE statement allows developers to use one command to perform deterministic inserts, updates, and deletes on a table based on a source table.

When synchronizing some information across two tables, up to three operations have to be performed. First, any new rows need to be inserted into the target table. Then existing rows have to be updated. Finally, old rows no longer in use may need to be deleted. This can lead to a lot of repetitive logic which needs to be maintained, and in turn can lead to subtle bugs.

MERGE [target] t
USING [source] s ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET name = s.name, age = s.age
WHEN NOT MATCHED THEN
    INSERT (id, name, age) VALUES (s.id, s.name, s.age)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

Trigger Uses

Triggers are useful in these ways:

Triggers can cascade changes through related tables in the database; however, these changes can be executed more efficiently using cascading referential integrity constraints.


Triggers can enforce restrictions that are more complex than those defined with CHECK constraints.
Unlike CHECK constraints, triggers can reference columns in other tables. For example, a trigger can use a SELECT from another table to compare to the inserted or updated data and to perform additional actions, such as modify the data or display a user-defined error message.

Triggers can also evaluate the state of a table before and after a data modification and take action(s) based on that difference.


Multiple triggers of the same type (INSERT, UPDATE, or DELETE) on a table allow multiple, different actions to take place in response to the same modification statement.
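
As a sketch of the cross-table check described above (the Orders/Customers tables and column names are assumptions, not from the original post):

```sql
-- Reject inserts/updates whose customer does not exist in Customers.
CREATE TRIGGER trg_order_check ON [dbo].[Orders]
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted i
               WHERE NOT EXISTS (SELECT 1 FROM [dbo].[Customers] c
                                 WHERE c.CustomerID = i.CustomerID))
    BEGIN
        RAISERROR ('Unknown customer.', 16, 1)
        ROLLBACK TRANSACTION
    END
END
GO
```

This is the kind of restriction a CHECK constraint cannot express, because it references a second table.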

Override ToString method

This C# code snippet overrides the ToString method inherited from System.Object in order to display the private variables.

public class MyClass
{
    private string customer = "";
    private int customerID = 0;

    public string Customer
    {
        get { return customer; }
        set { customer = value; }
    }

    public int CustomerID
    {
        get { return customerID; }
        set { customerID = value; }
    }

    public override string ToString()
    {
        return string.Format ("Customer = {0} ID = {1}", Customer, CustomerID);
    }
}

Wednesday, June 25, 2008

Guidelines for Partitioned Tables

You must plan to create the following database objects before partitioning a table or index:

Partition function
Partition scheme


A partition function defines how the rows of a table or index are mapped to a set of partitions based on the values of certain columns, called partitioning columns.

A partition scheme maps each partition specified by the partition function to a filegroup.

Planning the Partition Function
There are two factors to consider when planning a partition function: the column whose values determine how a table is partitioned, known as the partitioning column, and the range of values of the partitioning column for each partition. This range of values determines the number of partitions that make up your table. A table can have a maximum of 1,000 partitions.

The choices you have for the partitioning column and the values range are determined primarily by the extent to which your data can be grouped in a logical way, such as by date, and whether this logical grouping is adequate for managing subsets of data.

For example, under the partitioning scenario of the AdventureWorks sample database, the TransactionHistory and TransactionHistoryArchive tables are partitioned on the TransactionDate field. The range of values for each partition is one month. The TransactionHistory table maintains the year's most current transactions, while TransactionHistoryArchive maintains older transactions. By partitioning the tables in this way, a single month's worth of old data can be transferred quickly and efficiently from TransactionHistory to TransactionHistoryArchive on a monthly basis.
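
A minimal sketch of the two objects, assuming monthly ranges on a DATETIME column; the boundary dates and filegroup names are placeholders:

```sql
-- RANGE RIGHT: each boundary value belongs to the partition on its right.
CREATE PARTITION FUNCTION pf_MonthlyRange (DATETIME)
AS RANGE RIGHT FOR VALUES ('2008-02-01', '2008-03-01', '2008-04-01')
GO

-- Three boundaries create four partitions, so four filegroups are mapped.
CREATE PARTITION SCHEME ps_MonthlyScheme
AS PARTITION pf_MonthlyRange
TO ([FG1], [FG2], [FG3], [FG4])
GO

-- The table is then created on the scheme, keyed by the partitioning column.
CREATE TABLE TransactionHistory
(
    TransactionID INT NOT NULL,
    TransactionDate DATETIME NOT NULL
) ON ps_MonthlyScheme (TransactionDate)
GO
```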


For more information, see:
http://msdn.microsoft.com/en-us/library/ms180767.aspx

Logon Triggers

Logon triggers fire stored procedures in response to a LOGON event. This event is raised when a user session is established with an instance of SQL Server. Logon triggers fire after the authentication phase of logging in finishes, but before the user session is actually established. Therefore, all messages originating inside the trigger that would typically reach the user, such as error messages and messages from the PRINT statement, are diverted to the SQL Server error log. Logon triggers do not fire if authentication fails.

You can use logon triggers to audit and control server sessions, such as by tracking login activity, restricting logins to SQL Server, or limiting the number of sessions for a specific login. For example, a logon trigger can deny login attempts to SQL Server initiated by the login login_test if that login has already created three user sessions.
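
A sketch of such a trigger, based on the MSDN sample behind the link below (the login name is a placeholder; the full sample also creates the login and grants VIEW SERVER STATE):

```sql
CREATE TRIGGER connection_limit_trigger
ON ALL SERVER WITH EXECUTE AS 'login_test'
FOR LOGON
AS
BEGIN
    IF ORIGINAL_LOGIN() = 'login_test' AND
       (SELECT COUNT(*) FROM sys.dm_exec_sessions
        WHERE is_user_process = 1 AND
              original_login_name = 'login_test') > 3
        ROLLBACK;  -- deny the login; the session is never established
END;
GO
```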

For more information, see:
http://msdn.microsoft.com/en-us/library/bb326598.aspx

SQL Injection Attack

Microsoft and Hewlett-Packard have unveiled free tools to help web developers and site administrators defend against the rapidly growing number of SQL injection attacks that aim to hijack legitimate sites.

For more information, see:
http://www.computerworlduk.com/technology/security-products/prevention/news/index.cfm?newsid=9749

Tuesday, June 24, 2008

Sort Array

This C# code snippet sorts the elements of an array into ascending order.


using System;

public class ArraySort
{
    public static void Main()
    {
        string[] strings = { "beta", "alpha", "gamma" };
        Console.WriteLine ("Array elements: ");
        DisplayArray (strings);
        Array.Sort (strings); // Sort elements into ascending order
        DisplayArray (strings);
    }

    public static void DisplayArray (Array array)
    {
        foreach (object o in array)
        {
            Console.Write ("{0} ", o);
        }
        Console.WriteLine();
    }
}

Thursday, June 19, 2008

Side-by-side execution

Side-by-side execution enables multiple versions of the .NET Framework to exist on the same machine at the same time, along with multiple versions of applications, each using a particular version of the .NET Framework.

The side-by-side implementation requires the use of a startup shim. The shim is a thin piece of code that accepts a version number and other startup parameters from the host in order to start the CLR.

The startup shim is implemented in mscoree.dll. Only one version of the shim exists on a machine, and it lives in the System32 folder.

Retrieve Current Screen Resolution

Gets the dimensions, in pixels, of the current video mode of the primary display. SystemInformation is in the System.Windows.Forms namespace.

MessageBox.Show( "Monitor Size:" + SystemInformation.PrimaryMonitorSize );

Cross Join Power

Cross Joins produce results that consist of every combination of rows from two or more tables. That means if table A has 3 rows and table B has 2 rows, a CROSS JOIN will result in 6 rows. There is no relationship established between the two tables – you literally just produce every possible combination.

The danger here, of course, is that if you have table A with 10,000 rows and Table B with 30,000 rows, and you accidentally create the product of these two tables, you will end up with a 300,000,000 row result -- probably not a good idea. (Though it is great for creating test data and the like.)

So, how can this ever be useful? Actually, if you do lots of report writing in SQL, a CROSS JOIN can be your best friend.

Suppose you need to write a report that returns total sales for each Store and each Product. You might come up with this:

SELECT Store, Product, SUM(Sales) as TotalSales
FROM Sales
GROUP BY Store, Product
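
The GROUP BY above only returns Store/Product pairs that actually have sales. A CROSS JOIN of the stores and products, outer-joined back to Sales, also reports the zero-sales combinations. A sketch, assuming hypothetical Stores and Products lookup tables:

```sql
SELECT st.Store, p.Product, COALESCE(SUM(s.Sales), 0) AS TotalSales
FROM Stores st
CROSS JOIN Products p                 -- every store/product combination
LEFT JOIN Sales s
    ON s.Store = st.Store AND s.Product = p.Product
GROUP BY st.Store, p.Product
```

Combinations with no sales now appear in the report with a TotalSales of 0 instead of vanishing.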

Thursday, June 12, 2008

NLog

NLog is a .NET logging library designed with simplicity and flexibility in mind. With NLog you can process diagnostic messages emitted from any .NET language, augment them with contextual information, format them according to your preference and send them to one or more targets.

The API (application programming interface) is similar to log4net, and the configuration is very simple. NLog uses a routing table while log4net uses a logger hierarchy with attachable appenders. This makes NLog's configuration very easy to read and maintain.

NLog is licensed under the terms of BSD license, which permits commercial use and the source code is available to anyone at no cost. Everyone is encouraged to test it and report feedback to the mailing list.

NLog supports .NET, C/C++ and COM interop API so that all your application components including legacy modules written in C++/COM can send their messages through a common log routing engine.

The .NET API is very fast at filtering messages, so that you can keep your logging instructions in code and let NLog filter them out at runtime. NLog can filter out as many as 150 million logging instructions per second on a single-CPU 1.6 GHz laptop. Add that to asynchronous processing and other wrappers and you'll get a very powerful and scalable logging tool.

For more information, see:
http://www.nlog-project.org/

Call with variable arguments

Sometimes we may require a variable number of arguments to be passed to a function. For example we may require a sum function which calculates the total of the numbers passed to it no matter how many numbers are passed.

In C# we can use the params keyword to pass a variable number of arguments to a function, much like ParamArray in Visual Basic. The syntax of a params argument is:
params datatype[] argumentName

static void ListArguments (params object[] arguments)
{
    foreach (object argument in arguments)
    {
        Console.WriteLine (argument);
    }
}

public static void Main( )
{
    ListArguments ("Arguments: ", DateTime.Now, 3.14f);
}

Saturday, June 7, 2008

IS vs AS

Use the as operator when the normal program flow is to test for the interface and—immediately—cast the object for access to the interface members.

Use the is operator when the normal program flow is to test for the interface; but, interface member access is either delayed or does not occur at all.
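
A minimal sketch contrasting the two (the IDrawable interface and Circle class are hypothetical):

```csharp
using System;
using System.Collections;

interface IDrawable { void Draw(); }

class Circle : IDrawable
{
    public void Draw() { Console.WriteLine ("circle"); }
}

class Demo
{
    static void Main()
    {
        object shape = new Circle();

        // "as": test and cast in one step, then use the member immediately.
        IDrawable d = shape as IDrawable;
        if (d != null)
        {
            d.Draw();
        }

        // "is": test only; member access is deferred or never happens.
        ArrayList drawables = new ArrayList();
        if (shape is IDrawable)
        {
            drawables.Add (shape);  // the cast happens later, if at all
        }
    }
}
```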

Calculate Fibonacci number

using System;

static int Fibonacci (int x)
{
    Console.WriteLine ("x = {0}", x);
    if (x <= 1)
    {
        return 1;
    }
    return Fibonacci (x-1) + Fibonacci (x-2);
}

static void Main( )
{
    Console.WriteLine ("Fibonacci no. = {0}", Fibonacci (5));
    Console.ReadKey();
}

Thursday, June 5, 2008

Native Image Generator Tool

The Native Image Generator (Ngen.exe) is a .NET programming tool for compiling an assembly to native machine code and installing it in the local assembly cache. Rather than the MSIL assembly, the native image is used for assembly access during execution. The Common Language Runtime (CLR) reverts to using the MSIL assembly if the native image is removed. Native images load and execute faster than MSIL assemblies, which must be Just-In-Time (JIT) compiled by the CLR. Known as pre-JITing, using this tool to create a native image file makes JIT-compiling unnecessary.
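
As a sketch, typical invocations look like this, run from a Visual Studio or SDK command prompt (MyAssembly.dll is a placeholder name):

```shell
ngen install MyAssembly.dll
ngen uninstall MyAssembly.dll
ngen display
```

install compiles the assembly and caches the native image, uninstall removes it (the CLR falls back to JIT), and display lists the native images currently in the cache.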

Set a Database to Single-user Mode

Single-user mode specifies that only one user at a time can access the database and is generally used for maintenance actions.

The database remains in single-user mode even if the user that set the option logs off. At that point, a different user, but only one, can connect to the database.

To set a database to single-user mode
--> In Object Explorer, connect to an instance of the SQL Server 2005 Database Engine, and then expand that instance.

--> Right-click the database to change, and then click Properties.

--> In the Database Properties dialog box, click the Options page.

--> From the Restrict Access option, select Single.

--> If other users are connected to the database, an Open Connections message will appear. To change the property and close all other connections, click Yes.
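
The same change can be made in T-SQL; the database name below is a placeholder:

```sql
-- Kick out other connections and take the database single-user.
ALTER DATABASE [MyDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO

-- When maintenance is done, return it to normal access.
ALTER DATABASE [MyDatabase] SET MULTI_USER
GO
```

ROLLBACK IMMEDIATE rolls back any open transactions on other sessions instead of waiting for them to finish.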

Wednesday, June 4, 2008

Join algorithms

Three fundamental algorithms exist for performing a join operation.

Nested loops
Use of nested loops produces the simplest join-algorithm. For each tuple in the outer join relation, the system scans the entire inner-join relation and appends any tuples that match the join-condition to the result set. Naturally, this algorithm performs poorly with large join-relations: inner or outer or both. An index on columns in the inner relation in the join-predicate can enhance performance.

The "block nested loops" (BNL) approach offers a refinement to this technique: for every block in the outer relation, the system scans the entire inner relation. For each match between the current inner tuple and one of the tuples in the current block of the outer relation, the system adds a tuple to the join result-set. This variant means doing more computation for each tuple of the inner relation, but far fewer scans of the inner relation.
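
The naive nested-loops algorithm can be sketched in a few lines; the two relations below are hypothetical (id, name) and (id, age) tables held in arrays:

```csharp
using System;

class NestedLoopJoin
{
    static void Main()
    {
        int[] outerIds = { 1, 2 };
        string[] outerNames = { "ann", "bob" };
        int[] innerIds = { 2, 3 };
        int[] innerAges = { 30, 41 };

        // For each tuple of the outer relation, scan the whole inner relation.
        for (int o = 0; o < outerIds.Length; o++)
            for (int i = 0; i < innerIds.Length; i++)
                if (outerIds[o] == innerIds[i])   // the join predicate
                    Console.WriteLine ("{0} {1} {2}",
                        outerIds[o], outerNames[o], innerAges[i]);
    }
}
```

The cost is one full inner scan per outer tuple, which is exactly why an index on the inner relation's join column helps so much.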


Merge join
If both join relations come in order, sorted by the join attribute(s), the system can perform the join trivially, thus:

Consider the current "group" of tuples from the inner relation; a group consists of a set of contiguous tuples in the inner relation with the same value in the join attribute.
For each matching tuple in the current inner group, add a tuple to the join result. Once the inner group has been exhausted, advance both the inner and outer scans to the next group.
Merge joins offer one reason why many optimizers keep track of the sort order produced by query plan operators—if one or both input relations to a merge join arrives already sorted on the join attribute, the system need not perform an additional sort. Otherwise, the DBMS will need to perform the sort, usually using an external sort to avoid consuming too much memory.

Hash join

A hash join algorithm can produce equi-joins. The database system prepares access to the tables concerned by building hash tables on the join attributes. Lookup in a hash table operates much faster than through index trees. However, hashed values can be compared only for equality, not for other relationships.

SQL Optimization Tips Part 2

• Try to avoid using SQL Server cursors whenever possible.
SQL Server cursors can result in some performance degradation compared
with set-based SELECT statements. Use a correlated subquery or
derived tables if you need to perform row-by-row operations.

• Try to avoid the HAVING clause whenever possible.
The HAVING clause is used to restrict the result set returned by the
GROUP BY clause. When you use GROUP BY with the HAVING clause, the
GROUP BY clause divides the rows into sets of grouped rows and
aggregates their values, and then the HAVING clause eliminates
undesired aggregated groups. In many cases, you can write your SELECT
statement so that it contains only WHERE and GROUP BY clauses, without
a HAVING clause. This can improve the performance of your query.

• If you need to return a table's total row count, you can use an
alternative to the SELECT COUNT(*) statement.
Because SELECT COUNT(*) makes a full table scan to return the
row count, it can take a long time on a large table. Another way to
determine the total row count is the sysindexes system table: its rows
column contains the total row count for each table in your database.
So you can use the following statement instead of SELECT COUNT(*):

SELECT rows FROM sysindexes
WHERE id = OBJECT_ID('table_name') AND indid < 2

This can make such queries several times faster.

• Include a SET NOCOUNT ON statement in your stored procedures to stop
the message indicating the number of rows affected by a T-SQL statement.
This can reduce network traffic, because your client will not receive
that message.

• Try to restrict the query's result set by using the WHERE clause.
This can yield good performance benefits, because SQL Server returns
only the required rows to the client, not all rows from the table(s).
This reduces network traffic and boosts the overall performance of
the query.

• Use SELECT statements with the TOP keyword or the SET ROWCOUNT
statement if you need to return only the first n rows.
This can improve the performance of your queries, because a smaller
result set is returned. This also reduces traffic between
the server and the clients.

• Try to restrict the query's result set by returning only the
particular columns you need, not all of the table's columns.
This can yield good performance benefits, because SQL Server returns
only the required columns to the client. This reduces network traffic
and boosts the overall performance of the query.

Other areas to review:
1. Indexes
2. Avoid a large number of triggers on a table
3. Unnecessarily complicated joins
4. Correct use of the GROUP BY clause with the select list
5. In the worst cases, denormalization

SQL Optimization Tips Part 1

• Use views and stored procedures instead of heavy-duty queries.
This can reduce network traffic, because your client sends the
server only a stored procedure or view name (perhaps with some
parameters) instead of large query text. This also helps with
permission management, because you can restrict user access to table
columns they should not see.

• Try to use constraints instead of triggers whenever possible.
Constraints are much more efficient than triggers and can boost
performance.

• Use table variables instead of temporary tables.
Table variables require less locking and logging resources than
temporary tables, so they should be used whenever possible.
(Table variables were introduced in SQL Server 2000.)

• Try to use the UNION ALL statement instead of UNION whenever possible.
UNION ALL is much faster than UNION, because it does not look for
duplicate rows, while UNION removes duplicates whether or not any
exist.

• Try to avoid using the DISTINCT clause, whenever possible.
Because using the DISTINCT clause will result in some performance
degradation, you should use this clause only when it is necessary.

Index Optimization tips

• Every index increases the time it takes to perform INSERTs, UPDATEs
and DELETEs, so keep the number of indexes small. Try to use a
maximum of 4-5 indexes per table. If you have a read-only table,
the number of indexes may be increased.

• Keep your indexes as narrow as possible. This reduces the size of
the index and reduces the number of reads required to read the index.

• Try to create indexes on columns that have integer values rather
than character values.

• If you create a composite (multi-column) index, the order of the
columns in the key is very important. Order the columns in the
key to maximize selectivity, with the most selective columns
leftmost in the key.

• If you want to join several tables, try to create surrogate integer
keys for this purpose and create indexes on their columns.

• Create surrogate integer primary key (identity for example) if your
table will not have many insert operations.

• Clustered indexes are preferable to nonclustered if you need
to select by a range of values or you need to sort the result set with
GROUP BY or ORDER BY.

• If your application will be performing the same query over and over
on the same table, consider creating a covering index on the table.

• You can use the SQL Server Profiler Create Trace Wizard with
"Identify Scans of Large Tables" trace to determine which tables in
your database may need indexes. This trace will show which tables are
being scanned by queries instead of using an index.

• You can use the undocumented sp_MSforeachtable stored procedure to
rebuild all indexes in your database. Try to schedule it to execute
during CPU idle time and slow production periods.
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?')"

concatenate strings with COALESCE

A frequently asked question: how do you take a column of values like this:

Color
------
red
orange
blue
green

And return a resultset like this:
Colors
-------------------------
red,orange,blue,green


This isn't exactly relational, and can certainly be handled by the presentation layer. However, there are kludges in SQL Server that will allow you to derive a solution. We'll start with the simple case above:

CREATE TABLE Colors
(
Color VARCHAR(32)
)
GO

SET NOCOUNT ON
INSERT Colors SELECT 'red'
INSERT Colors SELECT 'orange'
INSERT Colors SELECT 'blue'
INSERT Colors SELECT 'green'
GO

DECLARE @colors VARCHAR(1024)

SELECT
@colors = COALESCE(@colors + ',', '') + Color
FROM
Colors

SELECT Colors = @colors
GO

DROP TABLE Colors
GO

difference between varchar and nvarchar

The difference is that nvarchar is used to store Unicode data, which is needed for multilingual data in your database tables. Other languages have an extended set of character codes that must be saved, and this datatype allows for that extension. If your database will not be storing multilingual data, you should use the varchar datatype instead. The reason is that nvarchar takes twice as much space as varchar, because of the need to store the extended character codes for other languages.
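
A quick sketch of the storage difference, using DATALENGTH to show the bytes used:

```sql
-- varchar stores 1 byte per character; nvarchar stores 2.
DECLARE @a VARCHAR(10), @b NVARCHAR(10)
SET @a = 'hello'
SET @b = N'hello'          -- the N prefix marks a Unicode literal
SELECT DATALENGTH(@a) AS varchar_bytes,   -- 5
       DATALENGTH(@b) AS nvarchar_bytes   -- 10
```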

Monday, June 2, 2008

Write a Registry key

using Microsoft.Win32;
...
RegistryKey masterKey = Registry.LocalMachine.CreateSubKey
    ("SOFTWARE\\Test\\Preferences");
if (masterKey == null)
{
    Console.WriteLine ("Null Masterkey!");
}
else
{
    try
    {
        masterKey.SetValue ("MyKey", "MyValue");
    }
    catch (Exception ex)
    {
        Console.WriteLine (ex.Message);
    }
    finally
    {
        masterKey.Close();
    }
}

Read a Registry key

using Microsoft.Win32;
...
// OpenSubKey is the right call for reading; it returns null if the key
// does not exist instead of creating it.
RegistryKey masterKey = Registry.LocalMachine.OpenSubKey
    ("SOFTWARE\\Test\\Preferences");
if (masterKey == null)
{
    Console.WriteLine ("Null Masterkey!");
}
else
{
    Console.WriteLine ("MyKey = {0}", masterKey.GetValue ("MyKey"));
    masterKey.Close();
}

FxCop

FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements. Many of the issues concern violations of the programming and design rules set forth in the Design Guidelines for Class Library Developers, which are the Microsoft guidelines for writing robust and easily maintainable code by using the .NET Framework.

FxCop is intended for class library developers. However, anyone creating applications that should comply with the .NET Framework best practices will benefit. FxCop is also useful as an educational tool for people who are new to the .NET Framework or who are unfamiliar with the .NET Framework Design Guidelines.

FxCop is designed to be fully integrated into the software development cycle and is distributed as both a fully featured application that has a graphical user interface (FxCop.exe) for interactive work, and a command-line tool (FxCopCmd.exe) suited for use as part of automated build processes or integrated with Microsoft Visual Studio® .NET as an external tool.

For more information, see:
http://msdn.microsoft.com/en-us/library/bb429476(vs.80).aspx

Sunday, June 1, 2008

Transaction and Locks

· What is a “Database Transaction”?
· What is ACID?
· What is “Begin Tran”, “Commit Tran”, “Rollback Tran” and “Save Tran”?
· What are “Checkpoints” in SQL Server?
· What are “Implicit Transactions”?
· Is it good to use “Implicit Transactions”?
· What is Concurrency?
· How can we solve concurrency problems?
· What kind of problems occur if we do not implement a proper locking strategy?
· What are “Dirty reads”?
· What are “Unrepeatable reads”?
· What are “Phantom rows”?
· What are “Lost Updates”?
· What are different levels of granularity of locking resources?
· What are different types of Locks in SQL Server?
· What are different Isolation levels in SQL Server?
· What are different types of Isolation levels in SQL Server?
· If you are using COM+, what “Isolation” level is set by default?
· What are “Lock” hints?
· What is a “Deadlock”?
· What are the steps you can take to avoid “Deadlocks”?
· How can I know what locks are running on which resource?

Database Optimization

· What are indexes?
· What are B-Trees?
· I have a table which has a lot of inserts, is it a good database design to create
indexes on that table?
· What are “Table Scans” and “Index Scans”?
· What are the two types of indexes and explain them in detail?
· What is “FillFactor” concept in indexes?
· What is the best value for “FillFactor”?
· What are “Index statistics”?
· How can we see statistics of an index?
· How do you reorganize your index, once you find the problem?
· What is Fragmentation?
· How can we measure Fragmentation?
· How can we remove the Fragmented spaces?
· What are the criteria you will look in to while selecting an index?
· What is “Index Tuning Wizard”?
· What is an Execution plan?
· How do you see the SQL plan in textual format?
· What is Nested join, Hash join and Merge join in SQL Query plan?
· What joins are good in what situations?
· What is RAID and how does it work?

Replication Questions

· What's the best way to update data between SQL Servers?
· What are the scenarios you will need multiple databases with schema?
· How will you plan your replication?
· What are publisher, distributor and subscriber in “Replication”?
· What is “Push” and “Pull” subscription?
· Can a publication support push and pull at one time?
· What are different models / types of replication?
· What is Snapshot replication?
· What are the advantages and disadvantages of using Snapshot replication?
· What type of data will qualify for “Snapshot replication”?
· What is the actual location where the distributor runs?
· Can you explain in detail how exactly “Snapshot Replication” works?
· What is merge replication?
· How does merge replication work?
· What are advantages and disadvantages of Merge replication?
· What is conflict resolution in Merge replication?
· What is a transactional replication?
· Can you explain in detail how transactional replication works?
· What are data type concerns during replications?

Data Warehousing / Data Mining Questions

· What is “Data Warehousing”?
· What are Data Marts?
· What are Fact tables and Dimension Tables?
· What is Snow Flake Schema design in database?
· What is ETL process in Data warehousing?
· How can we do ETL process in SQL Server?
· What is “Data mining”?
· Compare “Data mining” and “Data Warehousing”?
· What is BCP?
· How can we import and export using BCP utility?
· During BCP, if we need to change the field position or eliminate some fields, how
can we achieve this?
· What is Bulk Insert?
· What is DTS?
· Can you brief about the Data warehouse project you worked on?
· What is an OLTP (Online Transaction Processing) System?
· What is an OLAP (On- line Analytical processing) system?
· What is Conceptual, Logical and Physical model?
· What is Data purging?
· What is Analysis Services?
· What are CUBES?
· What are the primary ways to store data in OLAP?
· What is META DATA information in Data warehousing projects?
· What is multi-dimensional analysis?
· What is MDX?
· How did you plan your Data warehouse project?
· What are different deliverables according to phases?
· Can you explain how analysis service works?
· What are the different problems that “Data mining” can solve?
· What are different stages of “Data mining”?
· What is Discrete and Continuous data in Data mining world?
· What is a MODEL in the Data mining world?
· How are models actually derived?
· What is a Decision Tree Algorithm?
· Can decision tree be implemented using SQL?
· What is Naïve Bayes Algorithm?
· Explain clustering algorithm?
· Explain in detail Neural Networks?
· What is Back propagation in Neural Networks?
· What is Time Series algorithm in data mining?
· Explain Association algorithm in Data mining?
· What is Sequence clustering algorithm?
· What are algorithms provided by Microsoft in SQL Server?
· How does data mining and data warehousing work together?
· What is XMLA?
· What is Discover and Execute in XMLA?