Sunday, March 30, 2008

printers connected to your computer

This code sample retrieves the list of printers installed for the current user (both local and network printers) by reading the registry.

// Read the per-user printer list from the registry.
Microsoft.Win32.RegistryKey rk =
    Microsoft.Win32.Registry.CurrentUser.OpenSubKey(@"Printers\Settings");

if (rk != null)
{
    foreach (string printerName in rk.GetValueNames())
    {
        Console.WriteLine(printerName);
    }
    rk.Close();
}

Convert Database data to XML

Data in a relational database can easily be converted into an XML document in your program. The following code illustrates this.

I have used a SQL Server database table employee (with columns empid, first_name, last_name, deptcd, salary and resig_status). The connection is established using an appropriate connection string. The data from the RDBMS table is stored in the DataSet dsEmp. The XmlDataDocument class is used to convert the data into an XML data file: it takes the DataSet as input and converts it to XML.

// Requires: using System.Data; using System.Data.SqlClient; using System.Xml;
{
    DataSet dsEmp = new DataSet();
    string strCon;
    string strSelect;

    strCon = "data source=SQLEXPRESS;initial catalog=dbemp;persist security info" +
             "=False;user id=sa;password=jadoogar;workstation id=HCL;packet size=4096";
    strSelect = "Select empid,first_name,deptcd from employee";

    try
    {
        // Fill opens and closes the connection itself.
        SqlConnection sqlCon = new SqlConnection(strCon);
        SqlDataAdapter empAdapter = new SqlDataAdapter(strSelect, sqlCon);
        empAdapter.Fill(dsEmp, "employee");

        // Wrap the DataSet in an XmlDataDocument and save it as an XML file.
        XmlDataDocument empDoc = new XmlDataDocument(dsEmp);
        empDoc.Save(MapPath("xmldata/newemp.xml"));
    }
    catch (Exception ex)
    {
        // Log or handle the error instead of silently swallowing it.
        Console.WriteLine(ex.Message);
    }
}

Saturday, March 15, 2008

Data Modeling

Data modeling is the process of creating a data model instance by applying a data model theory. A data model theory is a formal description of a data model. See database model for a list of current data model theories.

When data modeling, we are structuring and organizing data. These data structures are then typically implemented in a database management system. In addition to defining and organizing the data, data modeling will impose (implicitly or explicitly) constraints or limitations on the data placed within the structure.

Managing large quantities of structured and unstructured data is a primary function of information systems. Data models describe structured data for storage in data management systems such as relational databases. They typically do not describe unstructured data, such as word processing documents, email messages, pictures, digital audio, and video.

A data model instance may be one of three kinds (according to ANSI in 1975[1]):

• A conceptual schema (data model) describes the semantics of a domain, being the scope of the model. For example, it may be a model of the interest area of an organization or industry. This consists of entity classes (representing kinds of things of significance in the domain) and relationships (assertions about associations between pairs of entity classes). A conceptual schema specifies the kinds of facts or propositions that can be expressed using the model. In that sense, it defines the allowed expressions in an artificial 'language' with a scope that is limited by the scope of the model.
• A logical schema (data model) describes the semantics as represented by a particular data manipulation technology. This consists of descriptions of tables and columns, object-oriented classes, and XML tags, among other things.
• A physical schema (data model) describes the physical means by which data are stored. This is concerned with partitions, CPUs, tablespaces, and the like.
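
As a minimal sketch of how a conceptual model becomes a logical and physical schema (the table and column names below are illustrative, not taken from any particular source), the conceptual assertion "every employee works in exactly one department" could be expressed in T-SQL as:

-- Logical/physical schema for the conceptual model
-- "every Employee works in exactly one Department".
CREATE TABLE Department
(
    DeptCode CHAR(4) NOT NULL PRIMARY KEY,
    DeptName VARCHAR(50) NOT NULL
)

CREATE TABLE Employee
(
    EmpId INT NOT NULL PRIMARY KEY,
    FirstName VARCHAR(50) NOT NULL,
    LastName VARCHAR(50) NOT NULL,
    DeptCode CHAR(4) NOT NULL REFERENCES Department (DeptCode)  -- the relationship
)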

Debug a T-SQL Stored Procedure

1) In a new SQL Server project, establish a connection to the AdventureWorks sample database.

2) Create a new stored procedure using the code from the first example section below, and name it HelloWorld.


CREATE PROCEDURE HelloWorld
AS
DECLARE @mynvarchar NVARCHAR(50),
@myfloat FLOAT
SET @mynvarchar = @@VERSION
SET @mynvarchar = 'Hello, world!'
SET @myfloat = 1.6180
PRINT @mynvarchar
RETURN (0)

3) Set breakpoints in HelloWorld, and step into the stored procedure. The instruction pointer, designated by a yellow arrow, will appear on the line SET @mynvarchar = @@VERSION, the first executable line of code in the stored procedure.

4) Try out different debugging features.

4.1) Make the Locals window visible. To do so, on the Debug menu, click Windows, and then click Locals. Notice that the parameters and local variables are displayed in the Locals window with their corresponding values. You can edit the values of the variables in the Locals window as the stored procedure runs.

4.2) Press F10 to step one line in the stored procedure. Notice that the value of the variable @mynvarchar has changed in the Locals window and its value is now displayed in red, indicating it has changed.

4.3) Make the Watch window visible. To do so, on the Debug menu, click Windows, and then choose Watch.

4.4) In the Text Editor, double-click the @mynvarchar variable to select it. Drag @mynvarchar to any location on the Watch window. The variable is now added to the list of watched variables.

4.5) In the Text Editor, right-click the line Return (0), and on the shortcut menu, click Insert Breakpoint.

4.6) On the Debug menu, click Continue.

5) Choose Continue again to finish debugging the stored procedure.

Enable CLR Debugging

1) Open Server Explorer.

2) In Server Explorer, right-click on the connection you want to debug and choose Allow SQL CLR Debugging. This setting enables SQL CLR debugging for all connections on the server.

3) A message box appears with the warning: "SQL CLR debugging will cause all managed threads on the server to stop. Do you want to continue?". When you are debugging SQL CLR database objects, breaking execution will break all threads on the server, affecting other users. For this reason, you should not debug SQL CLR applications on a production server.

4) Click Yes to enable debugging.

Sunday, March 9, 2008

undocumented extended stored procedures

An extended stored procedure (xp) is a dynamic link library that runs directly in the address space of SQL Server and is programmed using the SQL Server Open Data Services API. You can run extended stored procedures from the Query Analyzer, for example, just as you would normal stored procedures. Extended stored procedures are used to extend the capabilities of SQL Server. You can take advantage of the many extended stored procedures that come with SQL Server, or you can write your own in a programming language such as C or C++.
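
If you do write your own, the compiled DLL has to be registered before it can be called; a minimal sketch (the procedure and DLL names here are made up for illustration) looks like this:

-- Run in the master database: register a custom extended stored
-- procedure, then remove it again.
EXEC sp_addextendedproc 'xp_hello', 'xp_hello.dll'
EXEC sp_dropextendedproc 'xp_hello'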

sp_MSgetversion
This extended stored procedure can be used to get the current version of Microsoft SQL Server. To get the current SQL Server version, run

EXEC master..sp_MSgetversion

Note: a more common way to retrieve the current SQL Server version (and one that provides more information) is to use the following SELECT statement:

SELECT @@version

xp_dirtree
This extended stored procedure can be used to get a list of all the folders under the folder passed to the xp. To get a list of all the folders under C:\MSSQL7, run:

EXEC master..xp_dirtree 'C:\MSSQL7'


xp_enum_oledb_providers
This extended stored procedure lists all the available OLE DB providers. It returns the provider name, parse name and provider description. To get a list of all OLE DB providers for your SQL Server, run:

EXEC master..xp_enum_oledb_providers


xp_enumcodepages
This extended stored procedure can be used to list all code pages and character sets, with their descriptions, for your SQL Server. To get a list of all code pages and character sets, run:

EXEC master..xp_enumcodepages


Keep watching for more information

Count T-SQL code execution

The first thing you can do is simply compute the difference between a timestamp taken BEFORE your query and one taken AFTER it.

DECLARE @start DATETIME, @End DATETIME
SET @start = CURRENT_TIMESTAMP

DECLARE @i INT
SET @i = 0
WHILE @i < 24000
BEGIN
SET @i = @i + 1
END
SET @End = CURRENT_TIMESTAMP
SELECT DATEDIFF(MS, @start, @End)
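
DATEDIFF(MS, ...) reports the elapsed time in milliseconds. As an alternative (not part of the original example), you can ask SQL Server itself to report parse, compile and execution times for each statement in the session:

-- Ask SQL Server to report CPU and elapsed time for each statement.
SET STATISTICS TIME ON

-- ... the batch you want to measure goes here ...

SET STATISTICS TIME OFF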

CASE in SQL Server

One of the keys to database performance is keeping your transactions as short as possible. We will look at a couple of tricks using the CASE statement to perform multiple updates on a table in a single operation.

This example uses the pubs database to adjust book prices for a sale by different amounts according to different criteria. In the example I am going to knock 25% off all business books from any publisher, and 10% off any non-business books from a particular publisher. You might be tempted to wrap two separate update statements into one transaction like this:

begin tran
update titles set ...
update titles set ...
commit tran

The downside of this technique is that it will read through the table twice, once for each update. If we code our update like the example below, the table will only need to be read once. For large tables this can save us a lot of disk IO, especially if the query requires a table scan over a large table.

update titles
set price =
    case
        when type = 'business'
            then price * 0.75
        when pub_id = '0736'
            then price * 0.9
    end
where pub_id = '0736'
   or type = 'business'


Note that there is a definite "top-down" priority involved in the CASE statement. For business books from publisher 0736, the "business" discount will apply because this is the first condition in the list to be fulfilled. However, we will not give a further 10% publisher discount, even though the criterion for the second "when" clause is also satisfied, because the CASE statement only evaluates conditions until it finds the first one that fits.
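
If you actually wanted books that meet both criteria to receive both discounts, one way (a sketch, not part of the original example) is to put a combined condition first, since the first WHEN that matches wins:

update titles
set price =
    case
        -- the combined condition must come first, or it would never be reached
        when type = 'business' and pub_id = '0736'
            then price * 0.75 * 0.9
        when type = 'business'
            then price * 0.75
        when pub_id = '0736'
            then price * 0.9
    end
where pub_id = '0736'
   or type = 'business'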

Improving ASP.NET Performance

To build ASP.NET applications that meet your performance objectives, you need to understand the places where bottlenecks typically occur, the causes of the bottlenecks, and the steps to take to prevent the bottlenecks from occurring in your application. A combination of sound architecture and design, best practice coding techniques, and optimized platform and Microsoft .NET Framework configuration is required.

I came across a very good article on improving the performance of ASP.NET applications:


http://msdn2.microsoft.com/en-us/library/ms998549.aspx

Impersonation in ASP.NET

At times a resource is accessed under an identity other than that of the actual user; this is known as impersonation. For example, if a web page has no access controls, then any user can access that web page. HTML pages, ASP pages, and components in ASP 3.0 and earlier can be accessed through two accounts named IUSR_machinename and IWAM_machinename. Both accounts are set up during IIS installation, and are automatically added to all the folders in every web site on the server.

Anonymous access to a resource in IIS makes the task of identifying a user extremely difficult, but in this case there is no need to authenticate the user. When IIS receives a request for a web page or other resource that has permission for anonymous access, IIS uses the IUSR_machinename account as the user's account to access the resources. If the resource requested is an ASP page that uses a COM or COM+ component, that component is executed using the IWAM_machinename account.

In ASP.NET, when impersonation is turned off, the resources can be accessed using a "local system process" account. When impersonation is turned on, ASP.NET executes every resource using the account of a specified user who is authenticated when the user makes the request. If you specify the IUSR_machinename account to be used as the user account, then ASP.NET will behave like previous versions of ASP, in providing access to the resources.

In ASP.NET, you first need to check whether the application is configured to use impersonation. IIS impersonates anonymous users with its own IUSR account. In ASP.NET, the impersonation setting decides whether the user's request should be executed using the account of the requesting user, or the local system-process account that ASP.NET uses for anonymous requests.

The concept of impersonation is somewhat complex because ASP.NET uses the dynamic compilation features of the .NET Framework. The IUSR account has only limited permissions on the local machine, and so is not suitable without some reconfiguration. This account is also used by IIS to access resources like HTML pages, documents, and zip files that are not executed as part of the .NET Framework.

If impersonation is enabled in an ASP.NET application then:
• If anonymous access is enabled in IIS, the request is made using the IUSR_machinename account.
• If anonymous access is disabled in IIS, the request is made using the account of the authenticated user.
• In either case, permissions for the account are checked in the Windows Access Control List (ACL) for the resource(s) that a user requests, and a resource is only available if the account they are running under is valid for that resource.

If impersonation is disabled in an ASP.NET application then:
• If anonymous access is enabled in IIS, the request is made using the system-level process account.
• If anonymous access is disabled in IIS, the request is made using the account of the authenticated user.
• In either case, permissions for the account are checked in the Windows ACL for the resource(s) that a user requests, and a resource is only available if the account they are running under is valid for that resource.

Monday, March 3, 2008

Constraints

There are a number of different ways to implement constraints, but each of them falls into one of these three categories: entity, domain, and referential integrity constraints.

Domain Constraints: A Domain constraint deals with one or more columns. It is important to ensure that a particular column or a set of columns meets particular criteria. When you insert or update a row, the constraint is applied without respect to any other row in the table. The focus is on the data that is in the column. These kinds of constraints will resurface when we deal with Check constraints, Default constraints and rules and defaults.

Entity Constraints: Entity constraints are all about individual rows. This constraint ignores the column as a whole and focuses on a particular row. This can be best exemplified by a constraint that requires every row to have a unique value for a column or a combination of columns. This is to ensure that for a particular row, the same value does not already exist in some other row. We’ll see this kind of constraint in dealing with Primary key and Unique constraints.

Referential Integrity Constraints: Referential integrity constraints are created when a value in one column must match a value in another column, either in the same table or, more typically, in a different table. For example, we are taking orders for a product, and we accept credit payment, but only from a few standard credit card companies like Visa, MasterCard, Discover, and American Express. Referential integrity constraints allow us to build what we would call a domain table: a table whose sole purpose is to provide a limited list of acceptable values. In our case we have a CreditCard table with CreditCardID and CreditCard as fields. We can then build one or more tables that reference the CreditCardID column of our domain table. With referential integrity, any table that is defined as referencing our CreditCard table will have to have a column that matches up to the CreditCardID column of our CreditCard table. Each row we insert into the referencing table will have to have a value that is in our domain list. We will see more of this when we learn about Foreign key constraints.
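
As a sketch of all three categories using the CreditCard example above (the Orders table and its columns are made up for illustration):

-- Domain table: the limited list of acceptable credit card companies.
CREATE TABLE CreditCard
(
    CreditCardID INT NOT NULL PRIMARY KEY,      -- entity constraint
    CreditCard VARCHAR(20) NOT NULL UNIQUE      -- entity constraint
)

-- A referencing table (illustrative).
CREATE TABLE Orders
(
    OrderID INT NOT NULL PRIMARY KEY,
    OrderAmount MONEY NOT NULL
        CHECK (OrderAmount > 0),                -- domain constraint
    CreditCardID INT NOT NULL
        REFERENCES CreditCard (CreditCardID)    -- referential integrity constraint
)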

Keep watching for more information

Saturday, March 1, 2008

Heap

When you first create a new table, no index is created by default. In technical terms, a table without a clustered index is called a "heap". We can confirm that this new table doesn't have an index by taking a look at the sysindexes system table, which contains a row for this table with an indid of 0. The sysindexes table, which exists in every database, tracks table and index information. "Indid" refers to Index ID, and is used to identify indexes. An indid of 0 means that a table has no clustered index, and is stored by SQL Server as a heap.
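
A quick way to confirm this (the table name below is just a placeholder) is to query sysindexes directly:

-- A heap has a sysindexes row with indid = 0.
SELECT id, indid, name
FROM sysindexes
WHERE id = OBJECT_ID('MyNewTable')
  AND indid = 0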