Tuesday, July 03, 2007

 

SQL Formatting

You can buy the RedGate Sql Prompt bundle and it will format your SQL code for you. You can also do it for free at:

http://www.wangz.net/cgi-bin/pp/gsqlparser/sqlpp/sqlformat.tpl

Wednesday, October 25, 2006

 

SQL Developers: Download SQLPrompt

RedGate, the providers of several excellent SQL Server-related software products, have a free download called SQLPrompt.
It's very well done: IntelliSense for your SQL Server connection, and it works in Query Analyzer, VS.NET, and SSMS.

An example of its robustness: having typed "FROM myTable INNER JOIN ", I'll automatically get a list of tables with foreign keys referencing myTable, or with primary keys that myTable references. Having selected one, then continuing to the ON phrase of my join, SQLPrompt will offer the right expression to complete the join.

If you go into the advanced options you'll find such goodies as "automatically change keywords to uppercase as I type."

It gets annoying sometimes when the software becomes a little confused, but not too often, and when it does I just reach down to the system tray and turn it off.

/T

Friday, October 20, 2006

 

Idempotence in dropping tables and columns

When working on a database with a team of people all making changes, it's a great idea to make all change scripts idempotent, which in this case means "always having exactly the same output as the first time it was run." See Phil's article for a great demonstration of the principle and practice.

I have found that using the object_id() function and the system tables lets me write a little less code than the information_schema views require - I'm too lazy to type information_schema.referential_constraints too many times. Tables, check constraints, default constraints, primary key and unique constraints, and foreign key constraints can all be tested for existence using object_id('objectName'), which returns null if the object isn't in the database.
The syscolumns table can be used to test for column existence.

Note that in SQL Server 2005, these tables have been replaced by the sys.objects and sys.columns catalog views. I haven't tested this script against those views, but my SQL Server 2005 instance still allows me to use the table names from SQL Server 2000.
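If you do want to target the new catalog views directly, here's a minimal, untested sketch of the same kind of existence checks; the table, column, and constraint names are borrowed from the example further down:

IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = object_id('dbo.Apples')
                 AND name = 'OrangeId')
BEGIN
    PRINT 'Column dbo.Apples.OrangeId does not exist yet'
END

IF EXISTS (SELECT 1 FROM sys.objects
           WHERE object_id = object_id('FK_Apples_Oranges'))
BEGIN
    PRINT 'Constraint FK_Apples_Oranges already exists'
END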

Be careful with check constraints and foreign key constraints that are auto-generated by the database system: the names given by one instance of SQL Server will be different from those given by another, so if you're checking for existence of a constraint named 'FK__tblMyTab_O_9t5325', your script might only be idempotent on one machine. In fact, it may throw errors on any other instance of the database.

Two ways to deal with this:
1) When creating database objects, always name your constraints. This has a slightly higher up-front cost, but it makes it much easier to alter the objects with DDL later on. Here's an example of how to achieve that:

IF object_id('dbo.Apples') IS NULL
BEGIN
    CREATE TABLE [dbo].[Apples](
        [Id] [int] IDENTITY(1,1) NOT NULL,
        [Description] [nvarchar](50),
        CONSTRAINT [PK_Apples_Id] PRIMARY KEY CLUSTERED ([Id] ASC),
        CONSTRAINT [UKey_Apples] UNIQUE NONCLUSTERED ([Description] ASC)
    )
END
GO

IF object_id('dbo.Oranges') IS NULL
BEGIN
    CREATE TABLE [dbo].[Oranges](
        [Id] [int] IDENTITY(1,1) NOT NULL,
        [Description] [nvarchar](50),
        CONSTRAINT [PK_Oranges_Id] PRIMARY KEY CLUSTERED ([Id] ASC),
        CONSTRAINT [UKey_Oranges] UNIQUE NONCLUSTERED ([Description] ASC)
    )
END
GO

IF NOT EXISTS (SELECT 1 FROM syscolumns
               WHERE id = object_id('Apples')
                 AND name = 'OrangeId')
BEGIN
    ALTER TABLE Apples
        ADD OrangeId int NOT NULL
        CONSTRAINT DF_Apples_OrangeId DEFAULT 1 WITH VALUES
END

IF object_id('FK_Apples_Oranges') IS NULL
BEGIN
    ALTER TABLE Apples
        ADD CONSTRAINT FK_Apples_Oranges
        FOREIGN KEY (OrangeId)
        REFERENCES Oranges (Id)
END

IF NOT EXISTS (SELECT 1 FROM syscolumns
               WHERE id = object_id('Apples')
                 AND name = 'PickedOn')
BEGIN
    ALTER TABLE Apples ADD PickedOn DateTime
        CONSTRAINT DF_Apples_PickedOn DEFAULT GetDate()
END

-- Note that if I try to drop the PickedOn column at this point the command will fail
-- because of the default value. Luckily, there's a name associated with that constraint:
IF object_id('DF_Apples_PickedOn') IS NOT NULL
BEGIN
    ALTER TABLE Apples DROP CONSTRAINT DF_Apples_PickedOn
END

IF object_id('UKey_Apples') IS NOT NULL
BEGIN
    ALTER TABLE Apples DROP CONSTRAINT UKey_Apples
END

IF object_id('PK_Apples_Id') IS NOT NULL
BEGIN
    ALTER TABLE Apples DROP CONSTRAINT PK_Apples_Id
END




2) If you already have objects with weird names, you can still find them using the
information_schema views:


DECLARE @constraint_name sysname,
        @cmd varchar(1000)

WHILE EXISTS (SELECT 1
              FROM information_schema.referential_constraints
              WHERE constraint_name LIKE 'FK__tblTask%')
BEGIN
    SELECT @constraint_name = (
        SELECT TOP 1 constraint_name
        FROM information_schema.referential_constraints
        WHERE constraint_name LIKE 'FK__tblTask%')

    PRINT 'Dropping constraint ' + @constraint_name

    SELECT @cmd = 'ALTER TABLE dbo.tblTask_Object
                   DROP CONSTRAINT ' + @constraint_name
    EXEC (@cmd)
END



Wednesday, October 18, 2006

 

Performance Follow up: SqlHelper, object[] vs SqlParameter[]

On a quick-and-dirty performance test, as suggested by Phil, performance seems to differ only negligibly, but on average, I (and the report I cited) was wrong.

Calling the methods included in a previous post, 5000 times each:
With parameters took 1203.125 milliseconds.
With objects took 1250 milliseconds.
Objects took 46.875 milliseconds longer.

20000 times each:
With parameters took 4859.375 milliseconds.
With objects took 5015.625 milliseconds.
Objects took 156.25 milliseconds longer.

If this test is accurate, it costs pretty much a constant 0.0078 ms per call to use the SqlHelperParameterCache object to look up the parameters.

Thanks, Phil! If we go by Haacked's recommendations on optimization, then in this case we should clearly go with the option that makes the code easiest to read, write, and maintain.

 

Code Sample: Passing object[] to SqlHelper

Thanks, Steve. Of course there should be some code. Here it is.
Apologies if the formatting gets mangled: the fact that CopyAsHtml doesn't work well at blogger.com is probably the biggest reason I should be blogging with Subtext.


// Call SqlHelper with the 'params SqlParameter[]' overload
public void GetWithSqlParams(SystemUser aUser)
{
    // The parameter names must match the stored procedure's parameters;
    // the names below are illustrative.
    SqlParameter idParam = new SqlParameter("@id", aUser.Id);
    SqlParameter nameParam = new SqlParameter("@name", aUser.Name);
    SqlParameter emailParam = new SqlParameter("@email", aUser.Email);
    SqlParameter loginParam = new SqlParameter("@lastLogin", aUser.LastLogin);
    SqlParameter logoutParam = new SqlParameter("@lastLogout", aUser.LastLogOut);

    SqlHelper.ExecuteNonQuery(
        Settings.ConnectionString, CommandType.StoredProcedure, "User_Update",
        idParam,
        nameParam,
        emailParam,
        loginParam,
        logoutParam);
}

// Call SqlHelper with the 'params object[]' overload.
// The values must be supplied in the same order as the stored procedure's parameters.
public void GetWithObjects(SystemUser aUser)
{
    SqlHelper.ExecuteNonQuery(
        Settings.ConnectionString, "User_Update",
        aUser.Id,
        aUser.Email,
        aUser.Name,
        aUser.LastLogin,
        aUser.LastLogOut);
}

// Which is semantically identical with:
public void GetWithObjectArray(SystemUser aUser)
{
    SqlHelper.ExecuteNonQuery(
        Settings.ConnectionString, "User_Update",
        new object[]
        {
            aUser.Id,
            aUser.Email,
            aUser.Name,
            aUser.LastLogin,
            aUser.LastLogOut
        });
}


 

Pass Parameters as Objects to MS Data Access Application Blocks

Intuition seems to say that it's better to call the SqlHelper overloads that take SqlParameter[] for the parameters to a stored procedure or query.

But in fact, you'll get a performance boost, in addition to easier coding, by calling the overload that takes object[]. That's because the parameters created on the first call to that proc or query get cached, so SqlHelper doesn't have to create the SqlParameter objects anew; it only has to set their values and attach them to the command.

Here's a snippet from the Informit .Net Reference Guide:

SqlHelperParameterCache

Many of the SqlHelper methods query the database to discover stored procedure parameters. This is a huge benefit in that it prevents you from having to write code that creates SqlParameter arrays, but it's woefully inefficient if you do it for every stored procedure call. The SqlHelperParameterCache is a way to provide the convenience of parameter discovery while minimizing the cost. When you call one of the methods that does parameter discovery, SqlHelper checks the parameter cache to see if there has already been a call to that particular stored procedure on the supplied connection or connection string. If so, the parameter types are retrieved from the in-memory cache. If not, the SqlHelperParameterCache object discovers the parameters and then caches them. The result is that the first call to a given stored procedure incurs the cost of discovering the parameters, but subsequent calls get them from the cache, almost free. The cost of getting the parameters from the cache is less than creating the parameter array yourself for each call. The initial hit is expensive, but amortized over the application's lifetime, it's not expensive at all.
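For illustration, here's a minimal sketch of hitting that cache directly. It assumes the Microsoft.ApplicationBlocks.Data version of SqlHelper; the ParameterCacheDemo class is just a scratch harness, and the connection string and "User_Update" procedure name are placeholders from the code sample above:

using System;
using System.Data.SqlClient;
using Microsoft.ApplicationBlocks.Data;

public class ParameterCacheDemo
{
    public static void Run(string connectionString)
    {
        // First call: SqlHelperParameterCache queries the database to
        // discover the stored procedure's parameters, then caches them.
        SqlParameter[] first = SqlHelperParameterCache.GetSpParameterSet(
            connectionString, "User_Update");

        // Second call: the parameter set comes straight from the in-memory
        // cache, so there's no extra round trip to the database.
        SqlParameter[] second = SqlHelperParameterCache.GetSpParameterSet(
            connectionString, "User_Update");

        Console.WriteLine("Cached copy has {0} parameters.", second.Length);
        foreach (SqlParameter p in first)
        {
            Console.WriteLine("{0} ({1})", p.ParameterName, p.SqlDbType);
        }
    }
}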


Monday, October 16, 2006

 

Stopping the War Problem

First, an anecdote from Michael Nagler, department chair of Peace and Conflict Studies at UC Berkeley:

A woman was in Washington, DC, doing work for an upcoming Anti-War action near the capitol. She happened to wind up in an elevator standing among four men in colorful military outfits.
Feeling a little uncomfortable, she made polite smalltalk: "What are you gentlemen doing?" One of them proudly responded to the effect of, "We are lobbying for funds to be allocated for a new missile defense system. And what are you doing?"
"Protesting the actions of the United States military."
Uncomfortable silence followed. Then, as the men filed out of the elevator, one of them turned to her and said quietly, "Hurry up!"


In Nagler's words: This story is indicative of the current state of the United States population. Out of five:
One is probably sold that war is the right way to handle today's conflicts.
Two are probably unsure of what to do and are going along, perhaps because they haven't thought it through all the way.
One is probably actively doing something to stop the proliferation of unjust/illegal war.
...And at least one knows that the war is wrong, but is depending on others to pave the way.

The message: it does take a little courage to start speaking out against actions that have been proclaimed matters of national security. But suppose you agree with the pundits who claim that, beyond the tragic deaths of many thousands of United States, Afghani, and Iraqi citizens, and beyond the deception credited with winning U.S. support for this war in the first place, the current war and war plans pose severe threats to U.S. security. Then, by engaging in some dialog, we might all find ourselves in the company of others who agree but who haven't yet known whom to talk to about it.

And if you have talked much on this topic, you may have been placed in the difficult position of not knowing how to answer the question, "You say the U.S. should pull out of Iraq; how do you think that should be done?" The current issue of Harper's Magazine includes a very well-conceived overview of a plan, complete with cost estimates and a realistic outlook.

This issue of Harper's also includes some very good food for thought about U.S. military plans regarding Iran.

Wednesday, September 27, 2006

 

Unzip your files

I really don't like the unzip feature that comes with the Windows shell. It's terribly slow if there are many files in the archive.
I also don't like 7zip, which I've seen used around; it frequently isn't able to read zip files.
I do like ICSharpCode.SharpZipLib, an open-source compression library.

Here's a little PowerShell script that will extract zip files for you:

Param($source, $destination = ".")

$sourceFile = get-item $source
$destDir = get-item $destination

function using-library {
    param($ns = $(throw "Enter Namespace to import"), [bool]$SuppressOutput = $false)

    trap { throw "Could not import Namespace $($ns)" }

    if ($SuppressOutput) {
        [void][System.Reflection.Assembly]::LoadWithPartialName($ns)
    } else {
        [System.Reflection.Assembly]::LoadWithPartialName($ns)
    }
}

using-library "ICSharpCode.SharpZipLib" $true

$unzip = new-object ICSharpCode.SharpZipLib.Zip.FastZip
$unzip.ExtractZip($sourceFile.FullName, $destDir.FullName, "Always", $null, "", "")

You'll have to add the assembly to the GAC. Open a Visual Studio SDK command window and type the command,

c:\>gacutil /i <path_to_ICSharpCode.SharpZipLib.dll>


Also, notice that in this script I am still instantiating objects using fully-qualified names. Can anyone tell me how to import namespaces into the global symbol table within a PowerShell script?

 

Adding Custom Logging Properties to your Log4Net configuration

In the Log4Net release notes there's a note about how to create your own custom pattern element in the logging configuration, so that, e.g., your conversion pattern might look like:

"%-5level %logger - %myConverter - %message%newline"

Maybe someone can comment on why this would be better than what I will recommend. Anyway, if you take this approach and want to include information about the actual logging event, you should know two things (a sketch follows this list):
1) Subclassing PatternLayoutConverter instead of PatternConverter gives you a Convert method that receives the LoggingEvent directly.

2) The state object passed to PatternConverter.Convert is actually a LoggingEvent object!
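
For reference, here's a minimal, untested sketch of what that converter approach might look like. The class name and the "USER_IP" property key are just examples, and the converter would still need to be registered with a <converter> element under the PatternLayout in your configuration:

using System.IO;
using log4net.Core;
using log4net.Layout.Pattern;

// Hypothetical converter that writes one of the logging event's properties.
public class UserIpPatternConverter : PatternLayoutConverter
{
    protected override void Convert(TextWriter writer, LoggingEvent loggingEvent)
    {
        // PatternLayoutConverter hands us the LoggingEvent directly,
        // so we can pull whatever we need off of it.
        object userIp = loggingEvent.LookupProperty("USER_IP");
        writer.Write(userIp ?? string.Empty);
    }
}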

That works, but I found it easier to create a new appender, which simply adds objects to the LoggingEvent's properties collection.

Here's the appender class:

using System.Web;
using log4net.Appender;
using log4net.Core;

public class PropertiesExpanderAppender : ForwardingAppender
{
    protected override void Append(LoggingEvent loggingEvent)
    {
        // Stamp web request details onto the event, if we're in a web context.
        if (null != HttpContext.Current)
        {
            HttpContext context = HttpContext.Current;
            loggingEvent.Properties["USER_IP"] = context.Request.UserHostAddress;
            loggingEvent.Properties["REQUEST_URI"] = context.Request.Url.PathAndQuery;
        }

        // UserProperties is an application-specific class holding the current user.
        if (null != UserProperties.CurrentUser)
        {
            UserProperties userProperties = UserProperties.CurrentUser;
            loggingEvent.Properties["USER_EMAIL"] = userProperties.UserName;
            loggingEvent.Properties["USER_NAME"] = userProperties.Name;
            loggingEvent.Properties["USER_ID"] = userProperties.UserId;
        }

        // Delegate to the base class, which will forward to the referenced appenders
        base.Append(loggingEvent);
    }
}


Here are the important snippets from the log4net configuration:

<appender name="PropertiesExpanderAppender" type="NameSpace.PropertiesExpanderAppender">
  <appender-ref ref="RollingLogFileAppender" />
</appender>

<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
  ...
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date ... %P{USER_IP} %P{REQUEST_URI} %P{USER_EMAIL} %P{USER_NAME} ..." />
  </layout>
</appender>

<root>
  <level value="ALL" />
  <appender-ref ref="PropertiesExpanderAppender" />
</root>


Note the %P{USER_IP} pattern elements, which correspond to the properties added by the forwarding appender. Now you can add as many logging elements as you want, without having to add a bunch of classes.

 

Getting your Log4Net AdoNetAppender running w/ Sql Server

After reading through Michael Flanakin's post, Log4Net is Crap4Crap, and after some further hardship getting the AdoNetAppender to work, I have my own list of recommendations for anyone trying to get it working.

1) Don't rely on log4net to give you feedback. I did set log4net.Util.LogLog.InternalDebugging = true, and still didn't get any help coming through my RollingLogFileAppender. (As far as I can tell, internal debug messages go to the console and to System.Diagnostics.Trace rather than through your own appenders, which may be why nothing showed up there.) If you can add anything to this, please do, for posterity.

2) Start a SQL Profiler and watch all login, failed login, and start command events.
Then you'll know whether the Appender is actually managing to execute the command and, if so, you'll get to have a look at what's wrong with it.

3) Don't let your numeric values be set to null. Log4net will die. And be ready for null strings to actually be sent as the string value, "(null)". This might only be an issue if you're using custom logging properties.

4) Start simple. I wound up succeeding by adding defaults to the parameters on my stored procedure, and then adding the parameters one by one to the AdoNetAppender configuration.

Thursday, September 21, 2006

 

HTTP ERROR: 400

Trying to get a self-signed SSL certificate installed on Windows Server 2003, sp1? Good luck.

I spent a few hours trying to use selfcert.exe to install a certificate on an x64 machine, 2003 Server R2, sp1. Every time I made a request to a URI other than the root, the response I got from the server was HTTP ERROR: 400. Period.

Here's what worked for me:

1) Go to Start->Programs->IIS Resources->SelfCert.exe and run selfcert.exe to create a certificate on the default web site.

2) Open MMC and add the Certificates snap-in. Specify the Computer Account when asked which certificates to open.

3) Find the newly created certificate under Personal, and export it to a .pfx file. Be sure to specify a password, as Windows won't let you import the certificate without entering something into the password field.

4) On the Windows server, Start->Run->inetmgr to open Internet Services Manager. Under Directory Security in the web site properties, import the certificate from the .pfx file.

That worked!

By the way, don't use CN= if you want SQL Server to continue working; use a different canonical name, such as the hostname of the site you're trying to protect. See http://support.microsoft.com/kb/302409/EN-US/ if you're using SelfCert and suddenly you're getting SSL errors connecting to SQL Server.
