I was working on an SSIS package the other day and began looping over the results of a query, which involved adding a variable to the loop for the field value I needed. The database column has a datatype of varchar(50), and the variable it was being set to in the loop was marked as a string. However, I was getting an error message that read as follows:
How do you call SQL stored procedures from ColdFusion?
I recently started building an MSSQL stored procedure, which we are implementing to ensure that the various areas requiring the data all work with the same dataset. The reason we went with a stored procedure, over a ColdFusion component, is that this stored procedure will be accessed from at least an SSIS package and a ColdFusion script.
I am relatively new to stored procedures, and so far I've found two effective ways to call a stored procedure from ColdFusion: using the CFStoredProc tag, and using EXEC within a CFQuery tag.
Below are some examples of what I have found in my tests.
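Here is a sketch of both approaches; the datasource name, procedure name, and parameter are hypothetical stand-ins:

```cfml
<!--- Option 1: the CFStoredProc tag --->
<cfstoredproc procedure="sp_get_clients" datasource="myDSN">
	<!--- pass an input parameter --->
	<cfprocparam type="in" cfsqltype="cf_sql_integer" value="#segmentID#">
	<!--- capture the first result set into a query object --->
	<cfprocresult name="qClients">
</cfstoredproc>

<!--- Option 2: EXEC inside a CFQuery tag --->
<cfquery name="qClients" datasource="myDSN">
	EXEC sp_get_clients
		@segment_id = <cfqueryparam cfsqltype="cf_sql_integer" value="#segmentID#">
</cfquery>
```

In my tests, CFStoredProc is the better fit when you need multiple result sets or OUTPUT parameters, while EXEC inside CFQuery is quicker to write when you just need a single result set back.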
The title of this blog post is literally the phrase I type into the Google search engine every time I need to use a join in SQL. I've looked at many articles about joins, searching for some little rhyme or acronym to help me remember what will be included with each join, but I think I'm just one of those people who will always have to look it up.
Here is a link to Coding Horror: A Visual Explanation of SQL Joins. The article is short and sweet and tells you exactly what you need to know about joins.
Does anyone have an easy way of remembering this?
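For what it's worth, the part I always forget is easiest to see side by side; a quick sketch with two hypothetical tables:

```sql
-- Hypothetical tables: users(id, name) and orders(user_id, total)

-- INNER JOIN: only users that have at least one matching order
SELECT u.name, o.total
FROM users u
INNER JOIN orders o ON o.user_id = u.id;

-- LEFT JOIN: every user; the order columns come back NULL where no match exists
SELECT u.name, o.total
FROM users u
LEFT JOIN orders o ON o.user_id = u.id;
```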
Selecting a top number of results in Microsoft SQL Server is very easy. You just use the TOP indicator and indicate a specific number or a percentage. But, what if you want to select a variable number of results? Below are examples showing how to dynamically select a specific number of results leveraging ColdFusion and SQL.
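A minimal sketch of both forms (the table and column names are hypothetical); note that the dynamic form requires parentheses around the expression:

```sql
-- Fixed: a literal number or a percentage
SELECT TOP 10 * FROM clients ORDER BY created_date DESC;
SELECT TOP 10 PERCENT * FROM clients ORDER BY created_date DESC;

-- Dynamic: TOP accepts a parenthesized expression, such as a variable
DECLARE @rowCount int;
SET @rowCount = 25;
SELECT TOP (@rowCount) * FROM clients ORDER BY created_date DESC;
```

From ColdFusion, the number can be supplied inside the parentheses with a cfqueryparam of type cf_sql_integer, so the count never has to be hard-coded into the SQL string.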
Recently I was tasked with setting up and importing a group of large TSV text files into a SQL database using a SQL Server Integration Services (SSIS) package. I have accomplished this task many times before and always thought the longest, most annoying part was setting up the data types for each column in the text file. Anyone who has done this knows that the auto-detect data types feature of setting up a flat file connection is not a silver bullet. This time around I had too many files with too many possible columns that were going into tables that were already set up and defined. Believe it or not, there is no way to link the flat file connection to a table to grab the data types from there, so I was dreading this task. There had to be a better way.
After much googling, the answer actually came to me deep in a thread of comments on some SSIS forum. A random user with no name simply said, "Why not just perform the task backwards?" Backwards? What could he mean? After much thought on the comment, it hit me: I realized what he might be alluding to, and hopefully he was right.
Instead of importing from a flat file into my table, I started by setting up my task to import my table into an empty flat "txt" file. Here are the steps.
- Create an empty flat text file (this will be used as a dummy file).
- In SSIS, create a Data Flow task.
- Create an OLE DB data source and connect it to your table.
- Create a flat file destination, create a new connection, and connect it to your empty file.
- Drag the arrow to connect the two.
- Open the flat text file properties and go to the columns.
- Your columns should now be set for the flat file connection to match the columns of the table.
- Delete the Data Flow task.
- Modify the connection and set it to the real text file.
You should now have a new flat file connection saved that has the proper columns from the table to use for your flat file import. Hopefully someone finds this helpful. It saved me hours of manually setting each column's type.
While searching for SQL joins, I found an article that I believe explains the different types of joins pretty well: A Visual Representation of SQL Joins.
I don't know about you, but remembering what is returned with each different type of join is not my strong suit.
Do you have any tricks or sayings that help you?
While looking into an error with an SSIS task that executed a stored procedure, I found that SQL Server was truncating a string stored in a varchar variable.
This stored procedure needed to dynamically build a SQL statement, based on a generated SQL string stored in a database table, and then execute it. The generated SQL string was much longer than anticipated when the stored procedure was originally set up; as a result, the generated SQL string was being cut off in the middle, and the final SQL to execute was invalid.
To fix this, I modified the variable declaration for @searchQuery to be varchar(8000) and then modified the final executable SQL variable to be nvarchar(max). Without the nvarchar(max) setting, the final SQL string just kept getting cut off in the middle, producing invalid SQL statements.
Luckily, the generated SQL was invalid and simply failed, giving me an error in the SSIS package. However, I can see how an unexpected SQL statement could be generated in the variable, with very bad results if it happened to be a valid statement that didn't do what you expected (for example, a DELETE statement missing part of its WHERE clause).
Fixed stored procedure example:

ALTER PROCEDURE [dbo].[sp_exclude_trams_rescard_matches]
	-- Add the parameters for the stored procedure here
	@segment_id varchar(20),  -- assumed declaration; the procedure references @segment_id below
	@searchQuery varchar(8000) = N'' OUTPUT
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

	-- get settings for segment
	SELECT @searchQuery = searchQuery from emcDB_segment_travel_history where segment_id = @segment_id

	-- Insert statements for procedure here
	DECLARE @SQL nvarchar(max)

	IF LEN(@searchQuery) > 0 AND @searchQuery <> ''
	BEGIN
		SET @SQL = 'UPDATE emcDB_emc_segment_clients set exclude = 1, exclude_reason = ''Rescard Exclude'', rescard_exclude = 1 where segment_id = ' + @segment_id + ' and client_id in ( ' + @searchQuery + ') and agency_id not in (select agency_id from emcDB_emc_trams_exclude_overrides where segment_id = ' + @segment_id + ') and agency_id in (select agencyid from emcDB_members where emc_rescard_exclude_dest_date = 1)'
		EXEC sp_executesql @SQL
	END
END
The lesson here is to make sure that your variable definitions will allow the full SQL statement to be generated before you dynamically create and execute.
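The underlying truncation behavior is easy to reproduce in isolation. A minimal sketch (the statement text is just a stand-in):

```sql
-- Assigning a long string to a short varchar variable silently truncates it
DECLARE @short varchar(10);
DECLARE @full varchar(8000);
SET @short = 'UPDATE emcDB_emc_segment_clients set exclude = 1';
SET @full  = 'UPDATE emcDB_emc_segment_clients set exclude = 1';
SELECT LEN(@short) AS short_len, LEN(@full) AS full_len;
-- @short now holds only the first 10 characters ('UPDATE emc'),
-- with no error or warning raised at assignment time
```

Unlike an INSERT into a too-small column, variable assignment gives you no truncation error at all, which is why the problem only surfaces when the mangled SQL finally executes.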
As you may or may not know, I am an avid fan of database design. This morning I read an excellent editorial I just had to share:
Here's an excerpt: "...So, just as we no longer code in Assembler, then C, followed by C++, now we program in bloated languages like Java and C#. Not because either model is superior...because of Cost. At the end of the day, How Much Working Code can you produce and how much did it cost you? If you can purchase hardware for half the cost of writing smaller, faster, more efficient software, then maybe the best solution is to throw hardware at the problem."
I thought the article made an excellent point: we really do just throw hardware at problems more so now than at any time in the past, and many of the conventional old-school DBA tasks can now be handled by web developers themselves. What do you think?
Sometimes you'll want to sort rows in your SQL queries in an order other than what the default sorting methods provide. That's just the situation I found myself in the other day, and I didn't want to create a new column just to sort on. Take the following SQL:
ORDER BY user_rank
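One way to get a fully custom order without adding a column is a CASE expression in the ORDER BY clause. A sketch, with hypothetical rank values:

```sql
-- Sort specific ranks in a hand-picked order, everything else last
SELECT user_name, user_rank
FROM users
ORDER BY CASE user_rank
             WHEN 'admin'     THEN 1
             WHEN 'moderator' THEN 2
             WHEN 'member'    THEN 3
             ELSE 4
         END,
         user_name;
```

The CASE maps each value to a sortable number, and the trailing user_name acts as a tiebreaker within each rank.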
I have spent countless hours of my life staring at SQL code. While I made every effort to keep my own SQL neat, it's not always easy to decipher errors in complex queries - especially those in legacy code or written by developers who don't share my passion for code formatting.
Today I was fortunate to find an excellent SQL prettifier that analyzes code (like a lint-style tool) and formats it with proper capitalization, indentation, and so forth.
Here's the link; I hope it saves somebody time in debugging and managing SQL code as it has just done for me.