My First Look at Azure

Azure is a cloud-based SQL Server service hosted by Microsoft and has been available since 2009. So what makes this news important or interesting to you?

Well, recently Microsoft added Windows and SQL Azure to the Visual Studio MSDN subscription. That's right, if you have a subscription to Visual Studio MSDN, you can create and use small Azure databases for free to test, learn and show others how to run SQL in the cloud. How much free capacity you qualify for depends on your subscription level. The three levels are:

  • Visual Studio 2010 Ultimate with MSDN
  • Visual Studio 2010 Premium with MSDN
  • Visual Studio 2010 Professional with MSDN

I’m not going to list the monthly capacity limits here because you can find them for yourself by logging into your subscription.  Furthermore, who knows, they might change over time.  After all, it is Microsoft.

This free use of Windows and SQL Azure with an MSDN subscription is not meant for production systems. Rather, I see it as a way to test out the capabilities and determine how SQL in the cloud might be an alternative for some projects, rather than purchasing new hardware for every project. One warning: you must provide a valid credit card when you register for your free Azure account. That may sound a little odd at first, but rather than stop your application dead in its tracks if you exceed the monthly limits, Microsoft will simply start billing your credit card for any usage over the monthly maximum free limits.

While I’ve only started to look into Azure, let me give you some quick impressions.

  • Azure is a great way to hedge your hardware investment: with a pay-as-you-go model you can grow as needed rather than buying your maximum expected capacity right from the get-go.
  • Azure removes a lot of the hardware and network support work you would otherwise have to do yourself, or pay someone else to do, just to keep SQL running.
  • You still need DBA skills for managing schemas, indexes, statistics and creating queries and stored procedures.
  • You may not need to worry about disaster recovery because Microsoft provides automatic failover and growth support, as well as making sure that all system and application patches are up to date.

Some things that Azure does not provide are:

  • No SQL Agent, so there is no job automation in the cloud at this time.
  • No replication, database mirroring or log shipping, but then you are not responsible for hardware or network uptime; Microsoft is. Don't you trust them?
  • You do not have access to Profiler to help optimize your processes, but you can still optimize your system via normal SQL best practices.
  • No Reporting Services or Analysis Services yet. (And I was just getting into Analysis Services for pivot tables.)

Overall, it is worth checking out if for no other reason than to get an understanding of how it works. That's what I'll be doing over the summer. In future releases of Azure I'm sure we will see the capabilities grow, so getting a better understanding of how SQL in the cloud works now will give you a head start later when you decide it is appropriate for your organization to jump to the cloud.

Building a Comma Delimited String from Data in a SQL Table

For those of you who are also into SQL Server and haven't checked out www.SQLShare.com yet, you really should, if for nothing else than the daily dose of SQL tidbits you can get in your email inbox. Each workday a different video tip is posted on a SQL Server-related topic. You may not be interested in every one, but I'm sure you will find a couple each week that you will want to watch. Most are only about 2-3 minutes and therefore don't take a lot of time out of your day. Occasionally, there may be one or two that run between ten and fifteen minutes.

In today's video, Plamen Ratchev gave a short (less than 3 minute) presentation on concatenating the values from a single column across the rows in a table using FOR XML PATH. It is an interesting technique, but not a lot of people have dug deeply enough into XML and its related commands to have discovered it (I'll include a quick sketch of that approach after building the sample table below). What I'm going to describe here is another method, one that I use when creating dynamic pivot tables in SQL Server. It uses the COALESCE function.

First you need to build a sample table.  I’ll use the same code as Plamen to build a simple product table:

CREATE TABLE Products (
  sku INT PRIMARY KEY,
  product_desc varchar(35));

INSERT INTO Products VALUES (1, 'Book');
INSERT INTO Products VALUES (2, 'Magazine');
INSERT INTO Products VALUES (3, 'DVD');
INSERT INTO Products VALUES (4, 'Video');
INSERT INTO Products VALUES (5, 'CD');
INSERT INTO Products VALUES (6, 'Map');
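
Before getting to the COALESCE method, here is a minimal sketch of the FOR XML PATH approach from the video. This is my reconstruction of the general technique, not Plamen's exact code:

-- A sketch of the FOR XML PATH concatenation technique
-- (my reconstruction, not the exact code from the video)
SELECT STUFF(
  (SELECT ', ' + product_desc
   FROM Products
   ORDER BY product_desc
   FOR XML PATH('')), 1, 2, '') AS product_list;

The subquery emits every product_desc prefixed with ', ', and STUFF then removes the leading comma and space from the combined string.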

Next, you need to write the code to build a comma delimited string containing the values from one of the columns in the table. The technique is similar to the one I use to build a comma delimited list of field names when creating a dynamic PIVOT statement (more on that at the end). It involves a temporary string variable, in which I progressively build the string value, together with the COALESCE function. The code you need is here:

DECLARE @Product_String varchar(8000)

SELECT @Product_String =
  COALESCE(@Product_String + ', ', '') + Product_Desc 
FROM
(
  SELECT Product_Desc
  FROM Products
) AS A
ORDER BY A.Product_Desc
PRINT @Product_String

COALESCE is a simple function that returns the first non-null value from its arguments. I use it in the SELECT statement because it lets me append each value, along with a comma separator, to a running string. To begin the process, I define a temporary variable, @Product_String, which I declare but do not initialize. This means the initial 'value' of the variable is NULL. Then I use COALESCE with two arguments: @Product_String + ', ' and the empty string ''. The first time through the SELECT, @Product_String is NULL, so the concatenation is also NULL and the function returns the empty string. I then concatenate the result of the COALESCE function with the value of the column I'm interested in, and this result is saved back into @Product_String.
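
To see that first-pass behavior in isolation, here is a quick sketch:

DECLARE @s varchar(10)           -- declared but not initialized, so @s is NULL
PRINT COALESCE(@s + ', ', '')    -- NULL + ', ' is NULL, so this prints an empty string
SET @s = 'Book'
PRINT COALESCE(@s + ', ', '')    -- now prints: Book,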

For the remaining passes through the table, the COALESCE function appends the comma separator and the next product name value until it reaches the end of the table. (Of course, you could limit the result by using a WHERE clause, as in the sketch below.)
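
For example, a quick sketch of that variation, filtering on the sample table's sku column:

DECLARE @Product_String varchar(8000)

SELECT @Product_String =
  COALESCE(@Product_String + ', ', '') + Product_Desc
FROM Products
WHERE sku <= 3                 -- only the first three products
ORDER BY Product_Desc
PRINT @Product_String          -- Book, DVD, Magazine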

Returning to the full query, the final result is shown here:

Book, CD, DVD, Magazine, Map, Video

While this string is comma delimited, it does not place the text values in quotes. However, that can easily be fixed by the following variation of the SELECT statement, which adds quotes around the individual values. (Note that the variable must start out NULL again, so run this as a fresh batch.)

DECLARE @Product_String varchar(8000)  -- fresh (NULL) variable for the new batch

SELECT @Product_String =
  COALESCE(@Product_String + ', ', '') + '"' + Product_Desc + '"'
FROM
(
  SELECT Product_Desc
  FROM Products
) AS A
ORDER BY A.Product_Desc
PRINT @Product_String

Which results in:

"Book", "CD", "DVD", "Magazine", "Map", "Video"

Hope you found that interesting. Remember to check out www.SQLShare.com to get your daily dose of SQL tips.

MagicPASS Meeting April 2011

Kendal VanDyke asked me last week if I would do one of my PowerPivot presentations at this month's MagicPASS meeting on April 20th because the scheduled speaker was not able to attend. I had already prepared my presentation for SQL Saturday Jacksonville on the 30th, so I figured, sure, why not.

Despite the fact that Excel has supported pivot tables and pivot charts for some time now, its built-in capabilities still limit the amount of data you can use, and working with multiple data sources is not convenient. But with the introduction of Excel 2010, Microsoft created an add-in called PowerPivot (originally code-named Gemini) which uses the Analysis Services VertiPaq engine and runs in-process in Excel. It allows you to import data from many different data sources when building the pivot table and/or chart, and it allows tables with millions of records. It compresses all of that data very highly and stores it in the Excel .xlsx file, although not in the same way that Excel stores its other native data.

From my point of view, PowerPivot is a great addition for business analysts to create different views of their data without having to write a bit of code. They can choose different dimensions (parameters, in a way) to look at, sort, and filter the data to answer many different questions. Why should a database administrator or analyst care? Because, while the business analyst can very quickly learn how to manipulate the pivot tables and charts for a given set of data to answer different questions, they probably cannot create the original data set or define the relationships to link the various data sources. As a DBA or analyst, you can spend just a few hours building the basic pivot table files (from individual tables, a data warehouse, or even an Analysis Services cube) and then show the analyst how to pick different measures from their fact table and slice and dice that data with dimension data.

If you are in Orlando and have time Wednesday, April 20th, come on down to Stetson University in Celebration by around 5 PM. If you cannot make it, I'll be doing a similar presentation at SQL Saturday Jacksonville on April 30th.

Jacksonville SQL Saturday – April 30th

I've been busy lately and have not had time to post anything major, but I do want to announce that I will be attending the Jacksonville SQL Saturday on April 30th, where I will be giving my presentation: Introduction to PowerPivot for Excel. OK, it is not a SharePoint conference, but SQL has been my 'other' favorite application since switching from Visual FoxPro in the mid-1990s. The short description of the presentation is:

This session focuses on the basics of how to use PowerPivot with Excel to create simple pivot tables (cubes) and pivot charts, using both data already in Excel and SQL tables as your data source. It will also cover how to add slicers, how to add calculated columns, and how to define hierarchies in your pivot tables.

For further details of SQL Saturday in Jacksonville and to register to attend, go to: SQLSaturday #74