Getting the Right Context – Part 2

Last time, I introduced the concept of context within DAX expressions by using PowerPivot to calculate columns and measures. We saw that the default context for column calculations is row context, while the default context for measures is filtered context. However, I ended the discussion by showing that within a measure calculation, I could use row context with certain aggregate functions like SUMX(), which applies an expression across all the rows of a table. Before that expression is evaluated, though, the table is automatically filtered, because using SUMX() in a measure begins by applying the filtered context of the pivot table to the rows the expression uses.

So this time, let's dive a little deeper. Let me begin by going to the FactSales table of my Contoso data model and calculating a measure for total sales. This is not difficult and can be achieved by using the SUM() function as shown in the following figure.
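
To make the expression concrete, a minimal sketch of that measure (the measure name here is my own; the column is [SalesAmount] in FactSales) would be:

  Total Sales := SUM(FactSales[SalesAmount])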

Now let's assume that I want a measure that shows only the sales made through the store channel. At first, I might try to use the SUMX() function along with the FILTER() function. Like SUMX(), the FILTER() function has two parameters. The first parameter is the name of the table I want to apply a filter on. The second parameter is a filter expression. I may at first assume that, because I created a relationship between the FactSales table and the DimChannel table, I can simply reference the column [ChannelName] and compare it to the string "Store" to filter the FactSales table. However, as you can see in the following figure, this expression results in an error.
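
A rough sketch of that failing attempt (the measure name is just for illustration) would look something like this; the direct reference to the channel name column is what triggers the error:

  Store Sales := SUMX(FILTER(FactSales, DimChannel[ChannelName] = "Store"), FactSales[SalesAmount])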

The reason for the error is that while the FILTER() function references the table FactSales, there is no context to link records in FactSales to DimChannel. I know you might ask, "Doesn't the relationship between these two tables define that context?" The answer is that although the relationship defines the 'mechanism' for connecting the two tables, it does not activate a context linking each row in FactSales to a row in DimChannel. When pointing from the many side of a relationship to the one side, we must use the RELATED() function to activate that context within the expression. I show this in the following figure.
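
A sketch of the corrected measure, with RELATED() reaching across the relationship to the channel name (again, the measure name is just for illustration):

  Store Sales := SUMX(FILTER(FactSales, RELATED(DimChannel[ChannelName]) = "Store"), FactSales[SalesAmount])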

You can see that I now have total sales for just the stores as a measure. If I use this measure in a pivot table that displays sales by month and by product category, I will have the additional filter of sales by store in each of the pivot table cells.

What if, however, I wanted to create a calculated column in the table DimChannel that displayed the total sales for that channel? Again you might start with the SUMX() function because you want to calculate an expression from another table. In this case, the first parameter of the SUMX() function would be the FactSales table and the column that we want to sum would be the [SalesAmount] column. However, if we were to create this column, we might be surprised by the result, shown in the following figure.
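
Sketched as a calculated column in DimChannel, that first attempt would be simply:

  = SUMX(FactSales, FactSales[SalesAmount])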

All of the values in the column are exactly the same. Furthermore, if I refer to the image earlier in this blog for the total sales across all channels, I would see that the value displayed here in each cell of the column is actually the total sales. Again the problem is context. There is no context to refer back to FactSales from DimChannel. Therefore, when SUMX() evaluates the [SalesAmount] in table FactSales, it pulls values from all the sales records, not just the sales from the channel represented by the current row in DimChannel.

In this case, because I am going from the one side of the DimChannel relationship to the many side of FactSales, I need to return a table that contains just the rows from FactSales that represent sales from the channel in the context of the current row. I can do this by using the RELATEDTABLE() function which uses a single parameter, the name of the table on the many side of the relationship. I must also have the relationship explicitly defined between DimChannel and FactSales. I have already done this. So Power Pivot can use the relationship to create a subset of rows from FactSales for the current channel. I can then use this resulting table in the SUMX() function to sum the [SalesAmount] column as shown in the following figure.
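
A sketch of the corrected calculated column in DimChannel, with RELATEDTABLE() restricting FactSales to the rows related to the current channel row:

  = SUMX(RELATEDTABLE(FactSales), FactSales[SalesAmount])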

As you can see in the figure, the first row which represents the store channel displays the same sales total as we calculated from the measure in FactSales earlier.

So again, you can see that there are many different ways to define the context of an expression. When dealing with multiple tables, it is important to understand whether you can perform a row context calculation by using the RELATED() function, which extends the row context to the related table on the lookup side of the relationship, or whether you need the RELATEDTABLE() function, which uses the row context of the current row to filter the rows of the many-side table before an aggregate function like SUMX() calculates the value for the column.

Next time, I'll look at some functions that let you turn off a filtered context and show where you might use them.

C’ya next time.

Can You Put That Into Context For Me?

Over the past several years, I've conducted many sessions on how to use DAX in PowerPivot to create columns and measures. Throughout this time, the one thing that has confused many of the people with whom I have talked is the concept of context for an expression. Many do not understand why some equations appear to work for defining new columns while other equations only appear to work when calculating measures within a pivot table. This week, and perhaps for another week or two, I am going to take a shot at explaining the concept of context. Until you get a good understanding of context, many of the DAX expressions for columns and measures may appear to be magic that somehow gets the correct answer even though you do not understand why.

Let me start by defining the two major context types that DAX expressions have to deal with:

  1. Row Context – This is probably the easiest context to understand. When you define a formula to calculate the value for a new column, the formula is applied to each row individually. In fact, the value calculated can depend only on the value of other columns in the current row. This is much like the way you calculate a new column value in Excel. However, unlike Excel, it is more restrictive. Using Row Context, it is not possible to directly access the values in a prior or successive row because those terms do not really have meaning within a table in the data model of a pivot table.
  2. Filtered Context – This is probably easiest to understand when you think of the entire set of rows in a table. If you sum a numeric column or simply count any column in the table, you are applying a filtered context. Of course, in its simplest form, the filter is really the same as all of the rows in the table, or no filter at all. But let's assume for a moment that you have a sales table that includes sales from each of your stores in each of the states of the United States. Furthermore, you only want to know the sales for stores in the state of Florida. In database terms, you apply a filter using a WHERE clause to include only those rows from the sales table in which the sales came from a store in Florida. Of course, you can have more than one filter. You might filter not only on the state, but also on a specific product that was sold during a specific month of a specific year. That would require four filters to be applied to the table to reduce the total number of rows of sales to only those you wanted to sum. Filtered context is often used in expressions that involve summing, counting, averaging, finding a minimum or finding a maximum. These are also referred to as aggregate functions.

So let's see how this works in DAX. Using my Contoso sales table, suppose I want to calculate the profit for each sale. Since I want to perform the calculation per sale, and because my sales table has one row for each sale, I can use a calculated column. The formula for my calculated column begins by taking the value of the column [SalesAmount], which is the money received from the sale. This is not the profit. To calculate profit, I need to subtract costs from [SalesAmount]. To do this, I can subtract the value of the column [TotalCost]. This gives me a pretty good idea of what my potential profit might be. However, in the Contoso data, I also have to consider any returned items. These are items returned because they are damaged or do not work. The policy is to refund the customer their money. However, the returned product is not worth anything and is considered trash. Therefore, I also need to subtract the [ReturnAmount] from [SalesAmount]. If I define a new column in my sales table, I can use the formula shown in the following figure to calculate the profit for each of the sales.
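
Based on the columns just described, the calculated column formula is simply:

  = [SalesAmount] - [TotalCost] - [ReturnAmount]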

As you know, the PowerPivot data model applies the column formula to each record in the table. I can then use this column in pivot tables to display profit for sales by any of the appropriate dimensions such as time, location, product, etc.

Fortunately, in PowerPivot for Excel, displaying profit in the pivot table is as simple as moving the column [Profit] to the Values area in the field list. PowerPivot lets us use any numeric field in the values area without any additional work. However, if I attempt to create the same model in Analysis Services, I must create a measure as shown below.
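
A minimal sketch of that measure, using the Total_Profit name that appears later in this post:

  Total_Profit := SUM(FactSales[Profit])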

The SUM() function is an aggregate function and therefore defines a measure, never a column. Note however, I must still first create the column [Profit] as previously defined. I cannot simply create a measure that sums the result of the expression as shown in the following figure.

This is because the calculation [SalesAmount] – [TotalCost] – [ReturnAmount] is a row context expression since it must be calculated on every row of the table. However, the SUM() function is a filter context expression (even if the filter is all the rows). I cannot combine the two in this expression without getting the Semantic Error shown above. The SUM() function can only aggregate the values of a single column, not aggregate the values of a calculation.

What I can do, however, is use one of the other DAX functions, SUMX(). This function has two parameters. The first parameter is the name of the table I want to perform a calculation on; in this case, the table name is FactSales. The second parameter is an expression that can include one or more columns from the designated table. In this case, I can use the expression I previously used to calculate the [Profit] column, as shown in the following figure.
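
Using the Total_Profit_2 name that appears later in this post, the SUMX() version of the measure sketches out as:

  Total_Profit_2 := SUMX(FactSales, [SalesAmount] - [TotalCost] - [ReturnAmount])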

As you can see in this figure, the resulting total profit across all the records in the FactSales table is exactly the same whether I use the expression SUM([Profit]), which sums the calculated column, or the expression SUMX(FactSales, [SalesAmount] – [TotalCost] – [ReturnAmount]). This second expression essentially calculates the row context expression for profit on each row in the table FactSales. However, as a measure rather than a calculated column, the expression is filtered by any dimensions used in the pivot table.

In the following figure, the only dimension is sales channel. Therefore, with four sales channels, there are four rows. Note that both calculations of total profit result in the same values for each channel.

What is the difference between the two? Because all four sales channels were included, all of the sales records were used to build this table, so effectively the same number of calculations were performed in both cases: calculating the profit for each sales row and then summing the profit for each channel. The only real difference is that the first method requires the prior creation of a [Profit] column, which takes some of the precious data model memory, while the second method using the SUMX() function does not require that column.

But what if we applied other dimensions and perhaps, by using a slicer, displayed the profit from only some of the sales? In the following example, the row dimension displays countries. However, only countries in Asia appear, based on a slicer that limits the continents for which I want to see profits. I also included a slicer on year to limit my analysis to only two years. Therefore, in the following figure, the profit amounts represent only a subset of the total profits for the Contoso organization.

How does this affect the calculation? To calculate Total_Profit, I still need the column [Profit] to calculate a value for every sales row in the FactSales table, even rows that I will not use. On the other hand, to calculate Total_Profit_2, the filter on continent and year is applied first, limiting the number of rows from the FactSales table that need to be calculated. The calculation of profit is then performed only on this filtered subset of rows and then aggregated appropriately based on the dimensions shown.

Thus, you might say for now that column calculations are generally row context expressions and measures are filter context expressions. However, some functions, when used in measures, let you perform a row context expression after applying a filter context. These include the SUMX() function and the other aggregate functions that end in 'X', such as COUNTX(), AVERAGEX(), MAXX(), and MINX().

Next time, I will dive a little further into understanding context within DAX.

C’ya next time.

It’s a Fiscal Thing

In past blog articles, I've talked about the need for a date table, not just for BI/PowerPivot analysis, but for any application. You might be hard pressed to come up with many applications you have written or have used in your organizations that do not include some aspect of time as one of the dimensions by which you collect data and/or report on it. Therefore, building a Date table once that you can use for many different applications can save you a great deal of time when trying to determine how to group your data for different reporting periods or to display different date labels along one of your dimensions.

One of the easiest ways to build a date table that you can use anywhere is to begin with Excel. It is a simple matter to create a column called DateKey and enter the first two dates of a large general purpose date range as shown in the following figure.

Then to create any size table, simply select the first two cells and drag the bottom right corner down to any number of rows you want. Given Excel’s limit of a million rows, this should cover you for most applications.

Once you have the first column, you can add columns for other ways of displaying date information. The following table shows a few of the more common expressions I have used to create additional columns.

<insert table here>
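
As an illustration only (these are common examples, not necessarily the exact expressions from the original table), columns like the following can be added with Excel table structured references against the DateKey column:

  Year:            =YEAR([@DateKey])
  MonthNumber:     =MONTH([@DateKey])
  MonthName:       =TEXT([@DateKey],"mmmm")
  DayOfWeekName:   =TEXT([@DateKey],"dddd")
  CalendarQuarter: =ROUNDUP(MONTH([@DateKey])/3,0)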

If you use a standard calendar, January 1 through December 31, the functions provided work well to calculate the names of months, days of the week, quarters, years, etc. However, if your organization uses a fiscal calendar, some of these calculations are not as easy. For example, suppose your organization uses a fiscal year that begins October 1 and goes through September 30th. The calculation for month number is not as simple as using the MONTH() function. Nor is the calculation of the year simple, because the fiscal year for 2015 may in that case go from October 1, 2014 through September 30, 2015. So I thought you might like to see a few functions that could help you calculate fiscal columns in your date table.

Let's start with the calculation of the fiscal year. Using the example date range above, how can I calculate the fiscal year from any given date? If I add 3 months to any date and then calculate the year using the YEAR() function, I get the fiscal year. Why three months? Because the first three months of the fiscal year (October, November, and December) fall in the prior calendar year. If my fiscal year started July 1, 2014 and went to June 30, 2015, I would add 6 months to the date before calculating the fiscal year using the YEAR() function. The following figure shows the column expression I need to create a CompanyFiscalYear column.
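
As an Excel table formula (assuming the DateKey column above and using EDATE() as one way to shift a date forward three months), that expression sketches out as:

  CompanyFiscalYear: =YEAR(EDATE([@DateKey],3))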

Calculating the year was not that difficult. However, calculating the month number of the fiscal year is a bit more complex. Why do I need a fiscal month number? Remember how I used the month number column as the sort-by column for the month names so that the months appear in the correct order. I have a similar problem when dealing with fiscal calendars. In the above example of a fiscal year beginning October 1, 2014, the order of the month names must be:

  • October
  • November
  • December
  • January
  • February
  • March
  • April
  • May
  • June
  • July
  • August
  • September

I can calculate the fiscal month number using the expression shown in the following figure. Note that it is a bit more complex than the expression to calculate the fiscal year. Where I'm using the number '9', I'm really using '12 - 3', in which '12' is the number of months in the year and '3' is the number of fiscal months that fall in the first calendar year, which in this case are October, November, and December. Therefore, if my fiscal calendar started in July, I would use '12 - 6', or '6'. A similar argument explains why I need a '3' at the end of the expression for months in the second calendar year, to increase the fiscal month number of those months appropriately. Again, I would use a '6' for a fiscal year starting in July because there are 6 months in the first calendar year that are part of the current fiscal year.
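
For the October–September fiscal year, that logic can be sketched as the following Excel formula (again assuming the DateKey column; the '9' and '3' would become '6' and '6' for a July start):

  FiscalMonthNumber: =IF(MONTH([@DateKey])>=10, MONTH([@DateKey])-9, MONTH([@DateKey])+3)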

The final column I’m going to show today is the calculation of the fiscal quarter. In my October-September scenario, the first quarter of the fiscal year includes the months October, November, and December. Since I already have calculated fiscal month numbers, I can simplify my calculation of the quarter by simply dividing the fiscal month number by ‘3’ and rounding the result up to the next whole number if necessary using the ROUNDUP() function. Thus the following figure shows an easy calculation of the fiscal quarter.
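
Assuming the fiscal month number column from the previous step is named FiscalMonthNumber, the quarter calculation sketches out as:

  FiscalQuarter: =ROUNDUP([@FiscalMonthNumber]/3,0)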

That’s all for this time. I hope this helps you create useful date tables with not just traditional calendar dates and calculated fields, but also fiscal calendar fields that you can use in your pivot tables and reports.

C’ya next time.

I Like Your Moves

Moving Files between SharePoint Libraries

You may occasionally encounter the need to move a file from one SharePoint library to another. While there is the brute force method (download the file to your local machine and then upload it back to the new library which may be in a different site), there are at least two other easier methods.

For the first method, you need to know the URL of the library to which you want to move the file before you begin the process. Then follow these steps.

Step 1: Navigate to the library containing the file you want to move. For the purposes of this article, I'm going to start from a library in our team collaboration site to move a file to our intranet site. The specific starting and ending locations are not important here; rather, the technique is what matters.

Step 2: Locate the file you want to copy or move from one library to the other and hover over the right side of the document’s Title. Click the down arrow to open the document menu as shown in the following figure and select the option: Send To and then select Other Location from the secondary fly out menu.

Step 3: This opens the Copy dialog in which you need to specify the destination library or folder as well as the filename to be used in the destination library. (If you do not know the URL of the destination library, you may need to close this dialog and find that URL first. Don't say I didn't warn you.)

Step 4: Note that the default location is the site in which the current library resides. If you simply want to copy a file from one library to another library within the same site, you can just add the library name to this URL. If the library name has spaces, you must replace each space with the characters %20. The following figure shows that the file is being copied to a library named: “Server Training”. It also shows that the name of the file will be changed to ‘Add a new user role’. (Yes, this means that you can create a copy of a file in the same library by giving the copy a new name.)

Step 5: When you click OK, the Copy Progress dialog appears. You can review the information about the copy here before you press OK. After you click OK, the copy proceeds and, if successful, updates the Copy Progress dialog with the message: The copy operation was successful.

If you want to move rather than copy the file, you must go back to the source library and manually delete the original copy of the document.

If the copy fails, either because you entered the URL incorrectly, you do not have Add rights to the destination library, or a file with the same name already exists in the destination library, you will get a message that the copy operation failed.

This method may be okay if you only have a few files to copy, but if you have a large number of files, copying them one at a time from one location to another can be time consuming. A better way to transfer multiple files from one location to another may be to open the library using Explorer. To do this, follow these steps.

Step 1: Open the source library and display the Library ribbon. In the Connect & Export group, locate and click on the Open with Explorer command/icon. (I could be wrong, but I believe you must have Internet Explorer on your local machine for this to work.)

SharePoint opens an instance of Windows Explorer that looks just like the display of any local folder on your desktop, but displays the contents of the SharePoint site.

Step 2: In a similar fashion, open another tab on your browser or even another browser instance and navigate to the destination library. Again open the library and then click Open with Explorer from the Connect & Export group of the Library ribbon. You should now have two separate instances of Windows Explorer open, each displaying a different SharePoint library. Now either close or minimize your browser windows to get them out of the way.

Step 3: Arrange the two Windows Explorer instances so you can see the source file as well as the destination library.

Step 4: Now simply click on the file or files you want to move and drag them from the source window to the destination library.

Note, as the image below shows, you can even copy SharePoint files back to your local computer by opening Windows Explorer from your local computer and navigating to the folder where you want to copy or move the files.

If you just click on the files and drag them from one window to another, the files will be copied. However, if you drag the files by pressing and holding down the right mouse button while you drag the files, when you release the mouse button you will be prompted with the following dialog.

In this menu, Copy here is bolded because it is the default option. However, by selecting Move here, you can copy the files to the new location and have the originals automatically deleted from their original library.

You can even create shortcuts to the files and place them on your desktop by pressing the right mouse button, dragging the files from their source to the Desktop within Windows Explorer, and then selecting Create shortcuts here. Your copy of Windows Explorer may also support other options, like the ability to create a zip file from the copied files and place just the zip file in the destination folder.

That’s all for this week.

C’ya next time.

Populating Your Data Model with A Query

Over the last two weeks, we talked about denormalizing your logical data model in PowerPivot to help make it easier for users to understand and work with (https://sharepointmike.wordpress.com/2015/03/21/your-logical-data-model-is-not-normal/) and (https://sharepointmike.wordpress.com/2015/03/28/dealing-with-blanks-in-your-data-model/). However, in each of these models, we did all the data manipulation inside the data model. That left a lot of technical fields and tables that we could not remove from the model but which took extra memory, which in turn could limit the size of our data model. So how do we solve that problem? We could do much of the data manipulation outside of the data model, or as part of the import step. Let's see how.

I’m going to start by importing product data from the Access database version of Contoso into the Power Pivot data model.

In the Table Import Wizard, I'm going to select the option to write a query that will specify the data to import. This option allows me to combine data from the dimProduct, dimProductCategory, and dimProductSubCategory tables into a single table. I can also embed the calculation to fix the blank Class fields we visited last time.

Clicking Next, I see the Specify a SQL Query dialog. This is an intimidating dialog. I could manually enter a SQL query here. However, I would have to know all the field names from all the tables that I want to bring together. Not impossible, but there has got to be an easier way. Wait. What is this Design button in the bottom right of the dialog?

When I click the Design button, I get an equally intimidating dialog. It has an area at the top where I can again manually enter a SQL statement and a grid area at the bottom to display the results of the SQL statement if I test run it by clicking the exclamation point (!). Obviously, this screen is not very friendly for building Access database queries, nowhere near as friendly as the query builder inside Access itself.

If I go back into Access and build my query there using the query builder, I can visually design my query. First, I add the tables I want to use from the Show Table dialog that appears when I open the Query Designer from the Create ribbon. Then I can add the fields I want to include from any of the included tables by double clicking on their names in the tables at the top of the dialog.

I can even create my calculated column here by entering a new name in the Field row of the design grid followed by a colon (:). Then I can use the same expression as I did last time to test the value in the ClassName field.

NewClassName: IIF(([ClassName]=""),"[No Class]", [ClassName])

Note that the IIF() function in Access has a double ‘I’. Otherwise, it works the same. Also note that I’m using a different name for the new column here to keep it separate from the field in the table dimProduct.

After I have finished creating the query, I can test it by clicking the Run button in the Query Tools Design ribbon. If the query executes correctly, I am ready to copy my query to my Power Pivot data load screen. If I open the View dropdown menu in the Query Tools Design ribbon, I see an option for SQL View as well as some other views.

This option displays the SQL statement which I can copy and then paste back in the Excel Data Model Table Import Wizard as shown below.

If I were to try to run this query directly, it would fail. Admittedly, it took me a while to figure out why. I finally found that it did not like one field definition: dimProduct.Size. To test what was going on, I clicked on the Design button to get to the screen that displays both the SQL query and a grid of results from testing the query. After some testing, I found that if I redefined this field as dimProduct.[Size], the parsing engine was happy and gave me the results I expected, as shown below. The square brackets clarify to the query parser that the text within the brackets represents a field name, not a command or reserved word.
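
Pulled together, the pasted query ends up looking something like the following sketch. The column names here follow the usual Contoso schema and may differ slightly from the figures; note the IIF() expression for the class name and the bracketed [Size] field:

  SELECT dimProduct.ProductKey, dimProduct.ProductName, dimProduct.[Size],
         IIF((dimProduct.ClassName=""), "[No Class]", dimProduct.ClassName) AS NewClassName,
         dimProductSubCategory.ProductSubcategoryName,
         dimProductCategory.ProductCategoryName
  FROM (dimProductCategory
        INNER JOIN dimProductSubCategory
        ON dimProductCategory.ProductCategoryKey = dimProductSubCategory.ProductCategoryKey)
        INNER JOIN dimProduct
        ON dimProductSubCategory.ProductSubcategoryKey = dimProduct.ProductSubcategoryKey;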

I then clicked OK for this screen and Finish on the next screen. I was rewarded with the data engine reading and loading my Product data. Note that it is a single table now. By including the name of the product category and the product subcategory in my query, I no longer need those two additional tables thus reducing the overall size of my data model a bit.

Next, I still need to load the Sales data from Contoso along with the date and channel tables. The following screen shows that I can open the Access database a second time to extract the other tables, leaving out the product and related tables.

After the FactSales table is loaded, I still need to create a relationship with the new Product table I pulled in with the query. Even though FactSales and dimProduct were linked in the original Access database, by loading the tables separately, the relationship is not automatically discovered. Therefore, I have to use the Create Relationship dialog to link these two tables.

If I were to display the diagram view of my Power Pivot data model, you can see that the model has been simplified down to only four tables rather than six as we used before.

If I now build the same pivot table that I did last week to show sales by class, you can see in the following figure that the results are the same.

I can even create a hierarchy in my product table to go from category to subcategory to product. I covered building hierarchies before. After building the hierarchy, I strongly recommend hiding the supporting columns so as not to confuse the end-users. (See: https://sharepointmike.wordpress.com/2012/07/21/power-pivot-hierarchies-part-2-of-powerpivot-2012-series/)

Denormalizing your data as you import it may sound a bit more complex, but it can save you memory, and Power Pivot's main limitation is that it must hold the entire data model in memory. So the smaller you can make the model, the more data you can load into it.

Some people might ask how far to denormalize the model. Of course, in theory, you could denormalize to a perfectly flat single table. However, this is not practical, as it can seriously limit the flexibility of the model and can even cause other data errors. I tend to think in terms of model objects. For example, Products are an object in the model. Therefore, collapsing the Product Subcategory and Product Category tables into Product makes sense to me. If I had tables for Stores, Cities, Countries, and Geography, I might be tempted to collapse them together as well, since they represent a Location object for sales. On the other hand, I would not denormalize all the product information into the FactSales table.

So if you are thinking that there is no hard and fast answer to when you should denormalize a model, that is somewhat true. It is a bit of an art. However, if you have a table that is used just as a lookup for a name, or perhaps one or two other associated fields, and it is not a role-playing table (see: https://sharepointmike.wordpress.com/2012/07/21/power-pivot-hierarchies-part-2-of-powerpivot-2012-series/), denormalize that table.

That’s it for this series. C’ya next time.

Dealing with Blanks in Your Data Model

Last time, I discussed some reasons why you might want to denormalize your data model when using PowerPivot (https://sharepointmike.wordpress.com/2015/03/21/your-logical-data-model-is-not-normal/). As you may remember, one reason is to make the model easier for the end user to understand. A second reason is that if you denormalize the data outside of PowerPivot, you can substantially decrease the model size by eliminating many if not most of the lookup tables and just adding the lookup information you need to the entity table, such as the Product table. I will talk more about denormalizing the data outside of PowerPivot, or at least during the process of importing the data, next time. This time, I want to look at another issue that sometimes occurs in your data and that could make your data harder for your end-users to interpret.

I'm talking about blank or null fields in the data. Oftentimes a table will have one or more columns which are optional. Take for example a product table like the one we are using from Contoso. Not every product may be sold by color. Some products may not belong to a style or a class. Some products may not have a measured size or a weight. In some databases, the corresponding lookup tables have a special entry for no color, no style, or no class so that every product can point to one and only one value in the lookup. However, you may have a lookup table that only contains specific values. If a product does not exactly match one of those values, the reference from the product table to the lookup table may be left blank. When users see a blank value, they may wonder whether someone just forgot to select a value or whether no value from the lookup table applied. In other words, they do not really know why the field is blank.

Suppose we start with the data model using Contoso data in the following figure. Note to anyone trying to reproduce this example, I went into the Product table ahead of time and removed the ClassID and ClassName from all products in which the color was ‘Blue’ just to create some records with blank values.

After bringing this data into an Excel PowerPivot data model, I create a simple pivot table to show the sales amount and sales counts by class. Notice that the three product classes that are defined include:

  • Economy
  • Regular
  • Deluxe

But you can see from the report that 87,597 of the over 2 million sales records had no class defined. From a user’s viewpoint, they do not know if someone forgot to enter the class for some of the products or if the definition of class does not apply to some products.

As you build your data model in Excel, you can fix this problem by replacing the blank values with something like the string: "[No Class]". The difficulty, however, is that after you load your data into the PowerPivot data model, you cannot edit individual cells. You can, on the other hand, create a new calculated column.

Since I want the new calculated column to have the name ClassName and because I cannot have two columns in the same table with the same name, I begin by renaming the existing ClassName column to ClassNameOriginal. The actual new name I use does not matter as long as it is different. Also note that changing the column names in the Excel data model has absolutely no effect on the name of the column in the original data source.

After renaming the original ClassName column, I add a new column to the end of the table called ClassName. For the calculation to replace the blanks from the original column, I need to know if those blanks are Null values or if they contain an empty string. If the original column used empty strings when a class value was not defined, I can use the following DAX expression to create my new class name value.

= IF(([ClassNameOriginal]=""),"[No Class]", [ClassNameOriginal])

This expression uses a simple IF() function to see if the value in the column ClassNameOriginal contains the empty string. If it does, the function outputs the string: [No Class]. Otherwise, the function outputs the current row value of the column ClassNameOriginal. This is shown in the following figure.

On the other hand, if the original ClassName column uses a NULL value when there is no corresponding class, the expression needed here must be defined with the ISBLANK() function to test for NULL values as in:

= IF(ISBLANK([ClassNameOriginal]),"[No Class]", [ClassNameOriginal])

Whichever method is appropriate for your table, you now have a column that you can use for the PivotTable. Before proceeding however, be sure to go back and hide the original class name column: ClassNameOriginal so as not to confuse users with two similar columns that they could use as dimensions in their tables.

Since I still have my original pivot table open, I can just switch over to the pivot table and it should automatically update to show that the blank class now has the name [No Class]. (Which goes to show you that having no class is better than just being a blank.)

That’s it for this week. By the time you read this I will be at Code Camp Orlando giving my presentation: Calculated Columns, Measures, and KPIs, Oh My!

C’ya next time when I will show you how to use the query designer while loading data to the PowerPivot model to fix some of the issues from this week and last week.

Your Logical Data Model is not Normal

If you have been reading this blog over the last several years, you know that I have been a strong supporter of using PowerPivot and the Tabular model to perform data analysis, even for power users, not just DBAs. What you may not have realized is that I've been secretly teaching you a little about data modeling. What is data modeling? It is really nothing more than the collection of all the data in tables and the relationships between those tables in a database. Did you know that there are at least two major 'types' of data models when it comes to how you structure your data into tables? Each one serves a different purpose. Therefore, it can reasonably be argued that neither one is more correct than the other. But you need to understand when to use each type. That is what I want to discuss today.

Most DBAs and developers who work with databases like SQL Server, Oracle, and many other common databases are comfortable with the relational model for creating tables and defining the relationships that connect them. In fact, they immediately start to normalize a denormalized database in their heads within seconds of seeing the data schema. The relational model relies on the application of the rules of data normalization introduced by Edgar F. Codd in 1970. In essence, the goal of the rules of data normalization is to minimize data redundancy, which also has the effect of decreasing the overall size of the database while at the same time making it easier to maintain information that would otherwise be repeated through many records.

There are three primary rules that are used to determine whether a database, a collection of tables, has been normalized. These rules are:

First Normal Form: No two rows of data may contain repeating data or groups of data. Such repeating data must be split into a separate but connected table. For example, a sales order may contain the purchase of one or more items. Because the number of items included in a sales order is not predefined, it must be split into a separate table with one row for each item on the sales order. These two tables are then typically connected by an order id.

Second Normal Form: This rule only applies to tables which have a compound primary index, an index built from two or more fields. In this rule, all other fields in the table must depend on the entire compound index value, not only a portion of it. A possible example of this might be a table that includes students at a school in which the primary index combines the school name (or id) with the student name (or id). Imagine that the table also includes the address and phone number of the school. This information is not dependent on the combination of the school and the student; it depends only on the school. Therefore, this violation of the second normal form requires that the data related only to the school be split into a second table that includes only school information.

Third Normal Form: This rule requires that every field not part of the primary index be dependent only on the primary index. Going back to my favorite Contoso database, one could argue that in the FactSales table, the Sales Amount field is redundant since this table also includes the sales quantity, sales price, and any sales discounts or returns. Why is it redundant? Because it can be calculated from the values of other columns in the record. Therefore, to fully achieve third normal form, this field should be removed.

While there are other special case rules that can be applied to normalization of a database, most DBAs will be satisfied with a database that satisfies these three rules. They will then build the tables in their database corresponding to these rules and thus create the physical data model. It is called the physical data model because it defines the physical schema of the tables and the relationships between them.

However, business users of the data don't look at the database that way. In fact, most business users would be baffled by the large number of tables required and the relationships between them. They will not understand why they need to combine data from a half dozen to a dozen tables just to answer a single query question. As an example, let's look at how a business user might think of just the products within the Contoso database.

The figure below shows the normalized data model for products and their subcategories and categories. As you can see from the schema, the only important piece of information in the subcategories table is the name of the subcategory. Similarly, the only important piece of information in the categories table is the name of the category.

Most business users would not think of category and subcategory names as part of separate tables, but as attributes of the product itself. In fact they would think of product information more like the following figure.

Imagine a database model with dozens of normalized tables and then try to envision how the typical business user sees the data with perhaps only a half dozen tables after denormalization.

In PowerPivot, we can address this issue in one of two ways. The first way would be to add the category and subcategory names to the product table in SQL Server before loading the data into PowerPivot. This would essentially make the physical data model the same as the logical data model used by the end-users. However, this data model would no longer be normalized.

Physically denormalizing the data tables is not the only solution. I could, as shown before in https://sharepointmike.wordpress.com/2012/07/21/power-pivot-hierarchies-part-2-of-powerpivot-2012-series/, build two calculated fields in the product table that use the RELATED() function to get the category and subcategory names and display them in the product table. I could then hide the two tables, dimCategory and dimSubcategory, from the user so that they would see a structure similar to their expected denormalized logical data model even though the data is still physically stored in a normalized structure.
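
A sketch of those two calculated columns in the product table (the exact table and column names depend on how the lookup tables are defined in the model):

  = RELATED(dimCategory[CategoryName])
  = RELATED(dimSubcategory[SubcategoryName])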

The advantage of making the changes to the data model outside of PowerPivot is that it effectively reduces the amount of data that PowerPivot must store in memory, thus potentially allowing a larger model to be created. The advantage of making the changes inside PowerPivot by using the RELATED() function and then hiding the technical tables that are still needed in the model but do not need to be seen by the client is that it preserves the sanity of those DBAs who cringe every time someone tries to denormalize their data. Both methods will allow me to create similar pivot tables in my Excel spreadsheet. Thus both methods can be considered correct.

The thing to remember is this. Data Normalization was developed to make data storage and maintenance easier and more efficient. However, data analysis and reporting often requires a fair amount of data denormalization. This is especially true when performing data analysis using reports, pivot tables and charts, and even third party data analysis tools. In fact, the logical view of the data schema aids in the performance of most data analysis. Just remember that your logical model is not a normalized model.

C’ya next time.