Analyzing SharePoint Survey Results with Power BI

The last two times, I introduced some of the basics of using Power BI with web site data from a third party. This time, I am going to show you how you can retrieve data from SharePoint and analyze it with Power BI.

To get started, I am going to open a new Power BI Desktop instance and select More from the Get Data button to define a new data source, in this case a data source within my SharePoint farm.

When the Get Data dialog appears, I can scroll down through the available data sources until I encounter SharePoint List.

When I want to pull data from a SharePoint list, I need to enter the URL of the SharePoint site, not the URL for the list itself, as shown in the following figure.

I can navigate to my SharePoint site and capture the URL from the address bar at the top of the browser.

Note, however, that I do not need the entire URL; I only need the portion of the URL that defines the site. Therefore, I can remove the /Pages/default.aspx, which references the home page of the site.
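
For example, a full page URL in the address bar might look something like this (a made-up site purely for illustration):

https://portal.myschooldistrict.org/sites/StudentServices/Pages/default.aspx

while the portion I actually give to Power BI is just the site itself:

https://portal.myschooldistrict.org/sites/StudentServices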

When I click OK, Power BI will go out to the SharePoint site and look for the lists defined in the site. But before I get to that, I want to point out that Power BI did not ask me for my credentials when I went to this collaboration site. The reason is that this particular farm uses Kerberos authentication, so my Windows credentials were passed through to SharePoint, which allows me to see the contents of the site. If I did not have rights to the site, Power BI would not authenticate me or display the site's lists. A little later, I will show you what you may see if you don't use Kerberos to pass through your Windows credentials.

The next thing I noticed when the Power BI Navigator showed me the names of the available lists is that some of the items were not lists, at least not strictly speaking. When I looked at the names, I noticed that some of them were from the Discussion Board area and some were from the Surveys area. Interestingly, the Navigator does not treat libraries as lists. Therefore, if I use a form tool like InfoPath, which writes its metadata back to a form library, I do not believe at this time that I can access that data from Power BI.

On the other hand, Power BI could provide an interesting way to access and analyze survey data. I just happen to have a survey result list available to take a look at, so I begin by entering the URL of the survey site.

This instance of SharePoint is not currently set up to work with Kerberos. Therefore, I get the Access SharePoint dialog shown below by clicking Windows authentication in the left-side menu. In this case, I need to switch from using my current credentials to using alternate credentials and then supply the appropriate username and password. I still want the full address of the survey site, as before. Then I can click Connect.

Now Power BI can successfully interrogate my site to look for lists and display them in the Navigator as shown below. For this demo, I only need the single list Senior Exit Survey. (Note that all blanks are removed from the list names.) In a future blog, I will demo how to work with multiple lists/tables from a single source, so be patient.

After selecting a list, the Navigator displays a preview of the data in the right-side panel. Note that the column names, like the list names, have had all blanks removed. Actually, the Navigator removes other non-alphanumeric characters as well. In addition, column names cannot begin with a number. If you have column names that begin with a number, Power BI will automatically prefix those names with "c_".
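
As a hypothetical example of those naming rules, a survey column entered as "2nd Choice School" might come through as something like c_2ndChoiceSchool: the blank is removed and, because the name starts with a number, the c_ prefix is added.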

Another thing I noticed is that the preview only shows the first several lines of data and then displays the text: ‘The data in the preview has been truncated due to size limits.’ Don’t worry about this. The data is all there.

After I click the Load button, Power BI starts to load the data into its data model. Depending on the number of columns and rows, this process can take from a few seconds to a minute or more. However, eventually the load completes and I can see the table by clicking on the Data icon along the left side of the screen and then clicking on the table I want to view (if I had more than one table).

Note that I could click on the Edit Queries button at this point to clean up some of the data getting rid of columns that I will not need, but I’ll skip that step for the sake of keeping this blog a little shorter.

Next, I click the Report icon in the left margin of the desktop. Here I can see the columns in my table. If there were multiple tables, I could collapse and expand the column list for any table by clicking the arrow to the left of the table name.

Let's say that I want to plot the number of each response to the question: 'I clearly understood the requirements for graduation.' I can click the check box to the left of that column to identify the question and then click the check box to the left of the ID column, which I can use to count the responses.

Note that by default, Power BI may try to sum the ID value. Remember that last week I showed you how to change the aggregation of a value from SUM to COUNT.
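
If you prefer to make that counting explicit, you can also define a small DAX measure rather than relying on the field's aggregation setting. This is only a sketch; the table name (SeniorExitSurvey) and the measure name are my own assumptions about how the list loads:

Response Count = COUNT(SeniorExitSurvey[ID])

Either approach gives the same numbers; the measure simply keeps the aggregation from ever reverting to a sum.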

I'm also going to start with the Table visualization for this data. Therefore, my two right-hand columns for Visualizations and Fields should look like the following figure.

My table should look like the following figure. If the columns are not in the correct order, you can simply click and drag the column names in the Values section of Visualizations to get the order you want.

Instead of seeing my data as a table however, I might want to display the data using the Funnel visualization and sort by the count of ID as shown in the following figure. Changing the sort can be accomplished by clicking the ellipsis in the upper right corner and selecting Sort By to change the sort field.


In a similar fashion, I can click in a blank area of the desktop and create a column graph visualization for the responses to the question: "This high school has prepared me for my career choices." The chart shows the degree to which the respondents agreed or disagreed with that statement. Note that in this case neither sorting by the count of the IDs nor by the text of the answers provides a satisfactory ordering of the data. I'll return to this at a later date.

Clicking in another blank area, I selected the count of ID along with the date the record was created, which is when the survey was taken. As you can see in the following figure, the data appears a little odd in that there is no date with more than 4 surveys. I did not believe this because, with over 9500 records spread out over fewer than 150 days, that low count per day just does not make mathematical sense. Remember that last time I said Power BI can create good charts and bad charts. The user must know their data to determine when the data is not being displayed correctly. In this case, the reason for the unusual result is that the create date is not just a date, but a date and time. Therefore, most surveys, even when taken by over 9500 students, do not occur within the same minute, although statistically some will.

So how can I fix this? I need to return to the data table and make a few changes. I really don't want to lose the original data, so I create a duplicate of the Created column that I can modify.

In my new column, I can change the column name to Day Created and then, by right-clicking on the header, select Change Type and then Date to change the field to a simple date with no time associated with it.
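
If you prefer a formula over the Change Type menu, a calculated column can strip the time portion as well. A rough DAX sketch, assuming the survey list loaded as a table named SeniorExitSurvey (the actual table name may differ):

Day Created = DATE(YEAR(SeniorExitSurvey[Created]), MONTH(SeniorExitSurvey[Created]), DAY(SeniorExitSurvey[Created]))

Either way, the goal is a column that holds only the date, so that surveys taken at different times on the same day group together.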

When I go back and change my chart to display the count of IDs by Day Created rather than the Created column which included time, I get a more realistic chart as shown below.

If I were to zoom out, you can see that I have all four of these visualizations on the desktop at one time. In fact, other than the amount of screen space, there is no limit to the number of visualizations you can put on a single screen.

Now let's try something interesting. Suppose I select one of the days in the last chart that I created, specifically Friday, January 23, 2015. I can see that the popup information box tells me that there were 697 surveys taken on that day.

But more interestingly, you will notice that the rest of the visualizations on the page automatically change to emphasize the counts representing that one day with darker shading, and in the table, the numbers in the first column represent the count for each answer for only that one day.

That's quite a bit for this week. Next time I will take a further look into working with Power BI and SharePoint survey data.

C’ya then.


The Best Year for Television Was 2010, Says Power BI and IMDb

So last week I started by showing you how to get Power BI Desktop downloaded and installed on your computer. Hope you were able to do that because I’m going to focus on it for the next couple of months.

In addition, I took you through the steps of how to reference data from the movie and television web site www.imdb.com. I specifically suggested that you download the top 250 television shows by going to a specific page URL that I included in the blog. (Go back to last week's blog to see it if you are just joining us now. Really! I'll wait right here.) I specifically left you in the Navigator dialog, which showed a preview of the first 20+ records in the table. Now I must say that there is really nothing special about this page or the data. Any data displayed within a table structure on a web page, on just about any site, can be referenced. While I am not going to show it this week, this includes referencing data from a SharePoint list displayed as a web part on a page within SharePoint. (You were wondering if I would tie this all back to SharePoint. Admit it.)

After clicking the load button on the Navigator, Power BI begins to load the data into its data model. While this step acts much like the data load step in PowerPivot when getting data from an external source, there is at least one major difference. Power BI does not currently have a preview capability like PowerPivot to select columns and filter rows. While I will show how to do some editing of the loaded data in a moment, the thing that bothers me about this is that it inflates the size of the data model during load. When I talk about pulling data from a SharePoint list in a future week, remember that I can customize the list view to show only the data that I need to load into Power BI.

So here is the view of the data table loaded from the top 250 television shows.

Notice that the table has some columns that I do not need and it has a column which concatenates three different pieces of information: Rank, Show Title, and Year. I need to edit the table and I can do that by clicking on the Edit Queries button in the Home ribbon.

Now the first thing I want to do is to get rid of columns that I don't want. In this case, the first column appears empty. The Your Rating column and the column with a heading value of '2' are not needed either. My first thought might be to select each column and then select Remove Columns from the Home ribbon as shown in the prior image. But I can also select the columns I want to keep and then select the Remove Other Columns option shown in the following figure. Note that I can press and hold the CTRL key while I click on each column header I want to select, or I can click on the first column header I want and then, while pressing the SHIFT key, click the last column I want; all columns between those two will also be selected. Which method I use to select the columns I want or don't want depends on the situation, as one method may be easier than the other.

I now have two columns, the Rank & Title column and the IMDb Rating column. Next, select the Rank & Title column and click Split Column in the Transform group of the Home ribbon. I want to split the column into multiple columns. In some cases, I may have a data structure which lends itself to being split at a specific number of characters. But that is not the case here. I want to split the data at the first space character in the field, which may be the third, fourth, or fifth character from the left. Therefore, I select the By Delimiter option.

In the dialog that appears, I can choose from one of 6 common delimiters. If the data includes these delimiters within its values, I may need to use a different, custom delimiter. The interesting feature of the custom delimiter is that it is not limited to a single character. However, in this case, I can use the Space character delimiter. Using a custom delimiter of period-space might also be a good choice in this case so that Rank can be displayed as an integer without a decimal point.

After selecting the delimiter, I can choose whether to apply this rule at the left-most occurrence of the delimiter, the right-most occurrence or at every occurrence. In this case, I only want to apply it to the first occurrence from the left.

You should also note that when Power BI Query Editor splits the column it creates one column from all the characters to the left of the delimiter and a second column with all the characters to the right of the delimiter. The delimiter itself is thrown away.
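
If it helps to see that left-most split expressed as a formula, here is a rough DAX equivalent written as calculated columns. This is purely illustrative; the Query Editor does the real work here, and the table name Top250TV is an assumption:

Rank Text = LEFT(Top250TV[Rank & Title], FIND(" ", Top250TV[Rank & Title]) - 1)
Show Text = MID(Top250TV[Rank & Title], FIND(" ", Top250TV[Rank & Title]) + 1, LEN(Top250TV[Rank & Title]))

Everything before the first space lands in one value, everything after it in the other, and the space itself disappears.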

After applying the command to split the column, I may want to rename the new column. I can do this by right clicking the column header and selecting Rename from the drop-down menu.

In a similar fashion, I can split the year of the TV show into its own column. (I’ll leave you to figure out the steps you need to do this based on my description above.) After renaming the column with the year and renaming the Rank & Title column, my table now looks like the following.

Hint: Use the Split Column function to remove the right parenthesis and then delete the resulting extra column.

I am now ready to start using my data to perform some analysis. To do this, I click on the Close and Apply button on the far left side of the Home ribbon.

This returns me to the working desktop as shown in the following figure. Note the three icons along the left side. The top icon opens the visualization desktop where I can display different charts and tables of my data. The second icon shows me my data tables. If I have more than one table in my data model, I can switch between the tables using the list of tables on the right side of the screen. I will not need to do that here since I only have a single table. Finally the last icon lets me see and work with the relationships between multiple tables. Again, for this week, I do not have to worry about relationships between tables because I only have a single table. I will cover working with multiple tables at a future date.

Switching to the visualization page, I can see the fields in my table along the right side of the screen. To begin a visualization, I can simply click on a table field and drag it into any blank area on the desktop. For example, suppose I drag over the fields ID and TV Show and then change the visualization to a vertical column chart as shown below.

While Power BI did create a chart, the chart does not make much sense. I can see the names of the TV Shows across the bottom of the chart, but plotting the value of the show ID field as the vertical value of each column does not make any sense. Maybe if I had the total number of people who watched the show, it would be a better chart.

However, my point in making this chart is to emphasize the need to know and understand your data before you just go off and create visualizations. While Power BI can make analyzing your data easier, it can also make creating meaningless charts easier as well.

So what can I do? Suppose I plot year across the horizontal axis instead of the TV Show name. Now Power BI plots the vertical column as the sum of the show ID values. Again, not very useful. However, if I open the dropdown for the ID field in the second column on the right side of the screen under Value, I can change the default action from Sum to Count.

Now I have something useful, the number of top TV shows by year as shown below.

You can see that 2010 had the most top TV shows. I wonder what they are? Next, I drag the TV Show field by itself to an empty area of the desktop and choose the Table visualization. (Just hover over the visualization icons and you will see their names.) But I don’t want to see all the TV shows, just those in the best year for TV, 2010. So I add the field Year to the Visual Level Filters in the second column on the right. Clicking on the down arrow to open the properties, I unselect (All) and select 2010.

Now my list only displays TV shows in 2010.

Finally, I want to see the rating of each show so I can click on the checkbox for Ratings or I can click on Ratings and drag it over to my table. To see the TV Shows from the highest rating to the lowest, I can click on the Ratings header. Repeated clicking on the header changes the sort from ascending to descending. Once I sort the shows by descending order, my list of top TV shows from 2010 looks like the following:

My point in doing this exercise was not really to get a list of the top TV shows for the year with the most top TV shows (although that is what I did), my point was to show you how easy it is to grab data from a web page that you did not create and perform analysis on the data found there. That is HUGE.

In future blogs, I will cover additional features of Power BI.

Hope you found this blog interesting and want to learn more about Power BI. Be sure to come back regularly.

C’ya next time.

Power BI Desktop – Your First Hour

For those of you who have been reading my blog for a while, you know my interest in PowerPivot, Power Chart, and Power View goes back several years. (If you don't, you have a lot of old blog entries to catch up on.) Anyway, I've been spending some time recently looking for what the next big thing is for data analysis. Several others have indicated that Power BI might fill that requirement, but I spent some time looking around before agreeing and picking up Power BI to see what it could do. I'm still getting acquainted with it, but I decided that I like it enough to take you along on my journey. So first I need to help you catch up a little.

The first thing you will need to do is to get a copy of Power BI Desktop installed on your local computer. Note that I said local computer, not your server. In fact, I’ve been running it quite successfully on a Surface Pro 3 with Windows 10 and SQL Server 2014. If you do not have SQL Server on your desktop but have it on a server that you can access, that will work just fine. For today though, you will not even have to worry about SQL at all.

To download and install a copy of Power BI Desktop, go to https://powerbi.microsoft.com/desktop.

You can also go directly to the download page at: http://www.microsoft.com/en-us/download/details.aspx?id=45331. You will see the system requirements on this page and you may notice that Windows 10 and SQL Server 2014 are not listed. That was all the more reason to try it and I can tell you that it appears to work just fine.

However you get to the download page, start the download and then install the application on your desktop.

After the desktop application is installed, it should start automatically. If it does not, go to your Start menu and look for an entry that says Power BI Desktop. On Windows 10, this appears in a separate section along the left side under Recently Added. The installer may also create an icon on your desktop and possibly even add the application to your taskbar across the bottom of the screen in Windows 8 through 10. One way or another, start it and you should see the following dialog, which appears on top of the desktop application that fills the background:

Along the left side are options to get data. We will see and explore some of the many ways you can get data from different sources over the coming weeks. You can also return to recent sources that you used. Next is a menu item that says, 'Open Other Reports'. Desktops with their various charts, tables, and other visualizations are referred to as Reports and are stored with a .pbix extension. If you had previous reports that you worked on and saved, they would also appear in this section, allowing you to click on them to open them directly.

There are also several videos and tutorials listed in this dialog to help you get started. I strongly recommend that you watch these videos and perhaps read some of the tutorials. In fact, there are many more videos and tutorials for Power BI Desktop. Another good page to help you get started is the Microsoft Power BI Support page at: https://support.powerbi.com/knowledgebase/articles/471664.

Microsoft provides several sample databases and in future weeks, I will probably use the ContosoDW databases that I previously used for many of the PowerPivot blogs. But for today, let’s just try something to amaze your co-workers.

Almost everyone knows the site www.imDB.com. Using the new Edge browser, I went to the site and discovered that there are some new pages available. Most people use it to look for information about movies and the actors in them. The site also has information on television shows. In fact, it has two new pages of interest. One of the new additions is a page that displays the top 250 movies and another page shows the top 250 TV shows. Just because everyone does movies, I went to the top 250 TV shows page just to find out which year had the most highly rated shows. The following image shows a little of that page.

But what I’m really interested in is the URL. I can select the page URL from the Address box at the top of my browser. It should look something like the following:

Next I go back to my Power BI Desktop and click the Get Data icon in the External Data group of the Home ribbon. Like many buttons, this one has a top portion and a lower portion. The bottom portion will open a dropdown of common data sources as shown below:

However, I can also click on the top portion of the button to display the following dialog box. This box shows categories of external data sources on the left and the names of specific data source types on the right. Since I want to get data from a web page, I need to specify a Web source. I could check each category just to see what was available and eventually find Web in the Other grouping as shown in the following figure, or I could have selected Web from the previous figure. Either way takes me to the same place. (But it is interesting to see all the possible data sources, isn't it?)

After selecting Web, I am prompted to enter the URL of the web page that holds my data. This is where I paste the URL that I captured previously when I displayed the page of top 250 TV shows.

When I click OK, Power BI analyzes the page and displays objects that might be something I’m interested in.

Of course, names like Table 0 and Table 1 have little meaning. However, by selecting any of the objects, Power BI Desktop displays a preview of the contents of the object on the right side. This is how I 'discovered' that Table 0 contains the data representing the top 250 TV shows, as shown below.

When I click load, Power BI Desktop loads the data into its local data model. In many cases, that model may contain columns that we do not need. It may also contain columns of concatenated data such as in this case in which the show ranking, show name, and show year appear concatenated together. We will need to fix this. Ultimately, I want to get to a table that looks something like this:

However, that will be next time. Have a great Labor Day weekend and get Power BI Desktop installed on your machines because next time we are going to start manipulating the data and creating visualizations.

C’ya.

A New View in Excel

I have used many of my blogs over the last couple of years to talk about PowerPivot in Excel 2010 and Excel 2013. By now you should have a pretty good idea how to build data models and analyze your data. You may even have used the Pivot Chart capability to display your data results visually. But did you know that there is another way to visualize your data within Excel 2013?

Just like PowerPivot, Power View now comes preinstalled with Microsoft Office Excel Professional Plus 2013. To use this tool, you must also have Internet Explorer and Silverlight installed on your computer. If you have those prerequisites, you can enable Power View by opening the Options dialog in Excel and selecting Add-Ins. At the bottom of the right panel is an option to manage different classes of add-ins. Using the drop-down menu, select COM Add-ins and click the Go button. From the list of available add-ins, locate Power View and make sure the checkbox to the left is selected. Then click OK.

Next open the Insert ribbon in Excel 2013 and click the Power View button that appears in the Reports group. If this is your first time using Power View, you will need to enable it by clicking the Enable button that appears. This opens a separate Power View worksheet in the current workbook which will look something like the following:

If you do not have Silverlight installed, you will be prompted across the top of the Power View worksheet to do this. Click “Install” to install Silverlight. When the process completes, click the “Reload” button in the message bar to return to Power View.

In addition to the Power View worksheet, you should see a Power View tab with several options as shown below:

Because I started from a blank spreadsheet, I have no data I can use in my Power View report yet. In fact, the Power View Fields panel tells me that I need to create and select a range of cells with data and then click Power View from the Insert tab to proceed. For today’s quick example, I will create a simple table as shown below:

Once I’ve added this table to my Power View report worksheet, the Fields panel now shows me my active tables and the fields within those tables. Notice that numeric fields are automatically treated as aggregated sums. I can select the fields that I want to appear in my table by either clicking on the checkbox before the field name in the top half of the Power View Fields panel or I can drag the fields that I want down to the FIELDS box in the lower half of the panel. The advantage of dragging fields to the FIELDS box is that I can arrange the order of the fields here by dropping new fields in the position I want and I can even drag and drop fields later to rearrange the order of the fields in the table. The following figure shows the result of including all of the fields from the original table in my Power View table.

Of course, if I don't want to display all the fields, I can remove them by simply unchecking the check box in the upper portion of the Fields panel, or I can use the dropdown menu to the right of any field in the FIELDS box to remove the field or change the aggregate function used for that field. For example, suppose I was only interested in the total student populations of each of the schools. I can simply remove the Student Grade column to achieve that result.

Next, suppose I wanted to have a second table or a chart on the same page. To do that, I begin by clicking in any blank area (not defined by the rectangle created by the first table). I then select the fields I want to include in my table or chart. In the example below, I use only the school name and student population just as was done in the first table.

I can then go to the Design ribbon associated with the Power View worksheet whose tab is displayed to the immediate right of the Power View tab. The first group contains options to switch the visualization of the data. The fourth icon, Other Chart, displays a dropdown that includes options to display the data as a line, scatter, and pie chart as shown below. There are some other interesting options in this ribbon like Map and Tiles that I will cover in future installments of this blog, but for now, let’s display a Pie chart of this data.

When I select PIE, Power View attempts to determine which fields to use for each part of the pie chart. In this case with only two fields, the answer is easy since only one of the fields is numeric. The numeric field is chosen as the size field that determines the size of the pie slice while the text field becomes the identifier of the slice which Power View calls Color. Note that there is no way currently to change the individual colors used in the pie chart. However, you can use the Themes dropdown in the Themes group of the Power View ribbon to change the color set used. Keep in mind that this can also change the font used for text on the page.

So what happened to Student Grades? If I click anywhere within the pie chart and then select the checkbox next to Student Grade in the Fields panel as shown below, Power View creates dividers in each of the school slices that represent 'sub-slices', one for each grade in the school.

Unfortunately, these sub-slices are not labelled, so it is not possible to determine visually which slice belongs to which grade. Suppose we went back to the first table in this worksheet and added the Student Grade column back in as shown below.

Now the table includes a row for each grade with the number of students in that grade. At first you might say that did not help much.

However, if we click on any of the sub-slices in the pie chart, the table automatically filters to the information for that sub-slice/grade as shown below.

Ok, I know we have a lot more to learn about Power View, and over the coming weeks I will attempt to introduce you to its many other features. I also know that the charts created today were not that dissimilar to charts and tables you could create with PowerPivot tables and charts. In fact, those tables and charts have a greater degree of formatting flexibility. However, we have to walk before we can run. In future segments, I will show how to create Power View charts that you could not create before.

Until then, C’ya!

Can You FILTER() That Down For Me

The last several weeks I have been looking at how PowerPivot in Excel works with Row Context and Filtered Context. I showed that most column expressions use a row context while measures use a filtered context, although I could add and remove filters using certain functions that allow me to define a filter as a Boolean expression in one of the parameters. Last time we even looked at how to remove filters by using the ALL() function. This time I will explore the FILTER() function, which allows me to apply a permanent filter condition to a measure no matter what dimensions or slicers the user chooses for the pivot table. In fact, in the case I am going to show you today, I need to do this because I need one measure to use all the filters defined by the dimensions in the pivot table, and I need another measure to obey those filters plus one more.

Again I will use my Contoso data model that I’ve been using for all the examples in this set. I want to look at the number of orders that have returns and compare that to the total number of orders. I initially will want to show this information by sales channel and year/month. However, once I have my pivot table defined, I could of course change the dimensions I want to explore.

Let’s begin with a basic Sales pivot table as shown below.

I built this table using my basic data model with no additional calculated columns or measures except the calculated column in the date dimension that I use to order the name of the months correctly. I can use any of the columns in the FactSales table as my value field as long as I change the aggregate function from SUM to COUNT. By default, Pivot tables assume that numeric fields are summed and non-numeric fields are counted. But as long as I change the aggregate function for numeric fields to COUNT, I will get my desired results. I also modified the formatting to get rid of any decimal places and to add a thousands separator. Other than that, I did nothing special to build this table.

However, now I am going to return to the FactSales table and add a simple measure to count the total number of sales. The expression I will use is shown in the following figure.

I use the COUNT() function which has a single parameter, the name of the column I want to count. Again I could choose any column, but I chose the column [ReturnQuantity]. I will come back to format this measure in a moment, but you can see that the count is a little over two and a quarter million sales records. In fact, I know that this is correct by simply looking at the number of rows in my FactSales table.
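
In case the figure is hard to read, the measure is nothing more than a one-line COUNT. A sketch of it (I am calling the measure SalesCount here; the name in the figure may differ):

SalesCount:=COUNT(FactSales[ReturnQuantity])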

Next, I want to count the number of sales records that have returns. This I can do by comparing either the [ReturnQuantity] column or the [ReturnAmount] column to 0. Only sales records which have values greater than 0 for these two columns represent orders which had returns. How can I do this?

One way I could do this is to use the SUMX() function with a second measure named [ReturnCount2]. This function has two parameters. The first parameter must be a table and the second parameter is an expression that is evaluated for each row and summed. So I might think that I could do something like the following expression:

ReturnCount2:=SUMX(FactSales,IF(FactSales[ReturnAmount]>0, 1, 0))

The theory is that I want to compare the column [ReturnAmount] to 0 and, if it is greater than zero, add one to my ReturnCount2 value. I cannot simply sum the [ReturnAmount] because this column represents the dollar value of the return. Nor can I use [ReturnQuantity] because the buyer may have returned more than one of the items from the order, and summing the quantity would overcount the total number of orders with returns.

I could also use the COUNTX() function. However, if I simply replace SUMX() with COUNTX(), I will get the total number of orders in the FactSales table because COUNTX() will count all non-blank rows. But I can trick the IF() into returning a blank for orders without returns by using the following expression:

ReturnCount2:=COUNTX(FactSales,IF(FactSales[ReturnAmount]>0, 1, BLANK()))

But both of these solutions used the entire FactSales table. There is one other way I want to show you today. I can use the FILTER() function to apply a filter to the FactSales table and return a subtable that only has rows with returns, using the following expression:

FILTER(FactSales, FactSales[ReturnAmount]>0.0)

I can now replace the first parameter in COUNTX() with this FILTER() result, which is a filtered table. I can then use any column in FactSales as the column I want to count. Well, almost any column. Actually, I cannot reuse the [ReturnAmount] column, which is used in the FILTER() expression, because this confuses DAX, but as I said before, I can count any other column in the table. Therefore, my [ReturnCount2] measure expression is shown below.
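
For anyone who would rather have the text of that expression than the figure, it looks essentially like this (the column being counted is my own choice; any FactSales column other than [ReturnAmount] would do):

ReturnCount2:=COUNTX(FILTER(FactSales, FactSales[ReturnAmount] > 0), FactSales[SalesQuantity])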

In this image you can see that I already formatted my measures as numbers without decimals but with thousands separators. Why do I format the numbers here? Simply because it saves time from having to format the numbers in each pivot table in which I use the measures. If I display these two measures in my pivot table side by side, I can see the total number of orders by channel in each month along with the number of orders that had returns.

Suppose I wanted to show this information to management and rather than look at the raw counts which could take a bit of time to interpret, I decide to calculate the percent of orders that have returns. I can create a third measure as shown in the following figure that uses the results of the first two measures. I can then format this measure as a percentage prior to using it in my pivot table.
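
That third measure is simply a ratio of the first two. A sketch, using the name I assumed earlier for the count-of-all-sales measure:

Percent_Returns:=[ReturnCount2] / [SalesCount]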

Returning to my pivot table, I remove the counts which I no longer need to display and replace them with the [Percent_Returns] so that management can quickly see that Catalog sales result in the most returns and Store sales in the least returns. Returns do not vary greatly by month, something that I will leave up to you to explore with a Pivot Chart.

Well, I hope you learned some new ways to apply different filters in your measures from this discussion. C'ya next time.

I Want It ALL()

This week I am going to reverse direction on applying filters to my pivot table and show you how and why you might want to remove all filters instead of adding filters to an expression.

Again working with the Contoso dataset, I am going to start by looking at sales by product category. In addition, I want to be able to slice my data by channel or combinations of different channels. To do this, I want to use the visually friendly slicer tool as shown in the following figure. Note that in this case, I already have selected only the Online channel which results in a total of $2.6776 billion in sales. Keep in mind that the total of all sales across all channels is a little more than $12.4 billion.

Now for each product category, I want to see the total sales in that channel compared to the total sales across all product categories and all channels. If I refer back to my earlier blog on the different ways pivot tables can represent data by using built-in features, I might try looking there first to see if there is a fast way to accomplish my task. By right clicking on any of the rows in the Sales Total column and selecting the option: Show Values As, I can pick from a dropdown menu of different built-in calculations. Many of these options calculate percentages of row, column or grand totals of either the entire pivot table or a group level. These options also provide difference and running total calculations.

For example, if I were to select % of Grand Total, I would get some interesting percentages. However, these values would be based on the total sales of the slicer filter, in other words, the total sales for online sales as shown in the following figure.

So let's play a little with a different pivot table that shows total sales by each of the channels. In the figure below, I've included the channel as my row filter and have two columns, both of which show the total sales amount. Note that each row of the total sales amount is filtered by the channel. This is an example of filtered context when calculating a measure.

Now I've labeled the first of the two columns Total Sales Amount and will therefore let the pivot table display the sum of the measure filtered by the channel.

However, I've labeled the second column % Sales by Channel. I can right click on any of the values in this column to select one of the other built-in calculations. In this case, calculating a percent of the grand total will show me the percent of sales that come from each of the four channels as shown below. The value in the Grand Total row displays 100% because all sales are represented by one of the four channels. Note here that it is clear that online sales account for only 21.57% of the total sales. I can use this information to validate what I'm about to do in the next step.

I am going to create a new measure named: Percent_of_Total_Sales. To generate a value for this measure, I want to sum the column FactSales[SalesAmount] for the filtered context of each place this measure appears in my pivot table. However, to get a percentage of total sales, I need to calculate the sum of FactSales[SalesAmount] for all sales, not just sales for a channel or product category, or any other filter criteria. In effect, I want to calculate the total sales as if there was no filter context in the pivot table at all. I can do this by creating a ‘new’ table for the SUMX() function (remember SUMX() has two parameters, the first of which must reference a table of values). There is another function I must use to eliminate the filter context for this ‘new’ table. This function is aptly named: ALL(). When I use ALL(FactSales) (and yes, the parameter for the ALL function must be the name of the table and it returns a table with all filters removed), I can get a ‘copy’ of the FactSales table without applying the filter context of the pivot table. In other words, all of the records in the original FactSales table will be included in ALL(FactSales). If I use this ‘new’ table as the data source for my SUMX() function and then simply sum the Sales Amount column using the SUM() function as shown below, I can return the total sales of the unfiltered FactSales table which then can be used as my denominator in my calculation. The numerator is a SUM() function of the Sales Amount also, but is calculated on the filtered context which in my case is filtered for online sales and product category.
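
Putting that description together, the measure comes out looking roughly like the following. This is a sketch based on the explanation above; the figure in the workbook may arrange the pieces slightly differently:

Percent_of_Total_Sales:=SUM(FactSales[SalesAmount]) / SUMX(ALL(FactSales), FactSales[SalesAmount])

The numerator respects whatever filters the pivot table applies, while the denominator, thanks to ALL(), always sums every row of FactSales.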

Initially the measure returns a value of 1 because in the data model there is no filter context so the sum of the ‘filtered’ sales amount total divided by the ‘unfiltered’ sales amount total will be equal to 1. Rather than go directly to the pivot table, let’s first format this value as a percent by right clicking on the measure definition cell and selecting Format.

This option displays a dialog that lets me select the data category which is: Number. I then select the format of the number as Percentage with 2 decimal places. I then click OK to accept the format for the measure.

My measure calculation in the data model now displays a value of 100.00%. However, if I place this measure in my pivot table that displays sales by product category and uses a slicer to include only online sales, I can see my sales percentages as shown below. These values are now correctly dividing the product category sales for online sales by the total sales of my company. I can feel confident that the calculation is correct because the percent in the Grand Total line, which represents sales from all product categories made through online sales, is 21.57%, which is the same percent I calculated earlier in the pivot table that only looked at sales by channel.

Using the channel slicer, I can select store sales instead of online sales. My previous pivot table told me that this should be 55.93% of the total sales. As you can see in the following figure, the Grand Total of my pivot table that displays sales by product category has percentages that also (accounting for rounding) add up to 55.93% since this pivot table uses the channel slicer.

In case any reader is wondering, the second pivot table that displays sales by channel, while on the same spreadsheet as the first table that displays sales by category, does not use the slicer. If you have a slicer and multiple pivot tables or pivot charts, you must associate the slicer with each table and/or chart to which you want it to apply. It is neither automatic nor applied per spreadsheet page.

I hope you are starting to get a better feel for how row context and filter context work together with DAX expressions and functions to calculate values. Being able to correctly apply and remove filter contexts is essential in calculating values in many pivot tables.

C’ya next time.

Populating Your Data Model with A Query

Over the last two weeks we talked about denormalizing your logical data model in PowerPivot to help make it easier for users to understand and work with (https://sharepointmike.wordpress.com/2015/03/21/your-logical-data-model-is-not-normal/) and (https://sharepointmike.wordpress.com/2015/03/28/dealing-with-blanks-in-your-data-model/). However, in each of these models, we did all the data manipulation inside the data model. That left a lot of technical fields and tables that we could not remove from the model, but which took extra memory and could limit the size of our data model. So how do we solve that problem? We could do much of the data manipulation outside of the data model or as part of the import step. Let's see how.

I’m going to start by importing product data from the Access database version of Contoso into the Power Pivot data model.

In the Table Import Wizard, I’m going to select the option to write a query that will specify the data to import. This option allows me to combine data from the dimProduct, dimProductCategory, and dimProductSubCategory tables into a single file. I can also embed the calculation to fix the blank Class fields we visited last time.

Clicking Next, I see the Specify a SQL Query dialog. This is an intimidating dialog. I could manually enter a SQL query here. However, I would have to know all the field names from all the tables that I want to bring together. Not impossible, but there has got to be an easier way. Wait. What is this Design button in the bottom right of the dialog?

When I click the Design button, I get an equally intimidating dialog that looks like it has an area at the top to again manually enter a SQL statement and a grid area at the bottom to display the results of the SQL statement if I were to test run it by clicking the exclamation point (!). Obviously, this screen is not very friendly toward building Access database queries. Nowhere near as friendly as the query builder inside Access itself.

If I go back into Access and build my query there using the query builder, I can visually design my query. First, I add the tables I want to use from the Show Table dialog that appears when I open the Query Designer from the Create ribbon. Then I can add the fields I want to include from any of the included tables by double clicking on their names in the tables at the top of the dialog.

I can even create my calculated column here by entering a new name in the row of the design grid followed by a colon (:). Then I can use the same expression as I did last time to test the value in the ClassName field.

NewClassName: IIF(([ClassName]=""),"[No Class]", [ClassName])

Note that the IIF() function in Access has a double ‘I’. Otherwise, it works the same. Also note that I’m using a different name for the new column here to keep it separate from the field in the table dimProduct.

After I have finished creating the query, I can test it by clicking the Run button in the Query Tools Design window. If the query executes correctly, I am ready to copy my query to my Power Pivot data load screen. If I open the View dropdown menu in the Query Tools Design window, I see an option for SQL View as well as some other views.

This option displays the SQL statement which I can copy and then paste back in the Excel Data Model Table Import Wizard as shown below.

If I were to try to run this query directly, it would fail. Admittedly, it took me a while to figure out why. I finally found that it did not like one field definition: dimProduct.Size. To test what was going on, I clicked on the Design button to take me to the screen which displays both the SQL query and a grid of results from testing the query. After some testing, I found that if I redefined this field as dimProduct.[Size], the parsing engine was happy and gave me the results I expected, as shown below. The square brackets clarify to the query parser that the text within the brackets represents a field name, not a command or reserved word.
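
For reference, the pasted statement ends up looking roughly like the query below. Treat this strictly as a sketch: the field list depends on which columns you dragged into the Access query designer, so the column names here are assumptions:

SELECT dimProduct.ProductKey, dimProduct.ProductName, dimProduct.[Size],
       dimProductSubCategory.ProductSubcategoryName,
       dimProductCategory.ProductCategoryName,
       IIF(([ClassName]=""), "[No Class]", [ClassName]) AS NewClassName
FROM (dimProductCategory
      INNER JOIN dimProductSubCategory
        ON dimProductCategory.ProductCategoryKey = dimProductSubCategory.ProductCategoryKey)
      INNER JOIN dimProduct
        ON dimProductSubCategory.ProductSubcategoryKey = dimProduct.ProductSubcategoryKey;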

I then clicked OK for this screen and Finish on the next screen. I was rewarded with the data engine reading and loading my Product data. Note that it is a single table now. By including the name of the product category and the product subcategory in my query, I no longer need those two additional tables thus reducing the overall size of my data model a bit.

Next I still need to load the Sales data from Contoso along with the date and channel tables. The following screen shows that I can open the Access database a second time to extract the other tables, leaving out the product and related tables.

After the FactSales table is loaded, I still need to create a relationship with the new Product table I pulled in with the query. Even though FactSales and dimProduct were linked in the original Access database, by loading the tables separately, the relationship is not automatically discovered. Therefore, I have to use the Create Relationship dialog to link these two tables.

If I were to display the diagram view of my Power Pivot data model, you can see that the model has been simplified down to only four tables rather than six as we used before.

If I now build the same Pivot table that I did last week to show sales by class, you can see in the following figure that the results are the same.

I can even create a hierarchy in my product table to go from category to subcategory to product. I covered building hierarchies in an earlier post. After building the hierarchy, I strongly recommend hiding the supporting columns so as not to confuse the end-users. (See: https://sharepointmike.wordpress.com/2012/07/21/power-pivot-hierarchies-part-2-of-powerpivot-2012-series/)

Denormalizing your data as you import it may sound a bit more complex, but it can save you memory, and Power Pivot's main limitation is that it must hold the entire data model in memory. So the smaller you can make that model, the more data you can load into it.

Some people might ask how far to denormalize the model. Of course, in theory, you could denormalize to a perfectly flat single table. However, this is not practical, as it can seriously limit the flexibility of the model and can even cause other data errors. I tend to think in terms of model objects. For example, Products are an object in the model. Therefore, collapsing the Product Subcategory and Product Category tables into Product makes sense to me. If I had tables for Stores, Cities, Countries, and Geography, I might be tempted to collapse them together as well, since they represent a Location object for sales. On the other hand, I would not denormalize all the product information into the FactSales table.

So if you are thinking that there is no hard and fast answer to when you should denormalize a model, that is somewhat true. It is a bit of an art. However, if you have a table that is used just as a lookup for a name or perhaps one or two other associated fields, and it is not a role-playing table (See: https://sharepointmike.wordpress.com/2012/07/21/power-pivot-hierarchies-part-2-of-powerpivot-2012-series/), denormalize that table.

That’s it for this series. C’ya next time.

Dealing with Blanks in Your Data Model

Last time, I discussed some reasons why you might want to denormalize your data model when using PowerPivot (https://sharepointmike.wordpress.com/2015/03/21/your-logical-data-model-is-not-normal/). As you may remember, one reason is to make the model easier for the end user to understand. A second reason is that if you denormalize the data outside of PowerPivot, you can substantially decrease the model size by eliminating many if not most of the lookup tables and just adding the lookup information you need in the entity table such as the Product table. I will talk more about denormalizing the data outside of PowerPivot, or at least during the process of importing the data next time. This time, I want to look at another issue that sometimes occurs in your data that could make your data harder to interpret by your end-users.

I'm talking about blank or null fields in the data. Oftentimes a table will have one or more columns which are optional. Take for example a product table like the one we are using from Contoso. Not every product may be sold by color. Some products may not belong to a style or a class. Some products may not have a measured size or a weight. In some databases, the corresponding lookup tables have a special entry for no color, no style, or no class so that every product can point to one and only one value in the lookup. However, you may have a lookup table that only contains specific values. If a product does not exactly match one of those values, the reference from the product table to the lookup table may be left blank. When the user sees a blank value, they may wonder whether someone just forgot to select a value or whether no value from the lookup table applied. In other words, they do not really know the reason why the field is blank.

Suppose we start with the data model using Contoso data in the following figure. Note to anyone trying to reproduce this example, I went into the Product table ahead of time and removed the ClassID and ClassName from all products in which the color was ‘Blue’ just to create some records with blank values.

After bringing this data into an Excel PowerPivot data model, I create a simple pivot table to show the sales amount and sales counts by class. Notice that the three product classes that are defined include:

  • Economy
  • Regular
  • Deluxe

But you can see from the report that 87,597 of the over 2 million sales records had no class defined. From a user’s viewpoint, they do not know if someone forgot to enter the class for some of the products or if the definition of class does not apply to some products.

As you build your data model in Excel, you can fix this problem by replacing the blank values with something like the string "[No Class]". The difficulty, however, is that after you load your data into the PowerPivot data model, you cannot edit individual cells. You can, on the other hand, create a new calculated column.

Since I want the new calculated column to have the name ClassName and because I cannot have two columns in the same table with the same name, I begin by renaming the existing ClassName column to ClassNameOriginal. The actual new name I use does not matter as long as it is different. Also note that changing the column names in the Excel data model has absolutely no effect on the name of the column in the original data source.

After renaming the original ClassName column, I add a new column to the end of the table called ClassName. For the calculation to replace the blanks from the original column, I need to know if those blanks are Null values or if they contain an empty string. If the original column used empty strings when a class value was not defined, I can use the following DAX expression to create my new class name value.

= IF(([ClassNameOriginal]=""),"[No Class]", [ClassNameOriginal])

This expression uses a simple IF() function to see if the value in the column ClassNameOriginal contains the empty string. If it does, the function outputs the string: [No Class]. Otherwise, the function outputs the current row value of the column ClassNameOriginal. This is shown in the following figure.

On the other hand, if the original ClassName column uses a NULL value when there is no corresponding class, the expression needed here must be defined with the ISBLANK() function to test for NULL values as in:

= IF(ISBLANK([ClassNameOriginal]),"[No Class]", [ClassNameOriginal])

Whichever method is appropriate for your table, you now have a column that you can use for the PivotTable. Before proceeding however, be sure to go back and hide the original class name column: ClassNameOriginal so as not to confuse users with two similar columns that they could use as dimensions in their tables.

Since I still have my original pivot table open, I can just switch over to the Pivot table and it should automatically update to show that the blank class now has the name [No Class]. (Which goes to show you that having no class is better than just being a blank.)

That’s it for this week. By the time you read this I will be at Code Camp Orlando giving my presentation: Calculated Columns, Measures, and KPIs, Oh My!

C’ya next time when I will show you how to use the query designer while loading data to the PowerPivot model to fix some of the issues from this week and last week.

Your Logical Data Model is not Normal

If you have been reading this blog over the last several years, you know that I have been a strong supporter of using PowerPivot and the Tabular model to perform data analysis, even for power users, not just DBAs. What you may not have realized is that I've been secretly teaching you a little about data modeling. What is data modeling? It is really nothing more than the collection of all the data in tables and the relationships between those tables in a database. Did you know that there are at least two major 'types' of data models when it comes to how you structure your data into tables? Each one serves a different purpose. Therefore, it can reasonably be argued that neither one is more correct than the other. But you need to understand when to use each type. That is what I want to discuss today.

Most DBAs and developers who work with databases like SQL Server, Oracle, and many other common databases are comfortable with the relational model for creating tables and defining the relationships that connect them. In fact, they immediately start to normalize a denormalized database in their heads within seconds of seeing the data schema. The relational model relies on the application of the rules of data normalization introduced by Edgar F. Codd in 1970. In essence, the goal of the rules of data normalization is to minimize data redundancy, which also has the effect of decreasing the overall size of the database while at the same time making it easier to maintain information that would otherwise be repeated through many records.

There are three primary rules that are used to determine whether a database, a collection of tables, has been normalized. These rules are:

First Normal Form: No row of data may contain repeating data or groups of data. Such repeating data must be split into a separate but connected table. For example, a sales order may contain the purchase of one or more items. Because the number of items included in a sales order is not predefined, the items must be split into a separate table with one row for each item on the sales order. These two tables are then typically connected by an order id.

Second Normal Form: This rule only applies to tables which have a compound primary index, an index built from two or more fields. In this rule, all other fields in the table must depend on the entire compound index value, not only a portion of it. A possible example of this might be a table that includes students at a school in which the primary index combines the school name (or id) with the student name (or id). Imagine that the table also includes the address and phone number of the school. This information is not dependent on the combination of the school and the student; it depends only on the school. Therefore, this violation of the second normal form requires that the data related only to the school be split into a second table that includes only school information.

Third Normal Form: This rule requires that every field not part of the primary index depend directly on the primary index and on nothing else, such as another non-key field. Going back to my favorite Contoso database, one could argue that in the FactSales table, the Sales Amount field is redundant since this table also includes the sales quantity, sales price, and any sales discounts or returns. Why is it redundant? Because it can be calculated from the values of other columns in the record. Therefore, to fully achieve third normal form, this field should be removed.
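To put that point in PowerPivot terms, a measure can recompute the amount on the fly rather than storing it in the table. This is only a sketch; the column names (SalesQuantity, UnitPrice, DiscountAmount, ReturnAmount) are my assumption of the Contoso FactSales naming, so adjust them to match your copy of the database.

Calculated Sales Amount := SUMX(
    FactSales,
    FactSales[SalesQuantity] * FactSales[UnitPrice]   // extended price per row
        - FactSales[DiscountAmount]                   // less any discounts
        - FactSales[ReturnAmount]                     // less any returns
)

A measure like this removes the need to store a precomputed Sales Amount column at all, which is exactly the kind of redundancy the third normal form tries to eliminate.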

While there are other special case rules that can be applied to normalization of a database, most DBAs will be satisfied with a database that satisfies these three rules. They will then build the tables in their database corresponding to these rules and thus create the physical data model. It is called the physical data model because it defines the physical schema of the tables and the relationships between them.

However, business users of the data don’t look at the database that way. In fact, most business users would be baffled by the large number of tables required and the relationships between them. They will not understand why they need to combine data from a half dozen to a dozen tables just to answer a single question. As an example, let’s look at how a business user might think of just the products within the Contoso database.

The figure below shows the normalized data model for products and their subcategories and categories. As you can see from the schema, the only important piece of information in the subcategories table is the name of the subcategory. Similarly, the only important piece of information in the categories table is the name of the category.

Most business users would not think of category and subcategory names as part of separate tables, but as attributes of the product itself. In fact they would think of product information more like the following figure.

Imagine a database model with dozens of normalized tables and then try to envision how the typical business user sees the data with perhaps only a half dozen tables after denormalization.

In PowerPivot, we can address this issue in one of two ways. The first way would be to add the category and subcategory names to the product table in SQL Server before loading the data into PowerPivot. This would essentially make the physical data model the same as the logical data model used by the end-users. However, this data model would no longer be normalized.

Physically denormalizing the data tables is not the only solution. I could, as shown before in https://sharepointmike.wordpress.com/2012/07/21/power-pivot-hierarchies-part-2-of-powerpivot-2012-series/, build two calculated fields in the product table that use the RELATED() function to get the category and subcategory names and display them in the product table. I could then hide the two tables, dimCategory and dimSubcategory, from the user so that they would see a structure similar to their expected denormalized logical data model even though the data is still physically stored in a normalized structure.
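As a sketch of that approach, the two calculated columns added to the product table might look like the following. I am assuming the subcategory and category tables are named DimProductSubcategory and DimProductCategory with name columns ProductSubcategoryName and ProductCategoryName; substitute whatever names your model actually uses.

= RELATED(DimProductSubcategory[ProductSubcategoryName])

= RELATED(DimProductCategory[ProductCategoryName])

Because RELATED() follows the chain of many-to-one relationships, the second expression can reach the category table through the subcategory table, so the product table ends up carrying both names even though they remain stored in their own normalized tables.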

The advantage of making the changes to the data model outside of PowerPivot is that it effectively reduces the amount of data that PowerPivot must store in memory, thus potentially allowing a larger model to be created. The advantage of making the changes inside PowerPivot, by using the RELATED() function and then hiding the technical tables that are still needed in the model but do not need to be seen by the client, is that it preserves the sanity of those DBAs who cringe every time someone tries to denormalize their data. Both methods will allow me to create similar pivot tables in my Excel spreadsheet. Thus both methods can be considered correct.

The thing to remember is this. Data Normalization was developed to make data storage and maintenance easier and more efficient. However, data analysis and reporting often requires a fair amount of data denormalization. This is especially true when performing data analysis using reports, pivot tables and charts, and even third party data analysis tools. In fact, the logical view of the data schema aids in the performance of most data analysis. Just remember that your logical model is not a normalized model.

C’ya next time.

It’s Only the Role I’m Playing

This week I’m going to return to my favorite sample database, Contoso. If you remember, Contoso consists of sales data for a company that produces several different lines of consumer electronics, from computers to phones to TVs. The data spans several years of sales. In past examples, I related the FactSales table (the table containing all of the sales records) to several tables including Channel (DimChannel), Date (DimDate), Product (DimProduct) and Product Sub-Category (DimProductSubCategory). In fact, the data source pre-defined these relations so that when I imported the data into my PowerPivot model in Excel, these relationships appeared by default as shown in the following figure.

Visually, I could switch to the Diagram view of the data model to see these relationships represented by solid lines connecting these five tables as shown below.

However, suppose I have additional information in my FactSales table. The DateKey field that I have used in the past identifies the sale date for each of the sales records. In the real world, there may be additional dates associated with each sale. For example, it is not hard to imagine that each sale would also have a delivery date and an invoice due date. Now I might ask, does it make sense to report on sales based on the sale date, or does it make more sense to report on sales based on the invoice due date or even the date of payment? Well, you might say, ‘That depends on who is asking the question.’ Very true. My Marketing/Sales manager might want to see sales by the sales date. He or she does not care about when the invoice is paid off; they just need to know if they hit their sales quotas for each month. Therefore, they might need a report that uses the default relationship and looks something like the following:

On the other hand, the Chief Financial Officer is not as concerned about when the sale took place, but when the invoice for the sale is due because only then is the income truly realized for the company. Thus the above report does not meet their needs.

Now suppose that I had a few additional columns, as mentioned earlier, that told me some other dates such as when the items were delivered and/or when the invoice for the sale was due. Let’s further suppose that these columns exist in the FactSales table and might appear like the following figure.

With these columns in my FactSales table, I would want to create relationships between them and my date table (DimDate) so that I could generate reports using them. The following figure shows the Create Relationship screen in which I define a new relationship between the delivery date (DeliveryDate) column in FactSales and the date (DateKey) column in DimDate. When I click the Create button, the Power Pivot engine creates a second relationship between these two tables. (Remember, the first relationship was between the sales date (DateKey) column in FactSales and the (DateKey) column in DimDate.)

In a similar fashion, I create a third relation between these two tables to connect the invoice due date (InvDue) column in FactSales and the date (Datekey) column in DimDate. For the purposes of this demo, I am going to stop there. However, I could create additional relationships between any other date fields in the FactSales table and the date (DateKey) column in DimDate. Switching to the Diagram view, I could now see something like the following between these two tables.

Notice that there are three lines between FactSales and DimDate. One of these lines is solid and the other two lines are dashed. You can only have one active relationship between any two tables. That active relationship is represented by the solid line. The other two relationships are inactive at the moment. These three relationships are collectively called role playing relations, and DimDate is a role playing dimension because only one relationship at a time can be active from the DimDate dimension. Think of it this way: the date column (DateKey) in DimDate can play one of three different roles. It can play the part of the sales date, the delivery date, or the invoice due date. However, it can only play one role at a time. Unless all three dates were exactly the same, I have to choose which role I want DateKey in DimDate to play with the FactSales table by selecting one of the relationships.

The initial data import associated the DateKey column in DimDate with the DateKey column in FactSales, which represents the date the item was sold. As long as the requested report allocates sales to the sales date, I need to do nothing other than generate the pivot table report as shown below.

However, if I now have to generate a report for my CFO, I have to go into the model and change the role that DateKey in DimDate plays. I might try to simply right click on the dashed line representing the connection between the DateKey column in DimDate and the InvDue column in FactSales and select: Mark as Active.

This action would generate the following error message.

The problem is clearly stated. You can only have a single active relationship between the two referenced tables. Therefore, I must first right click on the currently active relationship and deactivate it before activating a different relationship between the two tables.

Once the original relationship is deactivated, I can go back and activate the new relationship between DateKey and InvDue.

Knowing that only one role (relationship) can be active at a time, I am a little surprised that Microsoft did not automatically deactivate the original relationship when a new one is set active. However, like many things, they did not ask me. Plus they may have had other reasons for not automatically deactivating the current relationship. In any case, after making this change and returning to my pivot table (without making any changes to the pivot table itself), the data automatically updates using the new relationship to show the sum of sales by month of the invoice due date rather than the sales date.
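As a side note, if you would rather not flip the active relationship back and forth in the model, more recent versions of DAX let a measure name the relationship it should use through the USERELATIONSHIP() function inside CALCULATE(). The following is only a sketch; it assumes a SalesAmount column in FactSales and the inactive relationship created above between FactSales[InvDue] and DimDate[DateKey].

Sales by Invoice Due Date := CALCULATE(
    SUM(FactSales[SalesAmount]),                       // base sales measure
    USERELATIONSHIP(FactSales[InvDue], DimDate[DateKey])  // use the inactive invoice due date relationship
)

With a measure like this, the sales date relationship can stay active for the Marketing report while the CFO’s report uses the invoice-based measure, and nobody has to touch the model each time the role changes.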

If you examine the sales totals for either the years or individual months, you can quickly see that this role change for the DimDate dimension makes a significant change in the sales numbers reported each month.

That’s all for this week. I hope that now you have a better idea of how you can use a single dimension to play different roles. (You could also add the DimDate table to the model three times, once for each date column in FactSales, so that each instance of the DimDate table has a single relationship to FactSales. Then, by changing which dimension you select to display in the pivot table, you can achieve essentially the same result. This may be easier for end-users who do not have access to the data model to make the changes listed above. However, your power users may prefer a simpler model with fewer tables and can change the role played by those dimensions as needed.)

C’ya next time.