When Quitters Win!

Everyone is familiar with the phrase “Winners never quit and quitters never win.”  (By the way, this quote is not originally from Coach Vince Lombardi as many believe.  Rather, it comes from Napoleon Hill’s 1937 self-help book “Think and Grow Rich.”)  Today people treat that phrase as an absolute truth.  But like most absolute truths, it is fatally flawed.  In fact, there is an equally good argument for the phrase “Quitters can also be winners.”  Okay, this is not going to be one of those stories where all of the players on the losing team also get trophies.  I hate that.  I would never recommend that.  However, what I do want to talk about is knowing when to quit.

First, quitting is a time-honored practice when you know there is no chance to win.  It could be argued that soldiers in battle who surrender rather than fight to the end (which usually means their end) at least live to fight another day.  That runs contrary to the Klingon belief that “Today is a good day to die.”  Another example is resigning a chess match when it becomes apparent that there is no way you can win against your opponent.  People also quit jobs for a variety of reasons, ranging from not liking a new manager or a new corporate policy to having to deal with family issues.  But I’m not going into those kinds of quitting either.

What I want to talk about is quitting for economic reasons.  Specifically, I want to talk about the economic concepts of sunk costs and opportunity costs.  Let’s first get the definitions straight.  A sunk cost is money already spent on a project that cannot be recovered.  That said, if a project that might be cancelled has any salvage value, that salvage value can be credited against the sunk cost, reducing the loss.  Opportunity cost is the return you give up by continuing to spend resources on the current project instead of applying them to something else that might yield greater returns.

But how should you determine whether a project should be failed?  One method looks at all future costs and potential revenues.  If, when brought back to present-value dollars using some reasonable rate of return, the project results in a loss (a negative net present value), the project should be discontinued immediately.

On the other hand, even if a project has a positive net present value, if an alternate project using those same resources, or a project designed to meet the same goal, has a greater net present value, you would still be better off cancelling the current project and pursuing the alternate one.
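These two present-value tests can be sketched in a few lines of Python.  All of the numbers here, the 10 percent rate and both projects’ cash flows, are made up purely for illustration:

```python
def npv(rate, cash_flows):
    """Discount future cash flows (periods 1, 2, ...) back to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

rate = 0.10  # assumed required rate of return (hypothetical)

# Net cash flow (revenue minus remaining cost) per year -- invented numbers.
current_project = [-50_000, 40_000, 45_000]
alternate_project = [-30_000, 50_000, 60_000]

current_npv = npv(rate, current_project)
alternate_npv = npv(rate, alternate_project)

if current_npv < 0:
    decision = "cancel: the current project has a negative NPV"
elif alternate_npv > current_npv:
    decision = "switch: the alternate project has the greater NPV"
else:
    decision = "continue the current project"
```

Notice that the sunk costs already poured into the current project appear nowhere in the calculation; only future cash flows matter.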

Note that neither of these methods looks back at sunk costs.  Does this mean that you should always ignore sunk costs?  No.  You ignore sunk costs only when deciding whether to continue a project or cancel it in favor of an alternative.  However, huge sunk costs may reveal a problem in the way you initially evaluate projects; perhaps those early estimates were overly optimistic, underestimating costs or overestimating benefits.

In any case, you want to catch and fail projects that do not live up to their initial expectations, or that compare poorly to new alternatives, as fast as possible.  In fact, there is a saying among the most innovative companies: “Fail often but fail fast.”  You will never fail if you do not try new things, new projects, new ideas.  You want to have lots of ideas or projects because some of them may be wildly successful.  However, don’t get so bogged down in sunk costs that you hesitate to fail a project quickly when it shows signs of failure.

Therefore, quitting may release resources that can be better applied to something else that could be a success.  This concept can be applied equally in the workplace and in your personal life.  Is there anything you should quit so you have more time to do something else that might be successful?

C’ya next time for more SSAS fun.


Formatting Your SSAS (Cube that is)

When you first build your cube in SSAS, you may not like the way it looks.  Typically, the measures are not formatted.  If you have large numbers, there are no commas separating the thousands.  Each number could have a different number of decimal places.  The numbers are left-justified, not right-justified or even decimal-aligned.  Finally, currency values do not display any currency symbols.  Perhaps this does not bother you, especially if you plan to create a report with Reporting Services for your end users.  You can certainly format your data in the report instead and honestly, this is what I would usually do.  However, if you want to capture an image of the pivot table directly from the SSAS browser screen to be placed in a Word document, for example, you may want to dress up your table a little first.
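As an aside, the look we are after is nothing more exotic than a numeric format string.  A quick Python sketch of the target formatting (the values are hypothetical):

```python
values = [1234567.8, 42.5, 9.1]

# Thousands separators, exactly two decimal places, and a currency symbol.
formatted = ["${:,.2f}".format(v) for v in values]

# Right-justify in a fixed-width column so the decimal points line up.
column = ["{:>15}".format(s) for s in formatted]

print(column[0])  # "  $1,234,567.80"
```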

Last time, we left our cube looking like the following figure.  Not too pretty, and certainly not easy to read or interpret.

In fact, it is hard to tell at a glance which numbers are large and which are small because the numbers are left-aligned.  Perhaps your first thought is to look for a properties option when you right-click on a measure, much as we did back in the Excel pivot tables.  After all, since this is just another Microsoft product, you would expect it to work the same way.  Right?  (You in the back, stop snickering!)  Instead, when you right-click on a measure, you will find an option named Commands and Options at the bottom of the menu which displays the following dialog.

First, note that the Select option is a dropdown that identifies the portion of the pivot table you last clicked on.  Changes to formatting in this dialog typically apply to the selected portion of the table.  However, you can also use this dropdown to change the portion of the table to which you want to apply formatting.  It is also interesting to note that this dialog can be left open while you click on different parts of the pivot table to format different elements.  Finally, depending on what part of the pivot table you click on or select from the dropdown, the options that appear and are enabled, including the tabs that appear in this dialog, will change accordingly.

Before we look at formatting our numbers, let’s look at some of the other formatting options, beginning with sorting.  Suppose we had the following pivot table that displays sales by year and channel, just to keep things simple.  If we right-click in any column, display the Commands and Options dialog (if it is not already open), and click the ascending sort button, the rows in the table shuffle to display the data in the selected column in ascending order.  Similarly, the descending sort button reorders the rows so the values in the selected column appear in descending order.

Likewise, if we click in the row headers, we can sort the rows by the header labels in ascending or descending order.  You can even click in the column headers and sort the order of the columns in ascending or descending order using the column header text.

Formatting of the column and row header text to use a different font, color, font size, style or even the background color is also easy.  The following figure shows a couple of format changes to show what is possible, not to show what good design is.  Notice that when you select any one of the column or row headers, the format applies to all of them except for Sub-Total and Grand Total columns or rows.

You can format Grand Total headers separately by clicking on the header Grand Total.  The interesting twist here (and it is more than a bit confusing) is that changes to the font style (bold, italic, underline) get applied to the header, but if you change the font itself, the font color or the font size, these properties only affect the data in the column or row, not the header text.

In a somewhat similarly twisted way, if you click on a measure in the table and change the font, font size, font color, or background color, you only affect the detail measure values, not sub-total or grand totals.  These can be selected and formatted separately.

(Perhaps to eliminate confusion as to what formatting to apply, formatting changes to rows such as the Grand Total row at the bottom of the table will have priority over formatting changes applied to Grand Total columns.)

So now, it is time to look at changing our numbers to currency.  If you select any of the measure cells in the table and attempt to use the seemingly obvious Number dropdown to select Currency thinking that dollar signs and nicely formatted numbers will appear, you would be wrong.  The Currency option does not appear to change the formatting of the numbers in the table at all.

So how do you display the measures as nicely formatted currency values?  For that, you will have to come back next week.

By the way, my daughter graduates from college this week with a Doctorate in Pharmacy.  Therefore, while next Saturday’s post will be short, you will not want to miss how to format the measures in your SSAS cube as currency.  C’ya later.

The Commute to Work Is Killing Me and You.

A recent article in the American Journal of Preventive Medicine points to a relationship between the amount of time spent driving to and from work and cardiovascular mortality. The study followed over 4,000 commuters in Dallas and Austin, Texas.

Their study shows a high correlation between long commute times and higher blood pressure. Really? Are they trying to imply that shouting at the other idiot drivers (it is always the other driver 🙂 ) might raise your blood pressure, or that sitting in stop-and-go (okay, mostly stop) traffic when you need to get somewhere is something less than relaxing? I’m hoping they did not spend a lot of money on that revelation and I certainly hope it wasn’t paid for with tax dollars.

They also say that there is a relation, though not as strong a one, between longer commutes and increased waist size and BMI (body mass index). Again, I must control my shock. After all, who would think that longer commutes, which steal time from those precious few hours between work days, might cause some people to prefer extravagances such as eating, sleeping and just relaxing a bit after a grueling day over boring exercise?

On the other hand, look around your neighborhood and count the number of people who hire lawn services to maintain their lawn just so they have time to go to the gym. Or how about those who hop into their car to go a couple blocks down the road to a convenience store to pick up something they forgot rather than walking or riding a bike. Okay, walking and riding a bike may not keep you safe from those idiot drivers you just yelled at on the way home, but that’s not the real point.

Here is my real point, actually several real points:

  • If you work in a job where you sit all day, get up occasionally and walk around the floor. If you are a manager, this might be a good way to keep in touch with what your staff really does during the day.
  • Take a break at lunch to walk around the block. Lunch is not just ‘the simple joy of lunch’ as in the McDonald’s commercial. It is also a chance to get some fresh air and let your mind regroup and process what you’ve been doing. You might even discover that a walk stimulates thinking of solutions to problems or breaking through that mental block on the report you are trying to write.
  • Walks also allow you to decompress from the work pressures you’ve been dealing with. Think about the ‘forced’ exercise programs that are part of the Japanese and Chinese corporate cultures.

Maybe they know something that we forgot. Is it possible that longer commutes could affect what people eat for supper? With less personal time, perhaps some people rely too much on fast food stops or frozen dinners that can be quickly nuked in the microwave. Between the salt and the preservatives in both of these, they could also directly correlate to blood pressure and BMI issues.

The study did not compare the effect of long commutes where the commuter relied mostly on bus or train transportation. When commute times can be used to finish some last-minute work, catch up on news, or just relax and decompress while someone else ‘drives’, is the effect on blood pressure reduced?

Could more companies offer their best performers the ability to work from home one or two days a week? How much would this reduce blood pressure and encourage more healthy activity to keep employees fit and productive? (BTW, it would also reduce traffic congestion, reduce the need for more and bigger office complexes, and cut the cost of heating and air conditioning.)

Here’s to thinking (dreaming) outside of the box. C’ya next time.

Building a Basic SSAS Cube

Last time we started looking at how to use SSAS to load fact and dimension tables from the Contoso Retail Data Warehouse example.  We saw that, after only defining our data source and data source views, we could generate simple pivot tables from the raw data.  However, those pivot tables were limited to data from a single source table or view.  Sure, you could denormalize your data further so that a single fact table included all the measures and dimensions you needed to create a pivot table, but there are bigger problems, such as how to give end users access to the pivot tables and pivot charts without also giving them access to SSAS.  However, a discussion of user access will come later in this series.  First, let’s see how to create a proper cube in SSAS.

Let’s pick up the discussion from the point where we have defined the data source view.

We could immediately jump into creating a new cube by right-clicking on Cubes and then selecting New Cube from the popup menu.  This action invokes the Cube Wizard, which allows us to select which tables to use as fact tables and which tables to use as dimension tables.  While the wizard allows us to select the measures to include in the cube from the fact table, no such option exists for selecting the attributes from the dimension tables.  Rather, the wizard only selects the key field from each dimension table.  That does not provide any more information than the corresponding foreign key value in the fact table.  While you can go back and fix this, I generally ignore this wizard and build my dimensions manually.

Therefore, I would begin by right clicking on Dimensions in the Solution Explorer and selecting New Dimension.  Skipping the first screen of the Dimension Wizard, I would select the option to create the dimension from an existing table.  Note some of the other options available in this wizard related to time dimensions and non-time tables built with other dimensions.  I promise to explore these topics in a future blog.

After clicking Next, the Specify Source Information screen allows me to select the Data Source View.  Since I only have one view defined in this project, the default view is correct.  However, for the Main Table, I want to use the dropdown to select the first dimension table.  Note that I can even use fact tables as dimension tables.  A fact table used like this is called a degenerate dimension.  For now, I will ignore the fact table and go immediately to my first dimension table.  Note that just because a view includes a dimension table, that does not mean the current cube must use it.  For example, I could build a simple cube here using only the Channel and Date dimensions.  However, I must build one dimension at a time, so let’s build the first dimension on DimChannel.

Note that in addition to selecting the table, the wizard automatically selects the key column, ChannelKey, and lets you select a Name Column.  For this table, the obvious choice for the name column is ChannelName.  We will see a little later how the Channel Name will automatically appear in our cube when we drag the Channel Key into one of the dimensions.

The next page of the wizard allows you to select other attributes of the dimension to include.  You can choose any attribute individually by checking the checkbox to the left of the attribute name.  You can also select all of the attributes in the table by checking the checkbox to the left of the header Attribute Name at the top of the list.


The final page of the wizard lets you name the dimension.  By default, the wizard parses the name of the selected table and automatically puts a space before each capital letter after the first character to create a user-friendly name, as it assumes you are using Camel or Pascal case.
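The wizard’s renaming rule is easy to mimic.  Here is a one-line reconstruction in Python; the regex is my own approximation of the behavior, not SSAS’s actual code:

```python
import re

def friendly_name(table_name):
    # Insert a space before every capital letter that follows another
    # character, turning a Pascal-case table name into a display name.
    return re.sub(r"(?<=.)([A-Z])", r" \1", table_name)

print(friendly_name("DimChannel"))   # Dim Channel
print(friendly_name("FactSales"))    # Fact Sales
```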

When you click Finish, you should see the new dimension definition in the Solution Explorer under Dimensions.  Similarly, I would create a dimension for DimDate.  Before we move on to creating the cube using our manually defined dimensions, notice that after each dimension is created, a three-panel screen appears in the main portion of the window with the headers: Attributes, Hierarchies, and Data Source View.

Here we can build an attribute hierarchy.  For example, we know that years contain quarters which contain months.  We can build a hierarchy that represents this relationship by first dragging Calendar Year Label to the center column.  Then drag Calendar Quarter Label to a position immediately beneath Calendar Year Label.  Finally, drag Calendar Month Label beneath the other two fields.  By right clicking on the header: Hierarchy, we can rename our custom hierarchy to anything we like such as: YearMonth.
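Conceptually the hierarchy just nests each level inside the one above it.  A plain-Python sketch with invented sales figures shows the roll-up the cube browser will later do for us:

```python
from collections import defaultdict

# Hypothetical fact rows: (year, quarter, month, sales_amount).
facts = [
    ("2009", "Q1", "Jan", 100.0),
    ("2009", "Q1", "Feb", 150.0),
    ("2009", "Q2", "Apr", 200.0),
    ("2010", "Q1", "Jan", 120.0),
]

# Nest Month inside Quarter inside Year, summing the measure at the leaves.
hierarchy = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))
for year, quarter, month, amount in facts:
    hierarchy[year][quarter][month] += amount

# Rolling up a level is just summing the children of that level.
quarter_totals = {(y, q): sum(months.values())
                  for y, quarters in hierarchy.items()
                  for q, months in quarters.items()}
year_totals = {y: sum(t for (yy, _), t in quarter_totals.items() if yy == y)
               for y in hierarchy}
```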

Note that the tab, Dim Date.dim [Design], at the top of this screen section has an asterisk after the name.  This tells you that the changes you entered have not yet been saved.  You can either click the Save icon in your toolbar or you can right-click this tab and click Save Selected Items.

With our dimensions defined, right click on Cubes in Solution Explorer and select New Cube.  Skipping over the Welcome screen for the Cube Wizard, you can choose to build the cube using existing tables, build an empty cube, or generate tables in the data source.  Select Use Existing Tables and click Next.

On the next screen in the wizard, you need to select the Measure Group table.  In this example, the only measure group table is FactSales.


In the next screen, you can choose the measures to include in the cube.  By default, the wizard includes all the fields in the fact table as measures.  Often a fact table includes columns that you may not be interested in using as measures in the cube.  Therefore, you can deselect any of the measures that you do not think the users will need by deselecting the checkbox to the left of each measure name you want to remove.  Notice that the wizard automatically provides a new column that did not exist in the original fact table named: Fact Sales Count.  This measure contains a formula that displays a count of records when added to a cube.


The next screen in the wizard shows the names of the existing dimensions.  As mentioned before, you do not need to use every dimension defined for every cube.  In this case, only two dimensions appear because I only manually created those two and will use both in the definition of this cube.

The wizard also recognizes from the view definition that there are other tables for which potential dimensions have not been defined and offers to create new dimensions based on those tables as seen in the next figure.  In this case, I already deselected all of these additional dimensions to keep this example simple.

The final screen of the wizard shows the selection of measures in the fact table and the selected dimensions.  It suggests a name for the cube based on the name of the view used.  However, you can change the cube name.  If you use the same data source view to create multiple cubes, you must define unique cube names for each cube.

When you click Finish, the center working area shows panels that display the Measures, the Dimensions and a Data Source View.

Also, a series of tabs appears across the top of this section.  For now, the only other tab we will examine is the Browser tab.  However, before we can click this tab, we need to build our cube.  To build the cube, click Build ContosoPivot (the name of our solution) in the dropdown menu from the Build option in the main menu.  For a cube of this size, the build should complete in only a second or two.  You will see the text: Build Succeeded on the left side of the Status Bar when the build completes.  If an error occurs during the build, the Error List window will open and itemize the errors found.  Some errors might just be warnings, and the cube will still build.  However, you should always review warnings, not just errors.

After the build of the cube succeeds, you still need to deploy the cube before you can view it.  Again, go to the Build menu, but this time select Deploy ContosoPivot.  Because of the small size of this cube, the deploy step should complete relatively quickly.  However, for larger cubes with many dimensions, this process can take from a few seconds to a few minutes.  If the deploy is successful, you should see a panel in the lower right with the tab name Deployment Progress which displays a green Deployment Completed Successfully message or a red Deployment Failed message.  Again, the Error List will help you debug any errors that occur during the deploy process.

Now when you click on the Browser tab, you will see an empty view of the cube.  From the list of groups on the right side, you can add facts to the body of the cube and then define row and column dimensions from the data used when defining the cube.  In the final figure for this week, you see a cube that displays Sales Amount from the fact table as a function of the channel and date dimensions.  Note that I was able to drag Channel Key to define the rows, but instead of displaying an integer key value, the Channel Name appeared because I associated the Channel Name field with the Channel Key field when I defined the dimension.  Similarly, I used the YearMonth hierarchy that I defined in the date dimension to define my columns.  Note in the figure that I opened the levels so you can see that the levels Year -> Quarter -> Month were built correctly for me.

Next week, I’ll talk a little more about some of the formatting features for the cube and the use of filters.

Evaluating Competency

Today the State of Florida released the test scores for the 4th grade writing test.  Results were, to say the least, disappointing.  The number of students with a passing score (which means a 4 or higher on a 6-point scale) fell from 81 percent last year to 27 percent this year.  Everyone has their own theory on why this number fell so low, and I will not add or subtract from those theories.  However, reading the local Sentinel newspaper, I found several groups who did offer suggestions, and so I turned to my Magic 8 Ball to evaluate them.

The state made the scoring harder this year.  Fourth grade students are expected to use correct spelling and grammar and to present logical arguments backed with evidence from the reading.  Others might ask how many professional writers today could create books or even simple articles without the help of the built-in spell checkers and grammar checkers in programs like Microsoft Word and other similar word processors.  I admit that I depend on their help.  On the other hand, how important is it to think critically, analyze a situation, and answer questions backed with details from the story?  Since the state has not indicated at this time how much of the grade is based on spelling, grammar, or logical presentation, I asked the Magic 8 Ball how important these changes were and it said, “Ask again later.”

Some suggest that the state may have evaluated the writing too harshly considering they were grading fourth graders, not high school students.  One write-in reader offers the opinion that “parents don’t want to admit their kid can’t spell, or string a sentence together with any kind of cohesive thought.” I rolled the Magic 8 Ball across my desk and when it stopped, I read, “Reply hazy, try again.”

Another reader wrote in to suggest that maybe if students spent more time with their studies rather than texting or going on Facebook, they would have done better.  If texting and Facebook had just been introduced in the past year, he might have a point, but both have been around for several years.  Does that mean that time spent on texting and Facebook is not a factor, or is it that the effect is only kicking in now?  Again turning to the Magic 8 Ball, it hesitated before saying, “Cannot predict now.”

Will these results add more fuel to the fire for groups like FairTest who oppose standardized testing and will they call for a further investigation into the value of these testing methods?  I tossed the Magic 8 Ball into the air and when I caught it, it said, “Most likely.”

Finally, will the state somehow amend these test results so as not to skew the overall school results too much, lowering school ratings and leading to more ‘F’ schools and fewer ‘A’ schools?  I spun the Magic 8 Ball like a top, and when it finally stopped spinning, I saw the answer, “Outlook not so good.”

All kidding aside, this is a serious issue that will affect the future of these students.  Will they feel like failures, or will it encourage them to try harder?  With one last shake of the Magic 8 Ball, it replied in a hushed tone, “Better not tell you now.”

C’ya Saturday for a technical blog on building cubes in SSAS.

For those who want to read the Orlando Sentinel article, you can find it at: http://bit.ly/JXR1G7.

Pivot Tables in SSAS?

And you thought I was not going to talk about pivot tables for awhile…

This week I’m going to take a look at SQL Server 2012, specifically SQL Server Analysis Services (SSAS).  Since SSAS was introduced in SQL Server 2005, it has supported only a single mode of operation.  That mode used a multi-dimensional model for building cubes, which frankly many DBAs had trouble with because it encouraged denormalization of the data to flatten the structure, reduce the number of tables, and create a star or snowflake schema for building cubes.

Then Excel 2010 introduced PowerPivot, which used a new engine called VertiPaq that allowed users to build pivot tables and charts from relational tables without necessarily having to go through all of the denormalization that SSAS expected.  PowerPivot was introduced in two flavors, PowerPivot for Excel and PowerPivot for SharePoint.  Both are distributed by Microsoft free of charge.  Both relied on the new VertiPaq engine, but the SharePoint version also required SQL Server 2008 R2 because the SSAS engine was modified to support this new VertiPaq mode specifically for Excel spreadsheets published to SharePoint.

With SQL Server 2012, you can install SSAS using a third mode, the tabular mode, which allows you to build pivot tables directly in SSAS using the same VertiPaq engine used by PowerPivot for Excel.  So why should you care?  In Chapter 9 of Microsoft’s free ebook, Introducing Microsoft SQL Server 2012, the authors provide a table comparing the features of the three server modes for SSAS.  Rather than repeat that here, download the book at:  http://blogs.msdn.com/b/microsoft_press/archive/2012/03/15/free-ebook-introducing-microsoft-sql-server-2012.aspx.  There is a lot of other good information about the new SQL version there as well.

So onward to the practical demo for today.  I am going to show screens here using SQL Server 2008 R2 since many of you may not have converted to SQL 2012 yet.  Since the multi-dimensional part of SSAS has not changed in SQL Server 2012, the steps described here should work the same way.  I’m also going to use a Contoso data set which you can download from: http://www.microsoft.com/en-us/download/details.aspx?id=18279.

To begin a SSAS project, open SQL Server Business Intelligence Development Studio from the folder Microsoft SQL Server 2008 R2 in your Start menu.  Create a new project.  In the New Project dialog, select Business Intelligence Projects as the Project Type and Analysis Services Project as the Template.  In the figure below, you can see that I named the project ContosoPivot, but you can name your project whatever you like.

After you click OK, you will see the standard development environment that uses the Visual Studio shell.  Go to your Solution Explorer panel.  If you do not see a panel labeled Solution Explorer, open the View dropdown menu and click on Solution Explorer there.

First, you need to define your data source.  Right click Data Sources in your project in Solution Explorer and select New Data Source from the dropdown menu.

If you already have a connection defined, you can simply select it and click Finish.  However, if you do not have an existing data connection to the database you want to use, click the New button.

In the connection manager, make sure your Provider says: Native OLE DB\SQL Server Native Client 10.0, at least if you are using SQL Server 2008 R2.  Otherwise, use the dropdown arrow to display the available providers and hopefully you will find the correct one for your version of SQL.  Next, enter your server name.  If you are running this on your local machine, that is probably your machine name.  You can also use localhost, or you can use the dropdown arrow to scan for available database servers, but the fastest way is to manually enter your server name if you know it.  Next, you can specify the type of authentication and, if you are using SQL Server Authentication, the user name and password.  I prefer to use Windows Authentication for two reasons.  First, it is one less password to remember, and second, SQL authentication passes the user name and password over the network in clear text.  Finally, use the dropdown for the prompt: Select or enter a database name to select the database you want to use on the selected server.  These options appear in the figure below.

(Note: There is a button to test the connection.  You may want to do this to ensure that the authentication provided earlier has permission to access the database.)
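For reference, the choices you make in this dialog boil down to an OLE DB connection string along these lines.  The server and database names here are placeholders, and the exact provider token may differ on your machine:

```
Provider=SQLNCLI10.1;Data Source=MYSERVER;Initial Catalog=ContosoRetailDW;Integrated Security=SSPI;
```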

When you click the OK button, you will again see the Data Source Wizard, but now your new connection should appear in the Data Connections box on the left.  Select the data connection and click Next.  Here you can provide impersonation information that defines how SSAS connects to the data source.  You could use a custom account that limits rights to the database by using a specific Windows user name and password.  Keep in mind that you must give this user rights to the database through SSMS.  You can also use a service account, or you can use the credentials of the current user.

Click Next to finish defining the Data Source by giving it a name.  By default, the wizard uses the name of the database as the source name.  Usually this is acceptable.  Click the Finish button to complete the wizard.  You should now see your data source in the project with the extension .ds.  Now you need to create a Data Source View to define which tables you want to use.  Right-click on the Data Source Views folder in the Solution Explorer and select New Data Source View.

In the first screen after the Welcome screen, you can select one of the Relational Data Sources you created in the previous step.  You can also create a new data source here by clicking the New Data Source button.  Let us assume that the Data Source exists and you can just select it and click Next.

In the Select Tables and Views page of the wizard, you can select the tables you want to use in your cube.  Only tables you select to be part of your view can appear in the cube.  You do not need to include all of the tables in the selected database.  I prefer to select first the fact table I want to use which in this case is: FactSales.

I could manually select the dimension tables I wanted to use, or I could just select the fact table in the Included Objects list and then click the Add Related Tables button.  The wizard looks at the relations between the keys in the fact table and the other tables in the database to determine which tables to pull over to the Included Objects list.  If the wizard pulls over any tables I really do not care about for the pivot table I am trying to create, I can simply select them and use the left-pointing arrow button to send them back to the Available Objects list to wait for another opportunity to join a cube.

When I click Next, the wizard prompts me to provide a name for the view.  Again, by default, it uses the name of the Data Source, which is usually fine with me.

You will then see a diagram of your tables that shows our fact table FactSales as the center of a star schema.  We will come back to this diagram in future weeks to show how we can rename objects to more user-friendly names and create calculated columns.


First, let us right-click on the fact table header.  This opens a popup menu.  Select Explore data from the list of options.

SSAS now shows us a new window with a set of four tabs across the top: Table, Pivot Table, Chart, and Pivot Chart.  In the Table view, you can see all of the fields in the fact table.  In fact, this almost looks like an Excel spreadsheet of the data.

Next, click the Pivot Table tab.  This begins by displaying an empty pivot table with a list of the fields in the fact table in something that looks like the Excel Pivot Table Field List along the right side of the screen.  You can drag any of the fields into the table to display measures as a function of any of the other fields, such as the dimension keys.  In the figure below, you can see that I can display data by year and quarter.  In fact, there are several other date options available from the field list.  How did this data get there when the original fact table did not include these fields?  There was a field that contained a Date Key.  SSAS uses this field to automatically create some of the more common hierarchy elements for dates.  On the other hand, the Channel Key shown as the row dimension only contains four values, 1 through 4, which by themselves do not tell us much.  However, the point is that we were able to generate a simple pivot table directly from the fact table.
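A rough analogue of what the Pivot Table tab is doing can be sketched with pandas.  The fact rows and keys below are made up for illustration; the point is deriving a year/quarter hierarchy from a date field and then aggregating a measure across it, much as SSAS does automatically from the Date Key.

```python
import pandas as pd

# Hypothetical fact rows: a date field, a channel key, and a measure.
fact = pd.DataFrame({
    "DateKey": pd.to_datetime(["2011-01-15", "2011-04-20", "2011-05-05", "2011-01-30"]),
    "ChannelKey": [1, 2, 1, 2],
    "SalesAmount": [100.0, 200.0, 50.0, 25.0],
})

# Derive the year/quarter hierarchy from the date field.
fact["Year"] = fact["DateKey"].dt.year
fact["Quarter"] = fact["DateKey"].dt.quarter

# Pivot: channels as rows, the year/quarter hierarchy as columns.
pivot = fact.pivot_table(values="SalesAmount",
                         index="ChannelKey",
                         columns=["Year", "Quarter"],
                         aggfunc="sum")
print(pivot)
```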

The third tab is Chart.  Clicking on this tab displays simple counts of records by the selected fields from the fact table.  It picks several fields by default, but if you click the bar at the top of the charts (Currency Key, Store Key, Promotion Key, and Channel Key in the figure below), you can pick any of the fields from the fact table.  Note that for some fields, the chart groups values together because any individual chart will not display more than 10 columns.

Finally, the fourth tab is a pivot chart.  In the chart below, I am only displaying the sum of the Sales Amount by Channel Key.  I could add another field as a series by dragging one of the fields to the Drop Series Fields Here area on the right side of the chart.

Perhaps if we had further denormalized the fact table by adding the dimension labels into the fact table instead of having keys to link to dimension tables, we could have created the entire pivot table/chart analysis right here, captured the screen images and pasted them into a Word document as a report.  However, what I showed here today is merely a way to get a quick look at the data, not to create a final product.  To create a final product, you will want to take the next step and create a cube.  Next week I will continue this example by showing how to build that cube.

C’ya later.

Are iPads Toys or the Tool of the Future?

My thought for this week is not just about iPads.  It’s about any tablet-sized device.  However, I knew mentioning iPads in the title would grab your attention.  But I seriously want to address the similarities between what is happening today with the iPad and what happened years ago when Apple introduced its Apple II personal computer.

I started thinking about this after hearing from employees at several companies and organizations where management thinks of iPads, and tablets in general, as nothing more than a distracting toy that employees want to bring to work.  Perhaps they fear that employees will become ‘addicted’ to playing with their tablets and going on the internet rather than focusing on the work that they are supposed to be doing.  When I hear these comments, I cannot help but think of similar comments a few years ago when Apple introduced the first Apple II and Radio Shack tried to compete with the TRS-80.

Ok, it was more than a few years ago, but I remember a different management group saying that the Apple II and TRS-80 were nothing more than toys that could not do any real computing and therefore did not belong in the workplace where serious computing needed to take place.  We laugh at those days now, but having a personal device with 16 KB of memory was a big deal then, even though that is several orders of magnitude less than the memory of even the low-end iPad.  There were some of us who tried every trick possible to write serious applications in that limited space to prove that the device could be an asset to an engineer.  Big business also looked down on companies with weird names like Apple and later Lotus as proof that they were not serious contenders for a space in their Information departments.  This was a time that required large computer centers with special environment controls, raised floors, and highly trained technicians who carefully controlled access to the beast and fed it boxes of punch cards, reams of special color-bar paper and, of course, lots of electricity.

But slowly renegades started to bring these personal computers into the business world.  When IBM finally introduced its first PC, business sat up and at least paid attention, although many still thought that this was just a way to appease the rabble-rousers.  Early in the IBM PC’s life, I convinced my management of its value by developing a computer application to calculate the NPV (Net Present Value) of different boiler configurations.  In fact, we won additional money from our client to build out the final version.  But the point is that the PC took hold only as business applications were written for it.  And while it took years for the transition, eventually the personal computer, now made by many companies other than IBM, has taken over the corporate computing environment from the large mainframe computers, at least in the majority of companies.  Today, it is not uncommon to walk into a company and see a desktop or laptop computer on every employee’s desk.
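That boiler-configuration comparison boils down to discounting each option’s cash flows back to the present.  A minimal sketch of the calculation, with made-up numbers (the discount rate and cash flows are illustrative, not from the original project):

```python
def npv(rate, cashflows):
    """Net present value of a series of cash flows.

    cashflows[0] occurs now; cashflows[t] occurs t periods from now.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Illustrative comparison: an up-front cost followed by five years of
# annual savings, discounted at 8%.
config_a = [-10000, 3000, 3000, 3000, 3000, 3000]
config_b = [-15000, 4500, 4500, 4500, 4500, 4500]

print(round(npv(0.08, config_a), 2))  # positive, so config A pays off
print(round(npv(0.08, config_b), 2))  # larger NPV, so config B is preferred
```

With these invented numbers, both configurations have a positive NPV, and the one with the larger NPV wins the comparison.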

While the Apple II was not the first personal computer and Apple certainly did not lead the charge into the corporate environment, you can definitely say that the company was very instrumental in changing the face of computing from the grass-roots level.  However, it was business software that drove the ultimate acceptance.  The question I have for you today is whether Apple and the iPad represent the next revolution in personal computing, and if so, where is the driving software?

Just as the Apple II was not the first personal computer, the iPad was not the first tablet available to consumers.  However, like the Apple II, the iPad did catch the attention and imagination of the public.  Currently there are a lot of other competing tablets, and while Apple fans will say that the other brands just do not measure up to the iPad, I will grant that many of the competitors have individual features that are superior.  Also like the early days of the personal computer in the late 1970’s and early 1980’s, the available software tends to be more game oriented rather than rigorous business applications.  Maybe these business applications need a stronger platform that is just not there yet.

Therefore, I believe that we are waiting for the right hardware combination along with the right business productivity software that will transform the way people work in such a compelling way that personal tablet devices will replace traditional desktop and even laptop computers in the same way that early PCs replaced mainframes.  Perhaps some of the answers to these questions are in the cloud.  By that I mean applications that run in the cloud with data storage and access to a wide diversity of source information that can be mashed together to help businesses make better decisions.  While there are some very interesting applications for tablets even today, I don’t think we have seen that killer hardware/software combination that will lead the way in transforming data into business information.  In fact, I believe that we are still in the early pre-1982 days of exploring the potential of these devices.  On the other hand, I don’t think that future is very far away.  I only hope that when that magical combination of hardware and software arrives, I will recognize it early enough to take advantage of the wild ride it will offer.

Hiding Pivot Table Columns

Over the months, I have talked a lot about Pivot Tables because I believe that they are an awesome and free add-in for Microsoft Excel.  This article will be the last one for a while on Pivot Tables, so I want to cover a topic related to how to hide columns that you do not want the end users of the pivot table to see or use.

Method 1:

The first and easiest way to hide a column from a user is to never include it in the Pivot Table source tables in the first place.  When importing data from SQL Server, this is not only easy but is also highly recommended.   I’ve covered the basics on loading data from SQL Server in the past and will not repeat all those steps here.  However, I will pick up the process with the Table Import Wizard as shown in the following figure.  This figure appears after you have selected the connection (or created a new one) to a specific database in one of your SQL Servers.

Note that this figure shows a list of the tables in the database.  You can select the tables you want to import to your PowerPivot window by selecting the checkbox to the left of one or more of the tables.  Note that in the fifth column, you can provide a Friendly Name for the table rather than the default, which is the table name itself.  However, the last column, Filter Details, allows you to filter not only the rows that you import, but also the columns.  To define a filter for a table, click the Preview & Filter button in the bottom right of this dialog.

The screen that appears shows a grid of your data including the column names in the header and several rows of sample data.  Rather than just importing the entire table, you can choose which columns you want to keep and which columns you want to exclude.  By default, the import dialog assumes you want to import all the columns.  However, by clicking the checkbox to the left of the column name to remove the checkmark, you can exclude columns that you do not want.  In the following figure, I’ve excluded the columns: ProductSubcategoryDescription, ETLLoadID and LoadDate.  (Note, ProductSubcategoryDescription appears to be a duplicate of ProductSubcategoryName.)
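In pandas terms, the same keep-it-out-at-import idea looks like the sketch below.  The CSV snippet is an invented stand-in for the ProductSubcategory table; the callable passed to `usecols` plays the role of the unchecked boxes in the Preview & Filter dialog.

```python
import io
import pandas as pd

# A small stand-in for the ProductSubcategory table from the example.
csv = io.StringIO(
    "ProductSubcategoryKey,ProductSubcategoryName,ProductSubcategoryDescription,ETLLoadID,LoadDate\n"
    "1,Bikes,Bikes,42,2011-01-01\n"
    "2,Helmets,Helmets,42,2011-01-01\n"
)

# Exclude the unwanted columns at import time rather than after loading.
excluded = {"ProductSubcategoryDescription", "ETLLoadID", "LoadDate"}
table = pd.read_csv(csv, usecols=lambda name: name not in excluded)
print(list(table.columns))
```

The excluded columns never enter memory at all, which is the same reason the wizard approach is the fastest of the methods described here.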

Clicking OK returns me to the dialog page where I can continue to select other tables and filter them.  When I click the Finish button on the Table Import Wizard page, Excel PowerPivot begins the process of downloading the selected data to the PowerPivot Window.  After Excel loads the data, you can open a downloaded table by clicking on the tab displaying the table’s name.  As shown in the following figure, the downloaded table for ProductSubcategory does not contain the three fields specifically excluded.

This is the best way to eliminate data that you do not want to display to the end user of the Pivot Table, for at least four reasons:

  • Eliminating columns that you do not want increases the speed of the data import.
  • Fewer columns downloaded means a smaller table size for a given number of records.
  • A smaller table size directly correlates to better performance.
  • Finally, Excel PowerPivot tables have a 2GB file size limit.  The fewer columns you include, the more rows you can import.

Method 2:

Suppose you already imported your tables when you realize that several columns simply are not needed.  You can then use the Hide Columns option in the Design ribbon of the PowerPivot window to display a list of the fields in the current table (remember to first select the table you want to work with by clicking on its tab).  The resulting dialog, as shown below, displays all the columns in the table along with two columns of checkboxes.  The first column lets you select whether the field appears in the PowerPivot window.  The second column lets you select whether the field appears in the resulting Pivot Table Field List.

In this figure, I am hiding some columns in the PowerPivot window, but not in the Pivot Table itself.  Why do this?  When building calculated columns, you may remember from a few weeks back that you can either enter the name of the column you want to use in your expression or you can simply click in one of the rows of that column.  If I have a very wide table with lots of columns, I may come into this window, first hide all the columns with the (Select All) option, and then turn on just the columns I need.  Then I build my expression by clicking in the columns to select them rather than trying to make sure I spell their names correctly.  After the expression is built, I turn all of the columns back on.

On the other hand, you might want to hide columns from the PivotTable, but not the PowerPivot window.  You might do this because you need to see the column in the PowerPivot window so you can build expressions using those columns, but you want users to work with the calculated column only, not the columns that went into defining it.

For this example, I will simply eliminate the columns ProductSubcategoryKey and UpdateDate from the PivotTable, leaving only three active fields.  When I build a pivot table from this dataset, the PowerPivot Field List only includes those three fields, as shown in the following figure.

If you build PivotTables for end users or even to publish to SharePoint, this is an excellent way to include the columns in the PowerPivot window so you can create calculated columns, but then hide those unnecessary columns from the user, eliminating any confusion.

Method 3:

Suppose you have a pivot table like the one shown in the next figure.  Notice that the columns use a hierarchy that goes from year to quarter to month.  (To get the months in the correct order, remember to check out my blog entry on creating custom lists to sort data.)

Suppose you only want the first month and last month of each quarter (skipping the middle month).  You may not want to filter the imported data because in other Pivot tables or Pivot charts, you might need all the months.  Therefore, you need a way to filter the data in the output PowerPivot table.  You can do this with a Set.

With the PivotTable selected, open the Options ribbon and find the Fields, Items, & Sets option in the Calculations group of this ribbon as shown below.

When you click on this option, a dropdown menu appears that lets you define a set.  Basically, a set is a selection of rows or columns that you want to show.  In this case, we want to create a custom set of columns, so select Create Set Based on Column Items…

A dialog with all of the columns appears.  In this dialog, begin by giving this combination of columns (the set) a unique identifying name.  In the following figure, I named my set First_and_Last.  Then go through the list of columns, selecting the columns you want to get rid of and clicking the Delete Row button.  When you have finished, click the OK button.

When your PowerPivot table appears, you will see that the months of February and May do not appear in quarters 1 and 2, respectively.
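The effect of a First_and_Last set can be mimicked in pandas: given columns arranged as a quarter/month hierarchy, keep only the first and last month of each quarter.  The data and labels below are invented for illustration.

```python
import pandas as pd

# Hypothetical monthly totals under a quarter/month column hierarchy.
months = pd.MultiIndex.from_tuples(
    [(1, "Jan"), (1, "Feb"), (1, "Mar"), (2, "Apr"), (2, "May"), (2, "Jun")],
    names=["Quarter", "Month"])
sales = pd.DataFrame([[10, 20, 30, 40, 50, 60]], index=["Total"], columns=months)

cols = list(sales.columns)
# Keep a column if it is the first or the last month within its quarter.
first_and_last = [c for i, c in enumerate(cols)
                  if i == 0 or cols[i - 1][0] != c[0]                 # first of quarter
                  or i == len(cols) - 1 or cols[i + 1][0] != c[0]]    # last of quarter
subset = sales[first_and_last]
print([month for _, month in subset.columns])  # the middle months drop out
```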

Want to return to the table with all of the months again?  Simply look at the Column Labels box in your PowerPivot Field List.  Notice that the column label no longer shows the hierarchy Year, Quarter, Month as it did before, but now shows the name of the set you just created.  Remove the set name by dragging it out of the box and place the year, quarter, month hierarchy back in the Column Labels box.

You can define multiple sets, but each set must be based on the full set of columns and rows.  You cannot build a set based on another set.  Your sets appear as if they were a separate dimension table in the field list under the name Sets at the bottom of the list.  When you open Sets, you will see each of the sets you have created and can drag and drop them into the label pivot boxes as you choose.

Method 4:

Our discussion would not be complete without the easiest way to hide columns or rows, which Excel has supported for many versions.  Simply select the column(s) and right-click the selected column header to display the dropdown menu as shown below.

You can select the Hide option to hide the data.  However, remember that any observant user will notice the bold line separating the columns where the hidden column resides, or they may notice the skip in the column letters or row numbers along the border of the Excel spreadsheet.

Obviously this last method is not very secure.  But then the only really secure method is to never import columns into the PowerPivot window that you do not want the user to ‘discover’.

I hope you found this series on PowerPivot interesting.  I hope to return to it with new topics at a future time.  In the meantime, I’m going to explore some SQL Server features over the next several weeks.

C’ya later.

Is Innovation a Four Letter Word?

Is innovation a four-letter word where you work?  Are employees expected to do the same thing day in and day out with hardly a moment to stop and think about why they are doing something a particular way or whether there is a better, more efficient way to do it?  Several months ago I wrote a blog titled Too Busy to Think? (https://sharepointmike.wordpress.com/2011/12/06/too-busy-to-think-im-thinking-thats-bad/)  If you spend your entire day doing everything you can just to get your job done, and when you get home you just want to think about something else, anything else, how can you ever be creative and innovative?  Thomas Edison, probably the greatest inventor of our nation with 1,093 patents, once said, “Genius is one percent inspiration and ninety-nine percent perspiration.”  But I assure you that the perspiration did not come from continuing to do things the same old way.  Innovation needs to be part of your corporate plan, not an afterthought.

Does this mean that you should all ignore the daily tasks turning everyone loose to research and innovate?  No, of course not.  First, innovation is not for everyone.  Some people actually prefer coming into work each day knowing exactly what they will be doing and what management expects from them.  However, many employees can get really jazzed when given the opportunity to think outside of the box.

There is no one way to kick-start innovation at your organization, or any organization for that matter.  Over the years, I have worked at many places and have known people who have worked at far more.  Some were innovative, but many were not (and no longer exist.  Hmm, is there a correlation there?)  Here are some examples.

Back when I lived in Pennsylvania, a friend worked at a local utility company where employees were required to spend 2-4 hours each week learning something new.  If you were a manager or a manager wannabe, you could spend that time reading management books or listening to seminar tapes from the company’s well-stocked library on how to be a better manager.  If you wanted to learn a new computer language, resources in terms of books and equipment to ‘play’ on were also provided.  On the other hand, another company was so concerned about keeping its employees on task and never deviating from current project needs that when its market collapsed, the company had no direction, no plans, and eventually folded into bankruptcy.

Over the years, the approach that I found most interesting was in companies that formed innovation teams that included not just IT, but staff from other departments.  Most of the people in these teams were rotated in and out for some period of time ranging from a few months to a year.  However, some of the positions were permanent assignments for innovation leaders who facilitated each new group through their chosen innovation project.  The teams were encouraged to take a fresh look at a current problem.  Maybe a product was not selling well.  Maybe a software report took too long to run, or was too hard to understand, or printed across so many pages that no one wanted to look at it.  Whatever the issue, the team was guided by the innovation leaders to take a step back and first evaluate why the problem existed.  Then, through focused research, they jointly developed a new approach to solve the problem.  Not only did this approach often solve problems with applications, systems, or business processes, but it also helped unite teams across departments; whether they succeeded or not, these teams opened lines of dialog between previously siloed operations.

And speaking of whether an innovation team succeeds or fails, the more important point is to get employees from different departments excited about working together to solve problems rather than fighting each other for control over their own solutions.  This does not mean that you don’t care if the innovation teams fail.  Of course you would like all of the new ideas to be wildly successful.  But realistically, they will not be.  Failure of an innovative approach should not be criticized or punished.  Without failures, nothing can ever change.  I’ve read stories that the head of MTV would fire employees who did not fail enough.  In fact, Caesars expects a 50% failure rate, according to an article in a recent InformationWeek.  But it is not the failures that are the focus of why these companies operate the way they do.  Rather, it is the successes that would otherwise never have happened that are important.  Again, thinking of Thomas Edison, not everything he did was wildly successful.  However, if you innovate enough, you will surely hit on enough successful ideas to make it all worthwhile.

An important key to this constant-innovation approach is the ability to quickly recognize when something is not going to be successful and stop work on it as quickly as possible.  Never use previously spent time and money as a reason to continue to pursue a potential innovation that no longer shows promise.  I’ve heard companies say, “We cannot stop now after spending millions of dollars on this or that initiative.  The board will have a fit.”  The truth is, what is done is in the past and nothing can change that.  Focus instead on the potential of the future and whether the expected return still justifies the work to get there, or whether it is time to learn from what happened and look for a different solution.

One closing story on this topic that you may already be familiar with.  While I don’t personally know anyone who works at Google, the company has a reputation for new products and new services.  It seems like every couple of weeks they are introducing something new.  They keep up this pace by encouraging all of their engineers to spend 20% of their time on individual projects.  Many of these projects have been failures and have never even seen the light of day.  However, some of the products originally developed during employees’ 20% free time, like Gmail, have become core to Google’s success.

I hope you get the chance to innovate at your company.  At a time when the economy is pulling out of a recession, innovation can make or break a company.  Ask your manager if you can spend a little time innovating with other team members, maybe even from other departments on some new approaches that could help your company solve current problems or meet new needs.  It could give you a reason to want to go to work each day.