Monetary vs Fiscal Stimulus

Because I have an MBA (in addition to working with SQL Server and Power BI) and because I am interested in the financial analysis side of things, a few people have asked me to explain the difference between monetary and fiscal stimulus as the federal government fights to keep our economy from falling into a depression during the Coronavirus crisis. So, let me give it my best shot.

Monetary stimulus is often associated with changes by the U.S. Federal Reserve to the interest rates it charges banks for loans. The theory is that as the Federal Reserve lowers interest rates, banks in turn will make more money available to businesses, especially small businesses, at lower rates. If this occurs as expected, it may help keep the small businesses our country relies on from failing because they cannot otherwise pay their debts, their employees, and the cost of the materials they need to conduct business. The Federal Reserve, I believe, can now even buy corporate debt to help those businesses survive.

Fiscal stimulus, on the other hand, consists of tax cuts, unemployment benefits, credits, government spending on infrastructure (which we could probably use), and even direct payments to individuals and families, such as those checks many of you have recently received. This is money that goes directly to individuals, not businesses. It is an entirely different tool than monetary stimulus.

Still don’t see the difference? Think of it this way: fiscal stimulus is the direct transfer of money from the federal government to the public through one or more methods with no expectation of getting that money back. Yes, this increases our national debt and will someday, in some way, have to be repaid. If the economy soars, this can be done through normal fiscal methods without undue taxation of businesses and individuals, which could hurt the economy. Only the spending on infrastructure has some ‘public’ return in the form of better roads, repaired bridges, and updated water and sewer lines. The rest of the fiscal stimulus is simply a handout of money in the hope that people will pump it back into the economy to buy goods and services they might not have bought otherwise. If people just put it in their savings accounts, it will not help at all, which is why most fiscal stimulus plans phase out as individual or family income rises.
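As a back-of-the-envelope illustration of how such a phase-out works (the base amount, threshold, and rate below are made-up figures for the example, not a summary of any particular law), the payment shrinks gradually as income rises above a threshold:

```python
def stimulus_payment(income, base=1200.0, threshold=75000.0, rate=0.05):
    """Illustrative phase-out: the payment is reduced by 5 cents for
    every dollar of income above the threshold, down to zero."""
    reduction = max(0.0, income - threshold) * rate
    return max(0.0, base - reduction)

# Below the threshold the full amount is paid; above it, the payment
# tapers off and eventually reaches zero for high incomes.
full = stimulus_payment(50000)      # full payment
partial = stimulus_payment(80000)   # reduced payment
nothing = stimulus_payment(99000)   # fully phased out
```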

On the other hand, monetary stimulus does not simply hand out money. It is effectively money used for loans to businesses, which the government expects to be paid back at some time in the future. Even when the Federal Reserve buys corporate debt, it expects that debt to be repaid. Monetary stimulus also does not typically apply to individuals the way fiscal stimulus does.

So, I hope that helps you understand a little about what is going on in the news these days. This will eventually all pass and we can get back into Power BI features and capabilities.

By sharepointmike Posted in Finance

Yes, I Have Been Away for A While

It has been a while since my last blog post.  No, I did not drop off the edge of the world or get abducted by space aliens, although that might make for an interesting new blog.  I’m still a big fan of SharePoint and Power BI, perhaps more now than before, although my SharePoint activity has migrated from SharePoint on-premises to SharePoint Online, and even that has decreased due to my emphasis on web site accessibility challenges.  But the concepts I learned there are similar to those faced by Power BI and SharePoint users. By the way, I’ve seen some really awful SharePoint sites now that I’m aware of accessibility issues. Really, you think blue text on a black background looks good? More on that later.

So what do I mean by all this?  For the last year and a half, I have been deeply entrenched in learning all things ADA (Americans with Disabilities Act) and WCAG 2.1, not just for web sites, but for any kind of electronic information display shared with others. I want to share that information with you over the next few blog posts, maybe a few dozen posts, who knows. Making your Power BI reports, your web sites, and everything else you produce ADA compliant is a learning curve with many side trips. At first, most of us have difficulty getting our reports and sites compliant. I sympathize. However, over time, I’m hoping it will get easier, perhaps even second nature. I’ll probably even take a stab at what you can do within WordPress to create sites that are more accessible.

Anyway, I will be dropping my first post here on January 1, 2020 to start off the new year, and I hope to submit posts on a more regular basis going forward.  See you soon.

Know Your Audience

I found this post in an old folder; I wrote it two years ago but never published it. With a few changes, it is just as relevant today.

In my prior posts, I’ve focused on the technical issues of how to gather data into a Power BI Data Model, how to transform that data using calculated dimensions and measures, and how to display the data using the reporting and charting objects built into Power BI Desktop, and even a few provided by third-party tools. However, I need to discuss another important aspect of data analysis: determining the best way to present your analysis to different audience types.

The easiest way to classify audience types is by management levels. At the top level are the C-level executives. You might know them better as the CEO, CIO, CFO, COO, and probably several other TLAs (three letter acronyms) that represent the people that set the overall goals or direction of what your organization will be doing in the future. People at this level rely on well-designed charts to guide their decisions. This is not because they cannot interpret tables and matrices. Rather it is because they need to quickly grasp what the data is telling them and how it affects their decisions. This is best accomplished with charts often in the form of dashboards.

When a data analyst is asked to create a dashboard for their organization’s leaders, they may feel that they must somehow fit every piece of information into a single dashboard. That would be the wrong approach. The correct approach is to first determine what decisions leadership is trying to make and then determine what specific information they need to make those decisions. Each dashboard should then be created to address the information needed to make a single decision. That decision might be to expand the organization in one part of the country while shutting down operations in another, less profitable region. The decision might be to add new product or service lines, or to decrease the number of similar offerings within a category that are competing against each other rather than against the competition. Or perhaps leadership needs to evaluate different sources of raw materials, determine where a new facility should be located, or decide what type of advertising works best for different market segments. In any case, each dashboard must focus on a single issue and, if at all possible, completely address that issue within the confines of that single dashboard.

When it comes to Power BI, the dashboard capabilities exist to create compelling presentations that can be included in PowerPoint slides or displayed directly. The worst thing you can give managers at this level is a boring presentation consisting of table after table of numbers whose relevance to the decision is hard or even impossible to visualize. Each item on a dashboard may have the ability to drill down to a deeper level of detail, but rarely if ever will these managers need to drill down to the original data source level. Dashboards at this level should be limited to summarized data.

Beneath this top level of management is typically a middle layer of management and staff who develop the strategy for how to achieve the goals and implement the decisions of top management. This level also benefits from similar dashboards, with data that applies only to their area of concern. Interactivity in these dashboards to drill down and filter data is more important than it is for top management, but these dashboards still should not rely heavily on table and matrix reports except as a backup to verify the analysis or to explain outlier data. I would suggest that if management at this level regularly insists on getting full table and matrix reports of the data, they may not trust the summarized data prepared for them, or perhaps they just don’t know what the right questions are quite yet and still need to explore different data relationships. Some might refer to this activity as ‘playing with the data’ to determine which strategies make the most sense to achieve the overall organization goals. Keep in mind that once these strategies are finalized, the information defining which dimensions and measures are most important will be used to create the dashboards that guide top management’s decisions. The analysts at this level may be called the strategic data analysts because their focus is on determining what data is needed to support and define the organization’s strategy.

Finally, there are what I call the functional data analysts. While they may work in part from a set of requests for data needed by the strategic data analysts, they also spend considerable time gathering as much data as possible and building different data models to test different assumptions. Often these assumptions are based on their general knowledge of the business or service, but sometimes it is just a ‘game’ of playing one set of dimensions against another to see if there is any relationship that best predicts the observed measures. A tool like Power BI makes it relatively easy to test different assumptions to see if a pattern emerges that might predict the measures. Sometimes the best relationships in the data are discovered quite by accident, causing the data analyst to exclaim, “That’s funny!” Then they will proceed to adjust the dimensions to fine-tune the factors that best model the observed measures. This is what often occurs in the study of customer demographic effects on purchasing of different products or services in different parts of the country. For the functional data analyst, charts are still important in discovering those special relationships, but the tables and matrices that back up those charts take on increasing importance to verify that the observed effect was not an accident.

Ultimately, the source data needed at each of these levels may actually be the same. However, including all the detailed data in the data models provided to top-level managers may result in analysis that is slow to update. Top management does not need as much flexibility in filtering or drilling down through the data using different dimensions. The strategic managers and data analysts have already determined which dimensions they want top management to focus on and can provide dashboards based on summary data for those dimensions rather than detailed data.

Similarly, strategic managers and data analysts do not need all of the data collected by their functional peers. In fact, the functional managers and data analysts should have already weeded out dimensions that do not have a direct impact on the observed measures. They will have reduced the data model by removing non-essential columns, such as a store’s phone number, and perhaps even summarized the data slightly, for example by including only daily sales rather than the details of each individual sale.
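To make that reduction concrete, here is a small Python sketch (the store records and column names are hypothetical, standing in for whatever your source system holds) of dropping a non-essential column and rolling individual sales up to daily totals:

```python
from collections import defaultdict
from datetime import date

# Hypothetical detail-level sale records, including a non-essential
# column (the store phone number) that no measure depends on.
sales = [
    {"store": "A", "phone": "555-0100", "day": date(2020, 1, 1), "amount": 10.0},
    {"store": "A", "phone": "555-0100", "day": date(2020, 1, 1), "amount": 5.0},
    {"store": "B", "phone": "555-0101", "day": date(2020, 1, 1), "amount": 7.5},
]

# Summarize for the strategic level: drop the phone column entirely and
# collapse individual sales into one row per store per day.
daily = defaultdict(float)
for s in sales:
    daily[(s["store"], s["day"])] += s["amount"]

summary = [{"store": store, "day": day, "amount": amt}
           for (store, day), amt in sorted(daily.items())]
```

The summarized table carries everything the decision needs (store, day, total) while shedding both the row count and the irrelevant columns.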

Think about prices on the stock market, for example. Watching the second-by-second price fluctuations of a stock can be distracting when deciding whether to buy or sell it. In fact, looking at the price trend first by hour, then by day, then by month, and finally by year can lead to different decisions at each point along the way. If you are a day trader, you probably need to watch the fluctuations of a stock at a finer interval. But if you are a long-term investor, the decision on whether to buy or sell a stock depends on its longer-term trend. Of course, whether you are buying or selling, once you have made that decision, you would probably be best served by watching daily fluctuations or even minute-by-minute changes to time the exact moment to initiate your trade order.

Getting back to a business scenario, we see that there are several trends in the style of data analysis that you may need to consider. Data analysis performed for higher management levels requires that:

  • the data model must eliminate all irrelevant dimensions.
  • the data model must be increasingly summarized.
  • dashboards and individual charts should have less interactivity focusing only on relevant dimensions that affect the decision.
  • the presentation layer must become increasingly graphical with data presented at a high but focused level to drive home the point they need to make at a glance.
  • the need for tables and matrices decreases and can be distracting, like the noise of second-by-second stock price fluctuations.
  • the presentation should focus on the one or two points it is trying to make without introducing side issues and should fit on a single screen/slide with no scrolling horizontally or vertically.
  • at the highest level, the presentation must stand on its own without support from the data analyst to explain it.

It should be obvious by now that all of these goals cannot be met with a single Data Model, much less a single set of reports and dashboards. You must customize the Data Model and the presentation of the data for each audience level. Of course, one way to do this is to base each higher-level Data Model on the Data Model at the next lower level. This method provides continuity. It also provides a single upgrade path as more recent data flows from the lowest and most detailed levels up to the highest and most summarized levels.

Now one last point. This cannot be easily accomplished by using just the Power BI Desktop version. At the lowest analysis level, Power BI Desktop may be the best way to build the initial data models and ‘play’ with the possible relationships of different dimensions and measures. Depending on the amount of detailed data, this may even require the data to be stored in Analysis Services either locally or within Azure. Keep in mind that calculated columns and measures will update faster in Analysis Services than if stored in local data. But even then, some data analysts prefer to use a data sample in Power BI Desktop to get a ‘feel’ for the data and to explore different visualizations of the data. Only then will they build the final model in Analysis Services with Power BI or Power BI Premium.

While Azure with Power BI Premium might be where your organization ultimately needs to go, don’t be afraid to start small by using Power BI Desktop. Gain acceptance and recognition for what data analysis can provide at each audience level. Build confidence and complexity over time. But get started and do it. It is better to succeed through a series of smaller steps than to fail attempting to immediately implement your grand vision or even worse, to never start at all.

What Use Are OneNote Tags?

To those of you who have wondered why there have been no new posts to this blog for many weeks, rest assured that I will be trying to get back on track again. It has been an eventful two years with many job changes that I will not go into here. However, from a technical point of view, I have also used that time to learn more about some of the features of O365 as well as Power BI, particularly related to ADA compliance issues and just overall better design methods for reports and dashboards. I’ll get to all of that eventually. However, first I wanted to tell you about a hidden gem in OneNote. Okay, maybe not hidden so much as underutilized, at least by most people. That hidden gem is tags within OneNote.

Although I have been using OneNote for several years, I have to admit that initially I did not think of tags as much more than decoration on my OneNote pages. I’ve used the To Do square tag when making lists of things I needed to do. The cool thing about it was that when I first added it to an item in a list, the square would appear unchecked, and by simply clicking on the square, I could toggle between unchecked, checked, and back again. I also would star important items in a page of notes to indicate the most important points so that I could find them easily enough 6 months (or even 6 weeks) later. The same thought applied to the Question Mark tag, which I would use to mark thoughts I needed to research further or follow up on with others. I might have used a few other tags over the years, but I really did not think much about them other than as visual markers of where certain types of information appeared on a page.

I hadn’t even noticed that you could define your own custom tags, perhaps because I rarely scrolled all of the way to the bottom of the list of available tags. However, last year I discovered this feature. It was the first step on my path of using tags more productively. Let me first show you how easy it is to create a custom tag.

You begin by opening the tag list by clicking on the bottom box on the right side of the tag list which is commonly referred to as the More button. (The Tags group can be found in the Home ribbon of OneNote if you are having trouble finding it.)

Scrolling to the bottom of the list, you will find the option: Custom Tags… Clicking on this option displays the Customize Tags dialog box shown below.

To create a new tag, click on the New Tag button as shown above to display the New Tag dialog.

The first thing you must supply is the Display name, or the Tag name if you prefer. This name should be unique among the existing list of available tags. This is important because the symbol, font color, and highlight color do not have to be unique among all the defined tags.

After specifying a name, symbol, font color, and highlight color, click the OK button at the bottom of the dialog. This action adds your new tag to the top of the list and gives it the keyboard shortcut Ctrl+1. You can reorder the list so that the tags you use most appear near the top. You could also order the tags alphabetically, but that is not as practical. Why? First, the further down the list you have to look, the harder a tag is to find. Second, the first 9 tags are given shortcut keys (Ctrl+1 to Ctrl+9) beginning at the top of the list. Note, however, that the shortcut keys are not really a function of the tag, but of the tag’s position in the list. You can click on a tag in the Customize Tags dialog and then click the up or down arrow keys to the right of the list to change the tag order. Notice that as you move a tag up or down, the shortcut key for that tag (as well as the one it jumped over) changes to keep the order of the shortcuts fixed from the top of the list.
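A tiny Python sketch (a model of the behavior described above, not OneNote’s actual code, and with made-up tag names) shows how the shortcut follows the list position rather than the tag itself:

```python
def shortcuts(tags):
    """Assign Ctrl+1 through Ctrl+9 to the first nine tags by position."""
    return {tag: f"Ctrl+{i}" for i, tag in enumerate(tags[:9], start=1)}

tags = ["Link", "To Do", "Important", "Question"]
before = shortcuts(tags)            # "Link" holds Ctrl+1

# Move "Link" down one position: the shortcut stays with the slot,
# so "Link" now gets Ctrl+2 and "To Do" inherits Ctrl+1.
tags[0], tags[1] = tags[1], tags[0]
after = shortcuts(tags)
```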

You can see in the image below that the new tag I created with the name Link which began with the shortcut Ctrl+1 when it was first added was changed to Ctrl+2 when I moved it down to the second position in the list.

When you close the Customize Tags dialog by clicking the OK button, the tag order and the shortcut keys are set as shown in the Customize Tags dialog.

Now, to use the tags, you can select the text on the notebook page and select the tag you want from the Tags list, or if the tag is one of the first nine tags from the top, you can use its shortcut key combination, such as Ctrl+2 for the Link tag. You don’t even have to select the text. If you select a tag, it will appear at the beginning of the current line for a single-line note or at the beginning of the paragraph for a multi-line note. Even if the information you want to tag is in the middle of a line, the tag appears at the beginning of the line, not just to the left of the selected text. As an example, look at the image below, in which I selected the site URL and then pressed Ctrl+2 to add the Link tag.

Any line or paragraph can be tagged with one or more of the available tags. If you have more than a single tag, they all appear to the left of the text. After tagging items on your page, you should easily be able to spot the important information on any page of your OneNote notebook.

For a long time, that is all I thought you could do with tags. But that does not showcase the real power of tags. To be fair, I guess the real power of tags does not become apparent until you have dozens of pages and subpages within multiple tabs within one or more notebooks, much like the notebooks in your office. Let’s say you have 50 to 100 pages of notes within a notebook, perhaps representing different projects, different meetings, different notes, etc. How would you begin to find all of your To Do items, or all of the links, or all of the phone numbers you have stored? One way, of course, would be to perform a search on a unique word, portion of a word, or even a phrase. In fact, Search allows you to find a unique string of characters across multiple pages in multiple tabs across multiple notebooks that you have open.

That is pretty powerful, but you must know the unique string you want to search for and there is no way to limit the search to perhaps just the pages within the current tab or to perform the search in such a way as to find all of the links or phone numbers at once. Furthermore, what if you do not remember a unique string or what if the string might appear in multiple places in different contexts? Or what if you wanted to find all the links in the current notebook? In these cases, Search may not be the right tool.

To the right of the tag dropdown list on the Home ribbon is an option called Find Tags. When you click this option, a panel appears along the right side of the screen titled Tags Summary, which lists all of your tags grouped by tag name by default (see why the tag name is important?). An example of this panel is shown in the next image. To jump to the page for any of these tags, just click on the label associated with the tag in the Tags Summary panel.

However, if you have used a large number of tags to tag a large number of items on multiple pages and tabs within your notebook or notebooks, this list can be rather large (don’t worry, OneNote will automatically create a scrolling list), and you may want to click on the option at the bottom of this panel to create a summary page, which adds another page to the notebook listing all the tags used within it. Unfortunately, you cannot click on the labels associated with the tags here to go to the page where the tag was used, as you can in the Tags Summary panel. But all is not lost. If you hover over any of the tags in this summary list, you will see a small OneNote icon to the left of the entry. Simply slide your cursor over and click on this icon to go to the referenced page.

Now if your notebook is as active as mine, the Summary Page of tags will soon become obsolete as you add more content and tags. You might have noticed the other button at the bottom of the Tags Summary panel that says Refresh Results. You could click on this button. However, beware that if you created a Summary Page, the tags found there will be repeated, resulting in many duplicate tag references. Not good! If you want to refresh the results, you need to first delete the existing Summary Page, click Refresh Results, and then create a new Summary Page.

Another way you can limit the size of the Tags Summary details is to use the Search option found just above the Refresh Results button. This option lets you select the scope of the summary from a page group up through all the notebooks you have open (The default is the current notebook). You can also specify which tags to display based on their age from today’s tags to yesterday’s tags, this week’s tags, last week’s tags, or older tags.

Before I end this blog post, let me give you one last hint on using tags. Suppose you have a notebook with hundreds of individual subpages, pages, and tabs. How can you find the page you want if you know the name of the page but not which tab it is under? Why not use a tag to tag the page names themselves? Then, when you create a Summary Page, the section of the summary for that special tag, used only for page names, becomes a table of contents with the pages listed alphabetically. Just find the page you want and click on its name to go to that page.

And if you like generating a Table of Contents of your pages within OneNote, think about creating a Table of References, a Table of Definitions, a Contact List, etc. There is no limit to the imaginative ways you can use tags to organize your OneNote notebooks.

The Power of the Power BI Query Editor

The power of the Query Editor in Power BI goes beyond loading data from a variety of sources, transforming that data to meet your analysis needs, creating new columns of calculated results, and defining special views of the data. One of the things that impresses me is that it remembers all of the steps you applied to get your data ready for use and can rerun those steps the next time you reload raw data from your data source. So before we roll into a new year and begin looking at Power BI Desktop’s big brother, let’s take a look at the Power BI Desktop Query Editor.

In the past, when I first opened Power BI, I would immediately select the Get Data button from the Home ribbon. What if, instead, I immediately clicked on the Edit Queries button in the External Data group?

The Query Editor opens its own window with 4 tabs: Home, Transform, Add Column, and View. Looking at the Home ribbon, you should note that many of the most commonly used actions appear on this menu as well as on the other ribbons. At this time, there does not appear to be a way to customize the actions that appear on these ribbons, but as with many things Power BI, that may come in time.

The first thing I need to do is open a data set to work with. When I click on Get Data, I get the same dropdown of data source options that I would see from the Home ribbon of the Power BI Desktop screen.

In this case, I am going to load my standard Contoso dataset, including the FactSales table with related product and date data. I will reference my localhost SQL Server instance; of course, you may have to reference a network SQL Server. I generally do not enter the database name on the first dialog, but instead select the database from the second screen. Note that on the second screen, I can just click the arrow to the left of the database name to open the database and display a list of the tables within it. This is, in my mind, a lot easier than spelling the name of the database correctly before selecting the tables I want to work with.

Because I am using a SQL Server database, I have the option of either importing the data or using DirectQuery which does not download a copy of the data to my local data model, but rather reads the data directly from the SQL Server instance.

Because I selected five tables from the database, the Query Editor lists five queries along the left side of the screen, a preview of the data in the center, and the query properties and any steps applied to the data on the right side of the screen.

I can click on any of the five queries to switch to the one that I want to work on. In the above image, I selected the DimDate query. Now suppose that I want to make some changes to this data. Perhaps the first thing I want to do is eliminate all the fiscal calendar fields from the table. One way to do this is to select the columns I want to remove. I can click on the first column and then, while pressing the CTRL key, click on each of the other columns I want to remove. Alternatively, if the columns I want to remove are sequential, I can click on the first column, then hold the SHIFT key and click on the column header of the last column.

Still on the Home ribbon of the Query Editor, I can click on the Remove Columns button to get rid of the columns I do not want to keep in my final model. Notice that as I do this, another step appears in the Applied Steps section in the right column.

Next, I want to add a new column to the date table that I will use to sort the months chronologically rather than alphabetically. That column will simply consist of the month number (January = 1, February = 2, etc.). I can do this by clicking on the Add Column tab and selecting Add Custom Column from the General group. Note the option Add Index Column. This option adds a new column with a sequential number for each row, beginning with either ‘0’ or ‘1’. You can also add a custom index that begins with any number and has a custom increment other than ‘1’. This could be an interesting way to add a surrogate key to a table when you have to merge data from two different sources (maybe domestic and international) into a single table for reporting purposes and still need to create a ‘new’ index, because the domestic index and the international index may have overlapping values.
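The surrogate-key idea behind that custom index can be sketched in Python (the domestic and international rows below are hypothetical; the Query Editor itself would do this with its index column, not this code):

```python
from itertools import count

# Hypothetical: two sources whose local IDs overlap (both start at 1).
domestic = [{"id": 1, "region": "US"}, {"id": 2, "region": "US"}]
international = [{"id": 1, "region": "EU"}, {"id": 2, "region": "APAC"}]

# Assign a fresh surrogate key as the rows are appended, starting at
# 1000 and incrementing by 10, mirroring the custom start/increment
# the Add Index Column option allows.
key = count(start=1000, step=10)
merged = [dict(row, surrogate_key=next(key)) for row in domestic + international]
```

Even though the original `id` values collide across the two sources, every merged row now carries a unique key.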

When I choose to add a custom column, the Query Editor displays a dialog that lets me define the name for the column as well as the formula. All of that is well and good, but the Query Editor provides an easier solution. Instead of clicking Add Custom Column, note that the Add Column ribbon has sections for working with text, numbers, and dates. In this case, all I need to do is select the [DateKey] column and then click on the lower portion of the Date button to display the menu of options. If I select Month, I get a second set of options for different month-related calculations. The first option creates the new column and calculates the month number, while the other options return the first day of the month or the last day of the month as a date, or the number of days in the month as an integer. That is pretty cool too.
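In Python terms (standing in for the Query Editor’s own M formulas, with a small made-up date table), deriving a month number from a date key and using it to sort month names chronologically looks like this:

```python
from datetime import date

# A few hypothetical DateKey values, deliberately out of month order.
dates = [date(2020, 3, 1), date(2020, 1, 15), date(2020, 12, 31)]

# Derive MonthName and MonthNumber columns from the date key, the way
# the Date > Month option adds a month-number column.
rows = [{"DateKey": d, "MonthName": d.strftime("%B"), "MonthNumber": d.month}
        for d in dates]

# Sorting by MonthNumber yields chronological order; sorting by
# MonthName would give the alphabetical order we want to avoid.
chronological = [r["MonthName"] for r in sorted(rows, key=lambda r: r["MonthNumber"])]
```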

Now the default name for the column is just Month. However, I can easily change that by right clicking on the column header and selecting Rename from the dropdown menu and entering the column name I want.

As I said before, each of these steps has been recorded in the Applied Steps section on the right of the screen. If I make a mistake or want to try something different, all I need do is click the ‘X‘ to the left of the step name to remove that step and then try something else. Remember that I am only working with a preview of the data at this point. I have not downloaded the data yet.

I could proceed with other data changes for this query as well as the other four queries, but I hope you get the idea that you can perform any transformation, create new columns, etc. that you may want. I will come back in the future to discuss some of the other features of the Query Editor that could save you time.

Let’s assume that I am done making changes to my data. It is now time to load the data from SQL Server into my model. From the Home ribbon of the Query Editor, I click the button: Close & Apply.

This will close the Query Editor window and begin loading the data from SQL Server into my local in-memory data model using the transformations in the applied steps of each query. This is the secret to minimizing the data you bring into your model from a data source. Any excess columns will be removed. You can even get rid of specific rows (like we did in PowerPivot to eliminate some of the product categories when we loaded data many weeks ago). You can also create new columns, rename new or existing columns, and basically structure the data the way you want to use it. All of these transformations happen as the data is loaded into your local data model. Then, each time you refresh your data, these same query steps are performed consistently.
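Conceptually, the Applied Steps pane behaves like a recorded list of transformations replayed against fresh source data on every refresh. Here is a minimal Python sketch of that idea (with hypothetical column names; the Query Editor records M steps, not Python functions):

```python
from datetime import date

def remove_columns(rows, cols):
    """Drop unwanted columns, like the Remove Columns step."""
    return [{k: v for k, v in r.items() if k not in cols} for r in rows]

def add_month_number(rows):
    """Derive a MonthNumber column, like the Date > Month step."""
    return [dict(r, MonthNumber=r["DateKey"].month) for r in rows]

# The recorded "applied steps", kept in order.
steps = [
    lambda rows: remove_columns(rows, {"FiscalQuarter"}),
    add_month_number,
]

def refresh(source_rows):
    # Replay every recorded step, in order, against fresh source data.
    for step in steps:
        source_rows = step(source_rows)
    return source_rows

rows = refresh([{"DateKey": date(2020, 5, 1), "FiscalQuarter": "FQ3"}])
```

Each refresh starts from the raw rows and reapplies the same pipeline, which is why the transformations stay consistent from load to load.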

Well, it is the beginning of a new year, at least a new calendar year. I think it is time that we move on to the big brother of Power BI next time, so that we can show how and when each tool should be used and how they can work together.

C’ya next time.

Update to Power BI Desktop

On September 23, 2015, Microsoft released an upgrade to the Power BI Desktop that includes 44 new features. If you are following along with my blog entries on Power BI, you might want to download this release before reading my next posting. You can get the new version from Microsoft (it’s free, so what are you waiting for?). The new version number is 2.27.4163.351.

This article includes details on all of the new features, along with images that help explain how to use them.

SharePoint Governance and The Balance

For many, SharePoint Governance is a document, a contract, between the people who support SharePoint and the rest of the organization that sets forth:

  • policies,
  • rules,
  • roles,
  • and responsibilities

of the system. Failure to manage these four areas could jeopardize the success of SharePoint in the organization.

These four areas, like the four legs of a chair, provide a stable platform on which to build an organization’s SharePoint environment. SharePoint governance can assign these four areas to the following four groups within the organization.

  • Operational Management: This group defines the roles and responsibilities of those who are ultimately responsible for the SharePoint portals within the organization. This group can consist of a governance committee or simply a few of the organization’s top executives. They identify the overall features of SharePoint that will be used within the organization. Effectively, this group defines the policies related to SharePoint.
  • Technical Operations: This group defines the technical structures of how SharePoint will be deployed, any software and hardware requirements, specific features to activate, uptime availability, backups, authentication, and which classes of users can access different elements of the portal effectively defining internal and external sites. These activities largely define the rules around the SharePoint implementation.
  • Site and Security Administration: This group is responsible for the creation and destruction of sites as needed along with defining site ownership and the corresponding responsibilities of different user groups within each site or class of sites. They define best practices on defining permissions and provide support on how best to organize site collections. Security within SharePoint is established by the individual’s role within the site.
  • Content Administration: This final group defines the nitty-gritty details of how to load and display content within the site. It is responsible for creating guidelines for the use of content types, workflows, metadata, and various web parts to achieve content goals. They may also help determine life-cycles for content retention policies and policies used to enforce the archiving and deleting of older content. This group identifies and assists users with their responsibilities for building and maintaining sites.

However, failure of SharePoint to succeed because one or more legs of that governance chair are not stable is not indicative of an inherent problem with SharePoint. In fact, failure to create and then follow the governance policies, rules, or recommendations is more an indication of a failure of the organization. If an organization cannot create a governance document that manages SharePoint usage, that is indicative of a greater potential problem, one in which top management may not support the use of the tool or understand its needs or benefits in the first place. This lack of support could be an early warning sign that the project may not be valued within the organization.

Even with governance in place for SharePoint, or for any other product, it remains only a paper (or electronic) document unless management establishes an infrastructure to enforce it. Once policies, rules, roles, or responsibilities start to be bent or ignored in small ways, it is a slippery slope to the point where everyone ignores the governance document and chaos begins to take over. It may not be long before top management begins looking for a new solution, one that will magically cure all the current perceived problems. It may not occur to them that simply enforcing the original governance would alleviate most if not all of the current problems. On the other hand, enforcing strict standards in the name of governance is like putting blinders on a horse: it could prevent the organization from discovering that there are better tools and better ways of doing things.

Furthermore, do not interpret governance as limiting when or if an organization can switch tools or processes. Switching tools should always be possible, especially if another product with significantly better features or improved functionality becomes available. Governance does not address the question of when a tool or process becomes obsolete. It merely addresses how to use that tool while it is in use.

At the same time that governance defines specific actions or activities, it should be a living document that can change over time to satisfy new demands. If those demands can be met by making small changes to the product or the way it is used, the overall cost of meeting the organization’s needs will be minimized. Thought of another way, governance is nothing more than a roadmap by which the organization can achieve the maximum benefit from a process or tool while minimizing the costs. It keeps everyone moving in the same direction rather than letting everyone go off in different directions doing their own thing. Governance that is too strict can strangle an organization’s ability to adapt and create new solutions to problems. Governance that is too loose will prevent directed and organized progress toward a goal.

A balanced governance approach can be in everyone’s best interest but can be difficult to obtain.

Pulling It All Together with the Site Aggregator Web Part

We saw last time how I can easily see all the documents in the current site that I last modified or created with the Relevant Documents web part. However, what do I do if I want to see all of my documents in any one of several different sites? Do I have to navigate to each of these sites and open a page with a Relevant Documents web part?

Fortunately, there is an easier way! The Site Aggregator web part allows me to view my documents stored in any number of sites from a single place, sort of.

After reading my last blog article, I’m going to assume you know how to add a web part onto a page in your site. (If you skipped that blog, you can always go back to it.) As with the Relevant Documents web part, the Site Aggregator can be found in the Content Rollup category for both SharePoint 2010 and SharePoint 2013. After adding the web part to the page, it looks something like the following:

At first, I may be puzzled by the text telling me to click on the “Add New Tab” icon. The first thing I should know is that each site that I want to pull documents from will be displayed separately and that I must choose the site I want to view by clicking on a tab/link across the top of the web part. To add a new tab, I need to click on the icon that appears in the top right of the Site Aggregator that looks like a drive icon with a yellow asterisk in its upper right corner.

This button displays a dialog that lets me enter the name of the site that I want to view. For example, the following figure shows a reference to a demonstration school site. Note that the URL does not point to a specific page. Rather it is the URL of the site only. Also notice that the URL must end with a slash ‘/’. The second property in this dialog is the name that will appear in the tab/link across the top of the Site Aggregator.

When I click the Create button, the Site Aggregator shows the contents of all libraries in the selected site and lets me click on a document name to open the document directly, or click on its location to go to the document’s library.

So far, that works pretty much like the Relevant Documents web part. The feature that makes this web part different is that I can click on Add New Tab to add the URL of a different site. In fact, I can add several new tabs, as shown in the image below, which includes separate tabs to view the documents found in each of the individual grade sub-sites for this virtual school.

Notice how the tabs/links can actually require more horizontal space than the size of the page. When this occurs, double angle brackets appear at the beginning and end of the row to allow me to horizontally scroll through the tabs. I can also use the down pointing arrow to the right of the Add New Tab button to open a dropdown menu of the available tabs.

If I open the web part properties as described for the Relevant Documents web part, I will see the properties that I can modify for this web part. As before, I may want to change the title for the web part that appears at the top of the web part.

Two properties unique to this web part appear in the View and URL groups. The View group has a single property that lets me control the number of characters that appear in the tab/link before ellipses replace the balance of the characters. I believe that in SharePoint 2010, I must allocate 2 of these characters to the ellipses to determine the actual number of characters displayed. For example, a value of 10 allows for 8 characters plus the ellipses. In SharePoint 2013, this property seems to be ignored on my test site. But that may just be my site. What do you get?

The URL group prompts me for a character string that it will add to the URL provided when I define a new tab to specify exactly what is returned by the web part. The default string: _layouts/MyInfo.aspx uses a predefined view that displays content from the site library that shows documents that I modified or created.

However, it also appears possible to replace this string with others. For example, I could enter the string: _layouts/SiteManager.aspx.

This string opens the Site Content and Structure view which displays all the documents in all the libraries for the site.

Note that I can navigate to other sites as well as the current site by using the leftmost panel and then by selecting different views, quickly determine which documents I have checked out, have modified, are pending approval, or are still in draft mode.

In future months, I may examine some of the other lesser used web parts and explore their use.


Lesser Used Web Parts of SharePoint

This week I’m beginning my summer break from BI and returning to SharePoint to look at some of what I like to call the lesser used web parts. Some of the web parts I will discuss over the next several weeks did not exist in the original SharePoint 2007 (original for me because that is when I started using SharePoint). Some might not even have existed in SharePoint 2010, appearing only in SharePoint 2013. So depending on which version of SharePoint your site currently runs, you may or may not see some of the web parts I will describe. However, I will try to tell you whether each web part existed in SharePoint 2010 and/or SharePoint 2013. Some might even appear within different categories of web parts, because Microsoft chose to regroup some web parts between 2010 and 2013. I will try to let you know that too. With that in mind, let’s begin with a web part that did exist in both SharePoint 2010 and SharePoint 2013:

Relevant Documents Web Part

Often the number of documents in a site becomes overwhelmingly large, and finding the documents I worked on can be quite a challenge. This is especially true of collaboration sites. The Relevant Documents web part, which exists in both SharePoint 2010 and SharePoint 2013, helps me find what I want. Furthermore, my site does not need a custom view or custom page to display the documents relevant to each person who has edit rights to the site. This web part automatically detects the currently logged-in user and filters the documents returned to that user. I don’t even need to know which library to search in, because this web part searches across all libraries in the current site (but not subsites). That means it returns not only documents from the document libraries in the site, regardless of the library names, but also items from image libraries and page libraries. Let’s see an example.

The Relevant Documents web part, like all web parts, must be hosted within a page. Therefore, I must first either create the page I want to use or navigate to an existing page.

Next I edit the page. Depending on the version of SharePoint, the Edit this page option may either appear in the Site Actions dropdown menu (2010), the Actions icon (2013), or the Edit button (2013).

I then find a place on the page where I want to add the web part and from the Insert tab, click on Web Part in the Parts group as shown in the following image.

SharePoint then displays three boxes across the top of the page beginning with Categories on the left. Select Content Rollup from the Categories list.

I now see the Relevant Documents web part in the Parts box. Select this web part by clicking on it.

Additional information about the selected web part then appears in the About the part box. To add the web part to my page, I simply click the Add button in the bottom right side of this area.

The following figure shows how this dialog looks in SharePoint 2013; the differences in SharePoint 2010 are minimal.

After I add the web part to the page, it automatically displays, by default, any documents in the current site that I last modified. The theory behind this default is that the documents I recently modified are the files I would most likely want to return to and edit further.

This web part does have some properties that I may want to tweak. To open the web part properties, hover over the web part title until the dropdown arrow appears on the right side of the header.

Select: Edit Web Part from the dropdown menu. I then need to scroll to the right and possibly up to see the properties panel. This dialog consists of several property groups. The first group: Appearance, is open by default. Here I can change the Title property to change the web part’s displayed title.

The other properties I may want to change can be found in the Data group, as shown below:

Note that there are separate options to let me see all documents that I created, even after someone else modifies them, and documents I may have checked out that others created and modified. The checkbox to include a link to the folder or list allows me to open the library rather than just opening the document. Finally, I can adjust the number of items shown in the list. However, my recommendation is that for most users a number from 1 to 100 makes the most sense.

In conclusion, I could create a page on my site with the name My Relevant Documents. Then by using this web part, every contributor to my site can go to that one page to see only the documents that they have added or have been working on.

That’s all for this week. Hope you are having a good summer and next week I will continue with a related web part: Site Aggregator.


Map It For Me, Please

Last week I introduced Power View by creating a simple table and then a chart from that table. This week, I’ll do a quick overview of another visualization within Power View, the ability to display your data on a map.

To begin, I open Excel and build a data model with the data I want to visualize in a map. I need to specify location information using some of the fields in the data model. In theory, I can use anything that identifies where the measure I’m displaying takes place. Ideally, I would like to have latitude and longitude for each fact instance in the fact table, but that is not always possible or even necessary. For example, let’s begin by looking at the relative sales by city from the Contoso dataset.

After opening my Excel spreadsheet and building an appropriate data model, I return to the Excel window and, from the Insert tab, click the Power View button in the Reports section.

This opens a new worksheet as shown below with a blank design area on the left and my field list from my data model on the right.

I then drag the fields I want to use in my data visualization to the Fields box at the bottom of the right panel. For this example, I will drag the CityName field from the Geography dimension and the SalesAmount field from the FactSales table. This gives me the two-column table shown below, sorted by city by default.

To change the visualization, I need to open the Design tab which appears when I click anywhere within the table in my design area. If I had multiple tables, I would have to be sure to click in the table for which I want to change the visualization first. Then from the Switch Visualization group, I select Map.

The default visualization, shown below, displays a bubble for each city for which I have data. Each bubble’s size represents the relative sales amount derived from that city.

Because I did not specify a field to use as a group level for color, all of the bubbles initially appear the same color. However, I can easily specify a different color for each country by copying the RegionCountryName field to the Color parameter. This assigns each country’s bubbles a color different from those of other countries. At first glance, everything may appear to be okay, but then I notice a bubble in the southeast portion of the United States that has a different color. Hovering over that bubble, I see information about it, including the city name, the country, and the sales amount. In this case, the city is Saint Petersburg. However, the country is Russia, not the United States (Saint Petersburg, Florida is perhaps what the map was thinking). This occurred because the location criteria were based only on the city, not the city within the country. In fact, if I zoom into the map further, I find other bubbles that place the city in the wrong country.

One way to fix this issue is to use a field that has both the city name and country in it. However, you cannot create a new calculated column from within Power View. This type of change must occur in the data model. Therefore, I could return to the data model and open the DimGeography table to create a new concatenated field. This field combines the city and country names into a single new field: City_Country using the following formula:

= [CityName] & ", " & [RegionCountryName]

The resulting new column appears in the following figure.

If I replace the [CityName] field with the [City_Country] field in the locations box as shown below, it appears at first glance that the problems with incorrectly positioned cities have been solved.

But again if I expand the map, I can find a few cities such as the one shown in the figure below that are not correctly positioned.

Honestly, I have not been able to figure out why a few cities are still displayed incorrectly. However, I have another way to ‘fix’ the problem. First I turn on the Filters Area, which I had turned off to maximize the size of the map.

I then drag the RegionCountryName field from the DimGeography table over to the Filter panel. This shows me a list of unique values for this field. I can then use the checkboxes to select one or more countries to display on the map at one time. For example, let’s display just the United States.

When I add the filter, the bubble for Cheshire, United Kingdom disappears, as you can see in the following figure. By changing the map background to the Road Map background, I get a more colorful map that might be better suited for a report that appears in color.

However, this is not the only way to filter data. I can also create a slicer in the design area by dragging the field by which I want to select data and dropping it in an empty part of the design area. The figure below shows me dragging the field RegionCountryName to the design area to the right of the map legend. This initially creates a single column table with the values from this field.

Next, with the new table still selected, I can go to the Design tab to the right of the Power View tab and select the Slicer button in the Slicer group. This action converts the table into a slicer object that controls all the other objects on the current page.

Now I can filter the map to any country I want to focus on. Typically, selecting a country also zooms into the map to display that country as shown in the following image in which I selected Japan.

Any time that I want to return to the map displaying all the countries again I can click on the small blue ‘eraser’ button in the upper right corner of the slicer table. Note that this button only appears while the mouse is hovering within the slicer.

That’s it for this week. Next time I will look at some other features of Power View. C’ya.