Dealing with Blanks in Your Data Model

Last time, I discussed some reasons why you might want to denormalize your data model when using PowerPivot (https://sharepointmike.wordpress.com/2015/03/21/your-logical-data-model-is-not-normal/). As you may remember, one reason is to make the model easier for the end user to understand. A second reason is that if you denormalize the data outside of PowerPivot, you can substantially decrease the model size by eliminating many, if not most, of the lookup tables and adding just the lookup information you need to the entity table, such as the Product table. Next time, I will talk more about denormalizing the data outside of PowerPivot, or at least during the process of importing the data. This time, I want to look at another issue that sometimes occurs in your data and can make it harder for your end users to interpret.

I’m talking about blank or null fields in the data. Oftentimes a table will have one or more columns that are optional. Take, for example, a product table like the one we are using from Contoso. Not every product may be sold by color. Some products may not belong to a style or a class. Some products may not have a measured size or a weight. In some databases, the corresponding lookup tables have a special entry for no color, no style, or no class so that every product can point to one and only one value in the lookup. However, you may have a lookup table that only contains specific values. If a product does not exactly match one of those values, the reference from the product table to the lookup table may be left blank. When users see a blank value, they may wonder whether someone simply forgot to select a value or whether no value from the lookup table applied. In other words, they do not really know why the field is blank.

Suppose we start with the data model using Contoso data in the following figure. Note to anyone trying to reproduce this example, I went into the Product table ahead of time and removed the ClassID and ClassName from all products in which the color was ‘Blue’ just to create some records with blank values.

After bringing this data into an Excel PowerPivot data model, I create a simple pivot table to show the sales amount and sales counts by class. Notice that the three product classes that are defined include:

  • Economy
  • Regular
  • Deluxe

But you can see from the report that 87,597 of the over 2 million sales records had no class defined. From a user’s viewpoint, they do not know if someone forgot to enter the class for some of the products or if the definition of class does not apply to some products.

As you build your data model in Excel, you can fix this problem by replacing the blank values with something like the string “[No Class]”. The difficulty, however, is that after you load your data into the PowerPivot data model, you cannot edit individual cells. You can, on the other hand, create a new calculated column.

Since I want the new calculated column to have the name ClassName and because I cannot have two columns in the same table with the same name, I begin by renaming the existing ClassName column to ClassNameOriginal. The actual new name I use does not matter as long as it is different. Also note that changing the column names in the Excel data model has absolutely no effect on the name of the column in the original data source.

After renaming the original ClassName column, I add a new column to the end of the table called ClassName. For the calculation to replace the blanks from the original column, I need to know if those blanks are Null values or if they contain an empty string. If the original column used empty strings when a class value was not defined, I can use the following DAX expression to create my new class name value.

= IF([ClassNameOriginal]="", "[No Class]", [ClassNameOriginal])

This expression uses a simple IF() function to see if the value in the column ClassNameOriginal contains the empty string. If it does, the function outputs the string: [No Class]. Otherwise, the function outputs the current row value of the column ClassNameOriginal. This is shown in the following figure.

On the other hand, if the original ClassName column uses a NULL value when there is no corresponding class, the expression needed here must be defined with the ISBLANK() function to test for NULL values as in:

= IF(ISBLANK([ClassNameOriginal]), "[No Class]", [ClassNameOriginal])
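
If you are not sure which situation applies, or if the column contains a mix of NULLs and empty strings, a single expression can cover both cases. This is only a sketch using the renamed column from above:

= IF(OR(ISBLANK([ClassNameOriginal]), [ClassNameOriginal]=""), "[No Class]", [ClassNameOriginal])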

Whichever method is appropriate for your table, you now have a column that you can use for the PivotTable. Before proceeding, however, be sure to go back and hide the original class name column, ClassNameOriginal, so as not to confuse users with two similar columns that they could use as dimensions in their tables.

Since I still have my original pivot table open, I can just switch over to it, and it should automatically update to show that the blank class now has the name [No Class]. (Which goes to show you that having no class is better than just being blank.)

That’s it for this week. By the time you read this I will be at Code Camp Orlando giving my presentation: Calculated Columns, Measures, and KPIs, Oh My!

C’ya next time when I will show you how to use the query designer while loading data to the PowerPivot model to fix some of the issues from this week and last week.

Your Logical Data Model is not Normal

If you have been reading this blog over the last several years, you know that I have been a strong supporter of using PowerPivot and the Tabular model to perform data analysis, even for power users, not just DBAs. What you may not have realized is that I’ve been secretly teaching you a little about data modeling. What is data modeling? It is really nothing more than the collection of all the data in tables and the relationships between those tables in a database. Did you know that there are at least two major ‘types’ of data models when it comes to how you structure your data into tables? Each one serves a different purpose. Therefore, it can reasonably be argued that neither one is more correct than the other. But you need to understand when to use each type. That is what I want to discuss today.

Most DBAs and developers who work with databases like SQL Server, Oracle, and many other common databases are comfortable with the relational model for creating tables and defining the relationships that connect them. In fact, they immediately start to normalize a denormalized database in their heads within seconds of seeing the data schema. The relational model relies on the application of the rules of data normalization introduced by Edgar F. Codd in 1970. In essence, the goal of the rules of data normalization is to minimize data redundancy, which also has the effect of decreasing the overall size of the database while making it easier to maintain information that would otherwise be repeated through many records.

There are three primary rules that are used to determine whether a database, a collection of tables, has been normalized. These rules are:

First Normal Form: No row of data may contain repeating data or groups of data. Such repeating data must be split into a separate but connected table. For example, a sales order may contain the purchase of one or more items. Because the number of items included in a sales order is not predefined, the items must be split into a separate table with one row for each item on the sales order. These two tables are then typically connected by an order id.

Second Normal Form: This rule only applies to tables which have a compound primary index, an index built from two or more fields. In this rule, all other fields in the table must depend on the entire compound index value, not only a portion of it. A possible example of this might be a table that includes students at a school in which the primary index combines the school name (or id) with the student name (or id). Imagine that the table also included the address and phone number of the school. This information is not dependent on the combination of the school and the student. It depends only on the school. Therefore, this violation of the second normal form requires that the data related only to the school be split into a second table that includes only school information.

Third Normal Form: This rule requires that every field not part of the primary index be dependent on the primary index and not on other non-key fields. Going back to my favorite Contoso database, one could argue that in the FactSales table, the Sales Amount field is redundant since this table also includes the sales quantity, sales price, and any sales discounts or returns. Why is it redundant? Because it can be calculated from the values of other columns in the record. Therefore, to fully achieve third normal form, this field should be removed.
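
To make that point concrete, if the Sales Amount column were removed, an equivalent value could still be recomputed with a simple calculated column. The expression below is only a sketch; it assumes FactSales contains columns along the lines of SalesQuantity, UnitPrice, DiscountAmount, and ReturnAmount, so adjust the names to match your copy of the data:

= FactSales[SalesQuantity] * FactSales[UnitPrice] - FactSales[DiscountAmount] - FactSales[ReturnAmount]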

While there are other special case rules that can be applied to normalization of a database, most DBAs will be satisfied with a database that satisfies these three rules. They will then build the tables in their database corresponding to these rules and thus create the physical data model. It is called the physical data model because it defines the physical schema of the tables and the relationships between them.

However, business users of the data don’t look at the database that way. In fact, most business users would be baffled by the large number of tables required and the relationships between them. They will not understand why they need to combine data from a half dozen to a dozen tables just to answer a single query question. As an example, let’s look at how a business user might think of just the products within the Contoso database.

The figure below shows the normalized data model for products and their subcategories and categories. As you can see from the schema, the only important piece of information in the subcategories table is the name of the subcategory. Similarly, the only important piece of information in the categories table is the name of the category.

Most business users would not think of category and subcategory names as part of separate tables, but as attributes of the product itself. In fact they would think of product information more like the following figure.

Imagine a database model with dozens of normalized tables and then try to envision how the typical business user sees the data with perhaps only a half dozen tables after denormalization.

In PowerPivot, we can address this issue in one of two ways. The first way would be to add the category and subcategory names to the product table in SQL Server before loading the data into PowerPivot. This would essentially make the physical data model the same as the logical data model used by the end-users. However, this data model would no longer be normalized.

Physically denormalizing the data tables is not the only solution. I could, as shown before in https://sharepointmike.wordpress.com/2012/07/21/power-pivot-hierarchies-part-2-of-powerpivot-2012-series/, build two calculated fields in the product table that use the RELATED() function to get the category and subcategory names and display them in the product table. I could then hide the two tables, dimCategory and dimSubcategory, from the user so that they would see a structure similar to their expected denormalized logical data model even though the data is still physically stored in a normalized structure.
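
As a rough sketch of that second approach, each of the two calculated columns added to the product table contains a single RELATED() call. I am assuming Contoso-style names such as DimProductSubcategory[ProductSubcategoryName] and DimProductCategory[ProductCategoryName]; substitute the table and column names used in your own model:

= RELATED(DimProductSubcategory[ProductSubcategoryName])
= RELATED(DimProductCategory[ProductCategoryName])

Note that RELATED() follows the chain of many-to-one relationships from the product row through the subcategory table to the category table, which is why the category name can be pulled in even though the product table relates directly only to the subcategory table.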

The advantage of making the changes to the data model outside of PowerPivot is that it effectively reduces the amount of data that PowerPivot must store in memory, thus potentially allowing a larger model to be created. The advantage of making the changes inside PowerPivot by using the RELATED() function and then hiding the technical tables that are still needed in the model but do not need to be seen by the client is that it preserves the sanity of those DBAs who cringe every time someone tries to denormalize their data. Both methods will allow me to create similar pivot tables in my Excel spreadsheet. Thus both methods can be considered correct.

The thing to remember is this. Data Normalization was developed to make data storage and maintenance easier and more efficient. However, data analysis and reporting often requires a fair amount of data denormalization. This is especially true when performing data analysis using reports, pivot tables and charts, and even third party data analysis tools. In fact, the logical view of the data schema aids in the performance of most data analysis. Just remember that your logical model is not a normalized model.

C’ya next time.

It’s Only the Role I’m Playing

This week I’m going to return to my favorite sample database, Contoso. If you remember, Contoso consists of sales data for a company that produces several different lines of consumer electronics, from computers to phones to TVs. The data spans several years of sales. In past examples, I related the FactSales table (the table containing all of the sales records) to several tables including Channel (DimChannel), Date (DimDate), Product (DimProduct) and Product Sub-Category (DimProductSubCategory). In fact, the data source pre-defined these relations so that when I imported the data into my PowerPivot model in Excel, these relationships appeared by default as shown in the following figure.

Visually, I could switch to the Diagram view of the data model to see these relationships represented by solid lines connecting these five tables as shown below.

However, suppose I have additional information in my FactSales table. The DateKey field that I have used in the past identifies the sale date for each of the sales records. In the real world, there may be additional dates associated with each sale. For example, it is not hard to imagine that each sale would also have a delivery date and an invoice due date. Now I might ask, does it make sense to report on sales based on the sale date, or does it make more sense to report on sales based on the invoice due date or even the date of payment? Well, you might say, ‘That depends on who is asking the question.’ Very true. My Marketing/Sales manager might want to see sales by the sales date. He or she does not care about when the invoice is paid off. They just need to know if they hit their sales quotas for each month. Therefore, they might need a report that uses the default relationship and looks something like the following:

On the other hand, the Chief Financial Officer is not as concerned about when the sale took place as about when the invoice for the sale is due, because only then is the income truly realized for the company. Thus the above report does not meet their needs.

Now suppose that I had a few additional columns, as mentioned earlier, that told me some other dates such as when the items were delivered and/or when the invoice for the sale was due. Let’s further suppose that these columns exist in the FactSales table and might appear like the following figure.

With these columns in my FactSales table, I would want to create relations between them and my date table (DimDate) so that I could generate reports using them. The following figure shows the Create Relationship screen in which I define a new relationship between the delivery date (DeliveryDate) column in FactSales and the date (DateKey) column in DimDate. When I click the Create button, the Power Pivot engine creates a second relationship between these two tables. (Remember, the first relationship was between the sales date (DateKey) column in FactSales and the date (DateKey) column in DimDate.)

In a similar fashion, I create a third relation between these two tables to connect the invoice due date (InvDue) column in FactSales and the date (Datekey) column in DimDate. For the purposes of this demo, I am going to stop there. However, I could create additional relationships between any other date fields in the FactSales table and the date (DateKey) column in DimDate. Switching to the Diagram view, I could now see something like the following between these two tables.

Notice that there are three lines between FactSales and DimDate. One of these lines is solid and the other two lines are dashed. You can only have one active relationship between any two tables. That active relationship is represented by the solid line. The other two relationships are inactive at the moment. These three relationships are collectively called role playing relations, and DimDate is a role playing dimension because only one relation at a time can be active from the DimDate dimension. Think of it this way: the date column (DateKey) in DimDate can play one of three different roles. It can play the part of the sales date, the delivery date, or the invoice due date. However, it can only play one role at a time. Unless all three dates were exactly the same, I have to choose which role I want DateKey in DimDate to play with the FactSales table by selecting one of the relationships.

The initial data import associated the DateKey column in DimDate with the DateKey column in FactSales, which represents the date the item was sold. As long as the requested report allocates sales to the sales date, I need to do nothing other than generate the Pivot Table report as shown below.

However, if I now have to generate a report for my CFO, I would have to go into the model and change the role that DateKey in DimDate plays. I might try to simply right click on the dashed line representing the connection between the DateKey in DimDate with the InvDue column in FactSales and select: Mark as Active.

This action would generate the following error message.

The problem is clearly stated. You can only have a single active relationship between the two referenced tables. Therefore, I must first right click on the current active relationship to deactivate it before activating a different relationship between the two tables.

Once the original relationship is deactivated, I can go back and activate the new relationship between DateKey and InvDue.

Knowing that only one role (relationship) can be active at a time, I am a little surprised that Microsoft did not automatically deactivate the original relationship when a new one is set active. However, like many things, they did not ask me. Plus they may have had other reasons for not automatically deactivating the current relationship. In any case, after making this change and returning to my pivot table (without making any changes to the pivot table itself), the data automatically updates using the new relationship to show the sum of sales by month of the invoice due date rather than the sales date.

If you examine the sales totals for either the years or individual months, you can quickly see that this role change for the DimDate dimension makes a significant change in the sales numbers reported each month.

That’s all for this week. I hope that now you have a better idea how you can use a single dimension to play different roles. (You could also have added the DimDate table three times to the model, once for each date column in FactSales so that each instance of the DimDate table could have a single relationship to FactSales. Then by changing which dimension you select to display in the Pivot Table, you can achieve essentially the same result. This may be easier for the end-user who does not have access to the data model to make the changes listed above. However, your power-users may prefer a simpler model with fewer tables and can change the role played by those dimensions as needed.)
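
One more option for power users: rather than flipping the active relationship back and forth, a DAX measure can reference an inactive relationship directly through the CALCULATE() and USERELATIONSHIP() functions, leaving the model itself untouched. The following is only a sketch, assuming a SalesAmount column in FactSales and the relationship created above between FactSales[InvDue] and DimDate[DateKey]:

Sales by Invoice Due Date := CALCULATE(SUM(FactSales[SalesAmount]), USERELATIONSHIP(FactSales[InvDue], DimDate[DateKey]))

A regular sum of SalesAmount would continue to use the active sales-date relationship, so both views of the data can appear side by side in the same pivot table.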

C’ya next time.

When Plain Text Isn’t

One of the problems many beginner SharePoint content editors have is pasting text into SharePoint from other sources. The problem has to do with the unseen or hidden information embedded in the text that formats the text in these other products.

Take for example a typical passage of text in Microsoft Word as shown below:

If I were to simply copy this text from Microsoft Word into SharePoint, the result might at first look like a fairly good approximation of the original text as displayed in Word as shown in the following image.

 
However, when I look at the HTML of this text, I see that it is littered with embedded commands that would make it very difficult to edit by just looking at the visible portion of the text. The problem is that when I select text that is visible to me, I may not include all of the tags that wrap around that text. Furthermore, if I were to try to change the style of the selected text, it may not appear to change if my selection leaves out the original formatting tags that I want to replace, since they remain in the code. The selection of the text may not even grab the embedded start and end tags, but only one or the other, resulting in tags that are not properly nested. The browser then struggles to interpret exactly what I want to do and may add additional but superfluous <div> tags to try to isolate the problem. While I could directly edit the HTML to fix these issues, I would not recommend this to the typical content editor unless I was very sure that they were familiar with HTML. In any case, the challenges of editing formatted text directly may be more than I typically want to handle.

One solution that I have been told to use in the past is to copy the text first to NotePad. NotePad does not support most of the special formatting features of Microsoft Word and therefore it strips out these embedded tags from the text. The following image shows the same text copied first to NotePad and then copied to SharePoint.

The text looks fairly clean of any special formatting. I might think that I can now easily go into this text and begin changing the formatting of specific portions of the text to match what I had in Microsoft Word. For the most part, I would be safe doing this. In fact, I am much better off than if I pasted the Microsoft Word text directly onto the SharePoint page. However, looking at the HTML of even this page, I can see that some special formatting still exists although it is not nearly as bad as the HTML pasted directly from the original Microsoft Word text.

Another option has been to try to remove the formatting from the text after pasting it on the page. To do this, I would first select all of the text that I just pasted and then open the Format Text ribbon as shown in the image to the right and select the icon that looks like the letter ‘A’ with an eraser to its lower right. The text that appears when I hover over this button says that it will clear all formatting from the selection leaving only plain text. While that sounds promising, in actual use, the results are less than I hoped for as the image below shows. Notice that the HTML, while starting to look fairly clean with only the actual text, still has a lot of formatting tags, specifically <div> and <span> tags.

Fortunately, in SharePoint 2010 there is another option, one that very few people find. Recognizing the problem with Copy and Paste, a method used by many people, Microsoft added a second paste option. When you are ready to paste your text in SharePoint, rather than just pressing CTRL-V to paste the text that you have on the clipboard to the current cursor position, you can click on the lower half of the Paste button in the Format Text ribbon, which happens to be the leftmost button on this ribbon. Clicking on the lower half of the button displays a submenu that includes the two options shown in the image to the right. The one that I am interested in is the option: Paste plaintext.

This time when I look at the resulting HTML, I see a much cleaner version of the text. The only special formatting left is the <BR> tags used to end paragraphs and create line breaks. The resulting text should be easy to maintain in SharePoint with the formatting tools in the Format Text ribbon.

In addition to the methods described here, there are many other text editing tools that allow you to either cut and paste text from your favorite editor or may even serve as an alternative text editor. I cannot attempt to evaluate them all here. Rather I will mention a few that I’ve heard of but leave it up to you to test whether they serve your needs. Two that come to mind are PureText for the PC which is a free download and Flycut for the Mac.

If you insist on using a browser other than IE (remembering that SharePoint was designed around integration with Microsoft products, not products from other companies), you might try one or more of the browser extensions for FireFox or Chrome that support copy and paste of just text. Just be aware that while some of these may work fine to copy text from one web page to another, they may not work fine with other applications such as Microsoft Word.

Finally, FireFox and Chrome allow you to paste just text by using the key combination CTRL+SHIFT+V. Similarly, the Macintosh uses the keys: SHIFT + OPTION + COMMAND + V.

If all of this seems like too much of a bother, remember that you can always create your text directly in SharePoint rather than starting from another text tool and then having to copy and paste that text into SharePoint. The SharePoint interface for entering text is very similar to Microsoft Word or WordPad and provides a good WYSIWYG interface for entering and formatting your text. Entering and formatting text directly within SharePoint practically guarantees that you will not have future issues editing the text.

That’s it for this week. C’ya next time.

Readability

Readability refers to both the visual presentation and the content of your text. To be readable, your text must first be visually easy to read. It may surprise you to know that text on a screen is more difficult to read than the same text in print. In fact, printed text is often created using a serif style font such as Times Roman. The tiny crossbars on the letters of a serif font help the eye move horizontally across a line of text, aiding the ability to read text in paragraphs and pages more rapidly. To bring emphasis to titles and headings, a sans serif font such as Arial, with its simpler straight lines and curves, tends to make your eyes hesitate just a bit, which helps titles and headings stand out.

Surprisingly, this should not be the way you format text for display on a screen. In fact, because it is more difficult to read text on a lighted screen, it is generally better to use an Arial style sans serif font to help make the letters clearer to read. Perhaps some of this is related to perceived character spacing since the straighter characters do not quite approach each other as much as do the serifs in a font such as Times Roman.

Pay attention to text size. Text that is too small or too large can be more difficult to read. The problem here is multiplied by the different form factors that web pages, e-books, etc. have to deal with, ranging from 24-inch desktop computer screens to 5-inch smartphone screens. Of course, most electronic devices will allow the user to zoom in or out to effectively resize the text to their liking, but you may want to start with a font size that makes your text easy to read for the greatest number of users without manual adjustments.

Consistency in the use of fonts is also important. Once a font style is selected, it should be used for all content on all pages of a website. It can be quite annoying to deal with a different font style as one goes from one page to another (or even worse, one paragraph to another). This detracts from the sense of continuity between the different portions of the content. Sure you can use italics or special fonts for emphasis of small text segments. However, even putting an entire paragraph in a different font style can make it look like it does not belong.

Of course, all rules have exceptions. For example, you may place material quoted from another person or source in italics, or a different font style, or even a different color to purposefully bring emphasis to it. However, be careful of using type styles such as underlined text or certain colors of text in places where the user might misinterpret the text as representing a hyperlink.

Font colors should be carefully selected considering the background color to give the reader a high contrast. The greatest contrast is black text on white background or vice versa. However, even here a white background is preferred for content read in normal lighting conditions and a black background is highly preferred for text read in the dark. In fact, some programs including some e-book readers have the ability to automatically flip the text and background colors based on the ambient light level. For example, my iPad’s iBook reader does that.

On the other hand, using a light blue text on a medium blue background may not provide enough contrast. A related problem I have seen is placing darker text on a black background such as a medium to dark blue or red on black. Also, some people perceive colored text on a colored background as being smaller than the equivalent black text on a white background. This is especially true of lighter text on a darker background.

Another consideration is the width of the text column. Generally web pages use two or more columns of text so that the reader can more easily move their eyes across the line of text and then quickly find their way down to the next line. The wider the text column, the more difficult this is.

Related to this is the size of the paragraphs. Longer paragraphs can feel exhausting to read compared to shorter paragraphs. Also, the use of shorter paragraphs forces the writer to condense their ideas into smaller more easily digested content nuggets.

Long web pages should be minimized. A user should not have to scroll down the equivalent of more than two or three screens to read the entire page. I’ve seen web pages that require the user to scroll and scroll and scroll to get to the bottom. Do you really think that users will ever find the content at the bottom of that page? Of course not. There is even a good argument for continuing a single article on another page after it begins to get too long. Don’t be afraid to break articles into multiple sub-topics that are then linked through multiple pages.

In addition to all the visual aspects of making your content readable, consider also the grade level of your writing and whether it is appropriate for your audience. Microsoft Word can analyze your text and report on its readability. To turn on this feature, open the File drop-down menu and select Options. In the dialog that appears, click on Proofing in the left column. Locate the section on the right side of the dialog: When correcting spelling and grammar in Word. Click the check box: Show Readability statistics. Now when you create text in Word, you can click the icon in the lower left status bar to the right of the word count. The resulting dialog box will give you readability statistics for your text including number of sentences in a paragraph and the average number of words in each sentence. At the bottom of the dialog is the Flesch-Kincaid Grade Level. Try to keep this number between 8 and 10. This document is an 8.7.
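
For reference, the Flesch-Kincaid Grade Level that Word reports is derived from average sentence length and average syllables per word:

Grade Level = 0.39 × (total words ÷ total sentences) + 11.8 × (total syllables ÷ total words) − 15.59

So shorter sentences and shorter words both push the grade level down toward that 8 to 10 target.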

Finally, don’t try to eliminate all white space on a page. White space gives your eyes a chance to rest. A page completely filled with text and no white space between paragraphs or columns or around images is very hard to read.

C’ya next time.

ALERT! ALERT! ALERT!

Last time we talked about following a document. While the ability to follow a document is relatively new in SharePoint, alerts have been around at least since I started using SharePoint 2007. Alerts are more narrowly focused than the social tool: Following. By that I mean, alerts are available only to track changes to documents (or items in a list). You cannot create an alert on a user or tag. However, alerts provide some additional flexibility on what types of changes you want to track and when you want to hear about them. So let’s dive into alerts.

If I were to open any library or list, I can click on the Library (List) tab or the Document (Item) tab. In both ribbons, there is an icon with the name Alert Me in a group called Share & Track. Does Microsoft duplicate this feature on both ribbons for some reason? Actually, yes. The Alert Me icon in the Library or List ribbon is meant for creating an alert on changes to any document in the entire library or list. This feature is useful if you need to monitor whether others are adding, editing or removing documents or list items. On the other hand, the Alert Me icon in the Document or Item tab can be used to monitor individual documents or items for changes. So the first thing I have to decide is whether I want to track changes to the overall library or list or to an individual document or item.

For the balance of this discussion, I will assume that I want to monitor changes to an individual document in a library. However, the technique that I will show here is essentially the same for monitoring changes to the entire library, an entire list, or just a single item in a list. I mention the differences at the end of this article.

If I want to create an alert, I need to select the document first by either clicking on a non-hyperlinked column within the document or preferably by clicking the check box in the first column of the document row.

Note, you may think that you can create an alert on multiple documents by selecting their individual checkboxes before continuing. Unfortunately, if you select more than one document, the Alert Me option in the Document ribbon becomes disabled. You can only create alerts individually on documents.

The image below shows the Alert me icon in the Share & Track group. I have already clicked on the lower half of the icon (where the down pointing triangle appears) to open the dropdown menu of options. In this case, I want to select the first (or default) option for this icon to set an alert on the currently selected document.

Next SharePoint opens a dialog which is rather long. Therefore, I will have to explain first the top half and then the bottom half. In the following image, you see the top half of the New Alert dialog. The first thing you need to do is to create an alert title. Because you can create many alerts on different documents in different libraries in different sites, you want to carefully name your alert so that when you manage your alerts (note the second option in the above dropdown) you can easily identify the alert that you may want to change or delete.

A side note on managing your alerts. You can only manage alerts within a single site at a time. You cannot see all of your alerts within a site collection or a SharePoint farm, at least not with the built-in tools. There are some 3rd party scripts that will help you see and delete alerts across all your sites. I may discuss this in a future blog.

Although I’m not going to go into detail here about naming conventions (that is almost like talking about religion or politics), you may want to consider the following factors in defining your alert name:

  • The name of the site/library
  • The name of the document
  • The type of change you want the alert to monitor
  • The frequency of the alert reporting

I will show you the latter two factors when we get to the second half of the dialog in just a moment.

The second thing you may need to do is define to whom you want to send the alert. By default, most users want to receive the alert themselves. In fact, unless you have owner rights to a site (as I do), you can only create alerts that you receive and you will not be able to enter users for this property. On the other hand, site owners can create a list of users separated by semi-colons to specify who should receive an alert.

In addition, you may or may not have the ability to send the alerts by E-mail or text message depending on how your SharePoint administrator has set up your system. I’ll assume that e-mail notification will be on for most users and create my alert that way.

Moving to the second half of the dialog, the next property of the alert I can set is to define which types of changes I want to see in my alert. Read these options carefully. Only the first option, Anything Changes, will report changes that I myself made to the document along with changes from others. I may not want to see my changes since I already should remember them. Therefore, the other three options become more interesting. The second option shows me changes that anyone but me made to any document. Again, I may not care so much about changes made to other people’s documents, but only documents that I added to the library. In that case, I might select the third option. Perhaps even that is too much information. Perhaps I want to know when someone else comes in after I have made changes to a document, whether I originally created that document or not, and may have made changes on top of my changes. Then I would select the last option in this group.

The last option I can set for a document alert is when I want the alert sent. I could ask to see the alert immediately (or at least within a few seconds). Unless knowing about the change is critical, this may be more annoying than it is worth. Perhaps I would prefer a daily summary of all changes made to the document. Sure, that may mean I am not made aware of a change for up to 24 hours after it was made, but that may be enough. Note that if I select this option, I will have to select what time of day I want to receive that alert. In the figure above, I selected to receive the alert at 1:00 PM.

Finally, I can choose to receive a weekly summary for the alert. In that case, I must define not only the time that I want the alert sent, but also the day of the week. For example, I might want to see my weekly alerts first thing Monday morning.

When I click OK at the bottom of the dialog, SharePoint saves my alert definition and begins monitoring for the changes I asked for. If any changes occur to the document in the current reporting period, I will receive an e-mail detailing the changes. If no changes occur, no e-mail is sent.

There is one significant difference when creating an alert on an entire library or list: I can specify whether I only want to see

  • when new documents are added to the library,
  • when existing documents are modified in the library,
  • when existing documents are deleted from the library,
  • or all of the above.

Note, it is not possible with a single library alert to see only additions and deletions to the library. Either I have to deal with receiving a single alert with all changes to the library or I can create two alerts, one for additions and one for deletions.

So as promised, why would I use an alert over following or vice versa? First, I like getting email alerts about changes and not having to go to my Newsfeed page. I also like the fact that I can customize the type of change to the document or the library that I’m interested in and not have to wade through changes that do not interest me. On the other hand, alerts do not let me follow people or tags to learn what else individuals are doing or where other similar content may be. I can also follow documents and libraries without having to implement the e-mail or SMS services that alerts require.

So I hope now you know how to decide whether you want to follow objects in SharePoint or to receive alerts. Each has its place if you use them wisely.

C’ya next time.

Someone Is Following Me

So last time I showed you how to share access to documents in your SharePoint document library. If you are only allowing others to view the documents, you may not have any additional concerns. However, if you grant them edit access, you might be interested in knowing if or when they make changes to one of the documents. More importantly for them, they may want to follow your document to know when you or someone else has made a change to it. One way that you (or they) can do this is to follow the document. While today’s discussion will focus on following documents, you can also follow sites, tags, and people.

If, after you read this document, you do not see the buttons to follow documents (or sites or people), contact your SharePoint administrator. They may not have turned on this capability in Central Administration. Note that your SharePoint Administrator can also limit the number of people, documents, or sites that can be followed.

The following figure shows a document library with one of the documents selected by clicking on the ellipses to show the additional features dialog.

By clicking the FOLLOW command at the bottom of the dialog, I can receive notices on my Newsfeed page when this document is changed, even if only the properties of the document have been changed such as the document name or other metadata value. After clicking FOLLOW, SharePoint immediately confirms that I am following this document by temporarily displaying a message box in the upper right corner of the screen.

To check my newsfeed for updates to anything I am following, click the Newsfeed button on the My apps dialog (click the grid to the immediate left of the Office 365 banner).

In the center of the Newsfeed page, you should see a list of the most recent activity in the items that you are tracking. In this case, there is only one item showing. However, after marking several documents, people, sites, and tags to follow, this list can be rather large. Notice that in this case, the text tells me the name of the person who acted on the document, what they did, and the name of the document. If I hover over an entry, an ‘X‘ appears on the right side and clicking this icon will remove the item from my feeds list. They will also automatically be removed after several days. However, since this information is only stored in cache, if there is a reboot or an iisreset, the list can be truncated sooner.

On the right side of the Newsfeed page, you can see a summary of the number of people, sites, documents, and tags that you are following. If you click on any of these (that have a value greater than 0), SharePoint shows a list of the items you are following.

If I were to click on the number ‘2‘ above ‘documents’ above, I can see information about the two documents that I am following. I can click on a document name and SharePoint assumes that I want to open the document. If the document happens to be a Word document, Word Online opens the document in view mode. Of course, I can choose to edit the document either online or on my local machine by clicking one of the options in the Edit Document dropdown. I can also stop following a document by clicking the Stop following link beneath each document name and address. Similar options exist for displaying and following people, sites and tags.

If I were to click on the Stop following link, SharePoint gives me a chance to confirm that I really want to stop following that item, or to cancel in case I clicked the link accidentally.

Before ending for today, I want to clear up something that may be confusing as you create your sites.  You may have created a team site to work on in SharePoint Online and you may see a web part with the title Newsfeed in the lower left of the default page. (You can see this in my January 24th 2015 post http://bit.ly/1EkTWoa).  Do not confuse this Newsfeed web part with the Newsfeed app I referenced above in your My Apps dialog.  They are not the same.  In fact, the team site Newsfeed is actually more of a traditional Internet newsfeed in which you can post information pertinent to the site and allow others to respond or comment on it.  Do not look here for references to sites, people, documents, or tags that you are following.

That’s all for this week. Next week I’ll close this mini-series with a review of using alerts as an alternative to following a document and explore some reasons why I might choose to follow some documents while preferring to receive alerts on others.

C’ya.
