Are Virtual Schools the Answer to Our School Budget Problems?

I recently ran across a study by Ambient Insight predicting that by 2015, preK-12 academic institutions such as your local public school districts will be spending nearly $5 billion on self-paced electronic learning tools.  The study also projects that enrollment in physical classrooms will decline by 4.2% over the same period.  The authors attribute this largely to state-run virtual schools, which no longer target just rural students who lack the full range of course offerings that urban schools have; now all schools are looking to virtual courses to provide core curriculum and credit recovery as a way to cut costs.  (Their full report is available from Ambient Insight, in case you are interested.)

Shortly afterwards, I also found an article in the New York Times from the beginning of the year about Florida Virtual Schools.  The point of the article was that Florida Virtual Schools is providing some school districts, such as Miami-Dade, a way to use virtual computer labs to teach courses while getting around the state-mandated class-size restrictions.  By putting up to 35-40 students in a lab overseen by a facilitator, they can teach the same class that would otherwise be capped at 25 students in a traditional high-school classroom with an instructor.

Now I for one like the concept of self-paced learning.  In fact, back in the stone ages when I went to high school (pre-calculator days), several of my classmates and I took several advanced science classes as self-paced learning.  Of course it was different then.  We did not do it on-line with computers.  Rather, we had traditional textbooks.  We were given tests by the proctoring teachers at the end of each chapter, and we held weekly labs either after school or on a Saturday morning.  My main point, however, is that there was no traditional lecture.  For us, it was a way to take additional advanced science classes that were not offered during the regular day school program.  And yes, these courses did count toward our official transcripts.

While I liked the self-paced classes, I suppose they are not for everyone.  You have to be really focused and committed to such a learning model.  I believe that, combined with increased use of on-line collaboration tools, on-line courses could increase the actual learning of students because they allow students to take a more active role in their education and their future.  The virtual environment also gives them more flexibility to pursue personal interests that may eventually lead to careers.

On the other hand, many people have argued that the role of public education is not just educating our next generation but also, just as importantly, providing custodial care for children while both parents (or single parents) are working (see Therese Mageau in The Journal, Feb 2011).  It would not be realistic, on several levels, to expect students to stay home alone and sit in front of a computer all day taking their classes.  The custodial aspect is needed in today's world.  Hence the concept of virtual schools in which students still attend a physical building but take classes in a large lab-type environment, where a facilitator acts more as a custodial guardian while students work on their classes on individual computers.  Within a lab, each student could even be working on a different class, or at a different pace within the same class, since the facilitator is just there to monitor the class and resolve any computer issues, not to lecture or actually teach the material.

While the above approach might work for the advanced studies students in high schools, especially in those cases in which a student wants to take a class for which there are not enough students in their school to justify a teacher for that topic, it may not be the best approach for core curriculum classes.  In these cases, the virtual class could still be used, but modified to include a qualified teacher as the facilitator who can combine some actual lecturing with individual problem solving and personal one-on-one help to augment the computer-based training classes provided. A single instructor could even teach multiple classes by using telepresence technology with facilitators at the remote locations.

A lot of parents don't agree with the virtual school method and believe that when they send their children to school, those children should be taught by a physical teacher standing in front of the room.  However, the economics of public education today puts that model in jeopardy as funds are cut every year and districts struggle to find new ways to meet class-size restrictions.  Creative approaches such as virtual schools may be part of the solution.  In addition to Florida Virtual Schools, which was founded in 1997, many local school districts, such as Orange County Public Schools, are beginning to explore virtual school options within their districts.  Each district may have a slightly different twist on how they implement their virtual school, but I have seen a growing interest in providing at least some courses via a virtual school model.

I believe that no education model will last forever.  Just like the one-room schoolhouse gave way to larger multi-grade school buildings and eventually separate elementary, middle, and high schools, virtual schools may just be the next step in the evolutionary path of how we educate the next generation.  It may take several different attempts with different models to find one that works best or it may just continue to be a local decision.  The one thing I’m sure of though is that change will occur.

(Disclaimer: The views in this blog are strictly my own and those of the articles referenced and not necessarily the views of Orange County Public Schools where I currently work.)


Using the Chart Web Part with Web Part Data Sources

Hope all of you from the USA had a great Thanksgiving. 

Last week we used a Chart web part with a list data source to show the basic method of creating charts within SharePoint 2010.  This week, we will take a similar list displayed using a list web part as the data source.  Furthermore, just to make it a bit more interesting, we will connect the primary data list to a lookup table to filter the displayed list to show that the filtering can be passed down to the chart as well.  So let’s get started.

If you remember the sample data from last week, I logged the registrations and actual attendance at our monthly open lab sessions for the last year.  Now, just imagine having multiple years of data instead of just one year.  That is the scenario that I'm going to use here.  To begin, I created a separate SharePoint list called LabYear, which contains only a single column: the year numbers for the last four years.

Next I took the list I used last week and added a column for the year so I could store multiple years of data.  Rather than just using a single line of text for the year (I’m not calculating anything using the year so it does not need to be a numeric value), I created a lookup column that pointed to the other list.  The only column in the lookup table is the LabYear column and fortunately it is the only thing I need. 


At the bottom of the column settings is a new setting, specifically for lookup fields, that defines the relationship between the two tables.  For those of you familiar with referential integrity between tables in a database such as SQL Server, this new setting lets you create a referential integrity link between the current table and the lookup table.  Click the checkbox to Enforce relationship behavior.  By default, SharePoint creates a Restrict delete relationship.  This means that it will not allow anyone to delete a record (item) from the lookup table if the value in the linked column is used in one or more records (items) in the detail table.  The Cascade delete option means that if someone deletes a record (item) in the lookup table, SharePoint will automatically delete all records (items) in the detail table with a matching value.  This allows us to work with related SharePoint lists more like we work with related tables in a database.
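If the Restrict and Cascade options sound familiar, it is because they mirror foreign-key delete rules in relational databases.  As a rough analogy only (this is a database sketch using Python's built-in sqlite3 module, not SharePoint code, and the table names are invented), here is how the two behaviors play out:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default

# A lookup table (like the LabYear list) and two detail tables, one per behavior
conn.execute("CREATE TABLE lab_year (year INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE restrict_detail (month TEXT, "
             "year INTEGER REFERENCES lab_year(year) ON DELETE RESTRICT)")
conn.execute("CREATE TABLE cascade_detail (month TEXT, "
             "year INTEGER REFERENCES lab_year(year) ON DELETE CASCADE)")

conn.execute("INSERT INTO lab_year VALUES (2010), (2011)")
conn.execute("INSERT INTO restrict_detail VALUES ('Jan', 2010)")
conn.execute("INSERT INTO cascade_detail VALUES ('Jan', 2011)")

# Restrict delete: removing a lookup row that is still referenced fails
try:
    conn.execute("DELETE FROM lab_year WHERE year = 2010")
    blocked = False
except sqlite3.IntegrityError:
    blocked = True
print("restrict blocked the delete:", blocked)

# Cascade delete: removing the lookup row silently removes matching detail rows
conn.execute("DELETE FROM lab_year WHERE year = 2011")
remaining = conn.execute("SELECT COUNT(*) FROM cascade_detail").fetchone()[0]
print("cascade_detail rows left:", remaining)
```

With Restrict, the delete is rejected while detail rows still reference the year; with Cascade, the detail rows vanish along with the lookup row.  SharePoint's Restrict default is the safer choice for lists like these.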

When you click OK to close the column settings page, you may see the following dialog which tells you that in order to enforce a relationship between tables, you must index the current column.  This feature is needed for performance reasons to make it easier to find matching values in the current table.  Click OK to index the column.

You should now be able to create a new page and place both lists on the page using the automatically generated web part SharePoint creates for any list.  The following figure shows the two lists.  But other than determining what values we can use for the Year column in the Open Lab Attendance table, there is nothing connecting the LabYear web part to the Open Lab Attendance web part.  To filter the Open Lab Attendance web part to show only the items from a selected year, we need to create a connection between the two web parts.

Before you can create a connection between the web parts, you need to edit the page (if you are not still in edit mode).  The connection options are found in the dropdown menu on the right side of each web part's title bar.  Let's begin from the lookup list, LabYear.

Starting from the lookup table, edit the web part, open the dropdown menu again, and select Connections, which now appears.  The fly-out menu has several options for connecting the current list to other web parts.  Since we want to send the selected year from LabYear to the Open Lab Attendance web part, select Send Row of Data To, and then select Open Lab Attendance from the next fly-out as shown in the following figure.

SharePoint now opens a dialog to help you define the connection between these two web parts.  In the first panel, you need to select the connection type from the Open Lab Attendance web part’s point of view.  The connection type should be: Get Filter Values From because you want to get the year from LabYear.

On the second page of the configuration dialog, you need to define the fields for both the Provider and Consumer.  The Provider in this case is the field in LabYear and the Consumer is the field in Open Lab Attendance.  In both cases, the dropdown list displays only the fields in the corresponding list.  The following figure shows the completed page.

After clicking Finish, you will notice that the LabYear web part now has a new column named Select with a diagonal double arrow.  By default the first item in the table is selected (notice the filled-in arrow heads).  However, since I did not yet enter data for 2008, the Open Lab Attendance web part does not show any data.

However, if I click on one of the other values where I do have data, you will see the attendance data for that year in the Open Lab Attendance web part.  This is one way in which you can easily let the user filter the data they want to see on the final page.  The following figure shows the result of selecting 2010.

You can now begin creating the chart definition using the techniques discussed in last week's blog.  I will not go through all of the steps again on how to define a chart.  If you need a refresher, please refer back to last week's blog.  Rather, I will start at the point where I need to choose a data source.  This time, I will select Connect to another Web Part as shown in the following figure.

When you click Next, you must select which web part you want to use as your data source for the chart.  In the dropdown menu, you see references to both the LabYear and Open Lab Attendance web parts.  The following figure shows that I need to select the Open Lab Attendance web part which has the detailed attendance data I want to chart.

Step 3 lets you choose the source data format.  For our current example, the only option is Table, so select it and move on to the next page of the data source wizard.

Next you need to define which columns you want to chart.  In the following figure, you can see the definition of the first data series to be created on the chart.  In this case, I selected Enrollment as the Y-axis field and Month as the X-axis field.

But wait a minute, Month does not appear in the dropdown list.  The trick here is that when you create a custom list, SharePoint populates that list with an initial default column named Title.  You cannot delete this column, but you can rename it to something that makes sense in the context of your custom list.  That is exactly what happened here because I renamed the original Title column as Month.  However, in the dropdown list to select fields, you still see the original column name, Title, which SharePoint continues to use internally.  Therefore, the following figure shows me selecting the field Title even though it will display Month in the resulting chart.

I also set the name of the series (rather than accepting the default) so that the series name can appear in the legend.  You can also customize other properties of the series, but this should be enough to get started.

Using the Series dropdown, you can then define a second series to be included on the chart.  In this case, I will choose Attendance for the Y-axis.  Again, you can customize other properties of this series and/or any other component of the chart.

When you finish customizing your chart, you may be surprised by the chart that appears on the web page.  It may look similar to the one below.  It may not be obvious, but this chart plots all of the data in the Open Lab Attendance list, not just the data for the selected year.  What happened?

The problem is simply a matter of timing.  The filter criteria need to be refreshed to limit the data displayed in the chart.  You can do this simply by clicking on one of the other years and then, if desired, clicking back on the year you wanted.  This will reset the data that the Open Lab Attendance web part passes to the chart web part.  The following figure shows an example after selecting the year 2010.

That’s it for this example.  Next time, we will look at using External Data sources with the Chart web part.

C’ya then.

Single Version of the Truth?

Have you heard the following phrase at your organization, "We need a single version of the truth around here"?  Sure you have.  It is often used as one of the reasons why organizations invest the time and effort building enterprise data warehouses.  All organizations have information created by different parts of the organization.  There could be financial information, manufacturing information, sales information, marketing information, productivity information, customer satisfaction information, employee performance information and information from many, many more areas.  The problem is that often the information resides in more than one place in the organization.  Sometimes the information is updated regularly.  Sometimes it is not.  Sometimes the data is detailed.  Sometimes the department only has access to summary data.  In any case, the sad fact is that reports generated by one group could be very different from reports generated by another group simply because they used a different version of the data or they transformed it differently.  Then there is the skill level of the person gathering the data and how well they understand the data schema.  Finally, if everyone who needs to use the data has to determine how to gather and transform the data they need for their report, they all would need the skills to access the different data sources.  They would need to know the data schemas in those data sources, and they would need to know how to transform data in the languages of those individual data sources.

Organizations use many of these arguments to justify the creation of a single team designated to create the Enterprise Data Warehouse and to populate it with the most recent data from all data sources.  By centralizing the data gathering function, users can be assured that the data is collected and transformed in exactly the same way each time it is updated.  They can write procedures to correct invalid data, account for missing data, and dedup data that may appear in multiple sources.  Even if the data comes from different sources such as an Oracle database, SAP, SQL Server, MySQL, or some other source, this central team can focus and tune the skills needed to create the extract and transform functions.  Once the data is centralized, users from any department in the organization can use simple reporting tools or even end-user analysis tools like Excel and PowerPivot tables to analyze and display their data.
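To make the dedup idea concrete, here is a toy Python sketch of a centralized load step that merges overlapping source feeds on a business key and keeps the most recently updated record.  (The feeds, field names, and records here are all invented for illustration; real ETL tooling does far more.)

```python
from datetime import date

# Two overlapping source feeds; 'id' is the business key (hypothetical data)
crm_feed = [
    {"id": 1, "name": "Acme Corp", "updated": date(2011, 3, 1)},
    {"id": 2, "name": "Globex", "updated": date(2011, 2, 15)},
]
billing_feed = [
    {"id": 2, "name": "Globex Inc.", "updated": date(2011, 4, 1)},  # newer copy of id 2
    {"id": 3, "name": "Initech", "updated": date(2011, 1, 5)},
]

def dedup(*feeds):
    """Merge feeds, keeping the most recently updated record per business key."""
    best = {}
    for feed in feeds:
        for rec in feed:
            cur = best.get(rec["id"])
            if cur is None or rec["updated"] > cur["updated"]:
                best[rec["id"]] = rec
    return sorted(best.values(), key=lambda r: r["id"])

warehouse = dedup(crm_feed, billing_feed)
for rec in warehouse:
    print(rec["id"], rec["name"])
```

Because the rule (newest record wins) is encoded once in the central load step, every downstream report sees the same resolution of the conflicting Globex records, which is exactly the consistency argument made above.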

After putting a system like this into place at an organization, it is often argued that no one outside of the data warehouse team should ever need to access the raw data to perform their analysis or reports.  They argue that it is the responsibility of the data warehouse team to pull data from the raw systems and generate the sanitized and homogenized data store for everyone else to use.  Because the end user community only has to learn how to access a single system instead of potentially dozens, they argue that this reduces the training time for these users and lets them become productive analyzing the data much sooner.

But what if the data, although it is gathered consistently the same way each time, is not correctly transformed when entered into the common data store?  What if the data dedup is not quite right?  What if older data overwrites newer data by accident?  If other groups are then discouraged from gathering the raw data themselves and transforming it, errors can remain hidden for a long time.  Depending on the severity of the error, management decisions based on this data could adversely affect the future of the organization.  A little extreme?  Perhaps, but the point is that any centralized system that is never challenged and taken as an absolute truth can harbor incorrect data that will come back to haunt you later.

In addition, some people in the industry recognize this problem and maintain that getting data to the masses for analysis cannot wait until the data is perfectly cleansed, because that could take so long that it reduces or completely eliminates the time available to effect the change the data indicates is needed.  But could the 'I need the data and I need it now' approach of management result in decisions with negative impacts on the organization?  The question is how the organization accounts for known data issues.  Is a little bad data worth the time saved to make a decision now?  Even if the data warehouse moves forward with these known data issues because some data is better than no data, what can we, as IT professionals, do?

While it is easy to say you can be more careful with the initial data extract and transformation processes, the simple truth is that no matter how careful we may be, errors do and will occur.  On the other hand, isn't it possible to treat the entire data handling process with an approach similar to that used for systems design?  What I'm talking about here is an iterative approach to data validation and cleansing.

Let’s take as a given that everyone involved in the data extract and transform does their best to accurately move the data into the data warehouse.  However, once the data is in the data warehouse it should never be assumed that the processes used are perfect and do not need to be re-evaluated.  While developers outside of the data warehouse team should not be spending all of their time reproducing the same data structures as the data warehouse team, discouraging them from pulling their own data from the raw data sources occasionally to validate the data they are using can be a mistake.  This is especially true when they ‘think’ something in the data is not making sense.  Just the same, they should not be allowed to create a permanent alternative to the data warehouse thus duplicating the effort.

The bottom line message is that healthy skepticism should never be discouraged.  On the other hand, creation of alternative data warehouses is a waste of resources and a direct violation of the goal of a single source of the truth.  Deciding how much to question the accuracy of the data warehouse is a difficult tightrope to walk.  Leaning too much one way or the other can be dangerous.  In any case, a single version of the truth should always be the goal, but it should not be accomplished at the risk of silencing all voices that ask for proof that the truth is correctly represented by that one voice.

See you later this week for more on working with the SharePoint Chart Web Part.

Adding Charts to Your SharePoint Pages

People have always used charts to help others understand tabular data and to help illustrate trends that may not be as obvious when just looking at the numbers.  In SharePoint 2010, you can add a variety of different charts directly to your pages.  In this blog post, I’ll show you how to get started with the Chart Web Part and SharePoint Lists and in future posts I’ll look at charting other data sources.

To begin, navigate to the site where you want to display your chart, create a custom list and add your data.  Of course there are many ways you can get that SharePoint list.  You could enter the data manually, copy the columns and rows of another data source, import an Excel spreadsheet, or maybe even use an InfoPath form to display form fields as metadata in the form library.  I’m going to start with a pre-defined table that tracks attendance for our SharePoint Open Labs which we hold every month.  We allow people to register ahead of each lab.  However, some of those who register do not show up and some of the people who show up did not register.  Suppose we wanted to compare the number of people who register to the number of people who attend each month.  Perhaps it would also be nice to know how attendance varies over the year.

After creating the data list, either open an existing page where you want to insert the chart to plot this data or create a new page.  With the page in Edit mode, select the area/zone where you want to place the chart and open the Insert ribbon from the Editing Tools group as shown in the following figure. 

The leftmost column displays the web part categories available to us.  Select the Business Data category and click on Chart Web Part in the Web Parts column to the right to select it.  When you click the Add button, the web part is added with a placeholder image as shown in the following figure.

Users with edit rights will see two options at the top of the chart: Data & Appearance and Advanced Properties.   Most of the properties you need to edit to create your chart are available through these two links.

Clicking the Data & Appearance link first, you may see a message box asking you to save your changes before continuing.  Click OK.   A page appears that further divides the available options into two groups.

The first, Customize Your Chart, helps you to define the appearance of your chart.  The second, Connect Chart to Data, lets you define where to find the data source for the chart.  Even though you have not associated a specific data source with the chart yet, you can define the properties that will control how the chart looks.  Clicking Customize Your Chart displays the following dialog.

The first step of this dialog lets you select the type of chart you want to create.  The Chart Type Categories on the left show 15 different chart categories.  Within each category you will find a selection of both 2D and 3D charts providing even more variety in your chart selection.  Although not shown here, I will click the Bar category which displays the bars horizontally and select the simple Bar template.

In Step 2 of the dialog, you can customize the overall appearance of the chart by selecting one of the predefined themes, drawing styles for the bars (in this case), transparency, size, and chart file format.  Each time you make a change, you can see a sample chart on the right using your selection.  If you find the Auto Preview annoying, you can turn it off and use the Chart Preview button to refresh the example chart on demand.  The following figure shows the initial selections I made for my chart.  You are not locked into these or any other selections.  You can return to the properties at any time and change any of the chart properties.  So go ahead and experiment.

Step 3 is actually divided into four individual dialogs because there are so many properties when defining chart elements.  The first page of this step lets you define a title for the chart and the legend.  In addition to defining the text to be displayed, you can define the font, the font color and the position of the text.  The circle of dots lets you select the position of the text relative to the body of the chart.

On the second page of Step 3, you can define how to display the x and y-axis.   The flexibility here is tremendous as shown by the following figure which only shows the properties for the x-axis.  A similar set of properties define the appearance of the y-axis.  You can also choose whether to display grid lines for each axis as well as define starting points, ranges, and logarithmic scales if the default property settings are not adequate.

Page 3 of Step 3 lets you determine whether to display labels for each data point and data markers.  In this example, the addition of the data labels will make it easier to determine the exact value of each bar.  Again you have control over the font, font color and format of the labels displayed.  You can even fine-tune the positioning of the labels if you do not like their default.

The final page of Step 3 lets you define hyperlinks and tooltips to each of the data series, legend, and labels.  Although I did not do it for this example, you might use the hyperlink or tooltip to display details about each data point in the series such as the name of the people who attended the Open Lab.

If you have been watching the example chart on the right of the dialog, you have seen the progression in the appearance of the chart, but we still do not have any real data associated with the chart.  In fact, when you click the Finish button on this page, the dialog will close and the formatted placeholder chart example appears on the page as shown below:

Next we need to associate data with our chart.  Click the Data & Appearance link one more time to display the first page of this wizard.  This time, click on the link Connect Chart to Data.

As shown in the next figure, you will have four choices for your data source for the current chart.  The first choice lets you connect to another web part for the data.  Since SharePoint 2007, some web parts can act as data providers to other web parts and some can consume data.  In this case, you must have at least one web part, such as a list, displayed on the same page as the Chart web part to act as the data provider. 

The second option, and the one we will use here, connects directly to a SharePoint list which does not have to be displayed.  We can then select columns from that list to use in the chart.

The third option uses the Business Data Catalog.  In previous entries I showed how to create external content types and then used those content types to define external lists and columns in custom lists.  As you might suspect, you can also consume the data from those external content types directly in a chart to graphically display data using the Chart web part.

The last option lets you work with data supplied through Excel Services.  Since Excel Services works with tables within an Excel spreadsheet in which each column has a column name, we can reference the columns we want to display using the Chart web part.  Of course, you could also create the chart within Excel and publish it along with the rest of the Excel data to SharePoint.  The primary difference is who defines the chart: does the Excel spreadsheet creator define the chart when creating the spreadsheet, or do you want other content creators to 'consume' the data from the Excel spreadsheet in their own charts?

The next step in defining our data source after choosing a SharePoint list is to select the site that contains the list as well as the list itself.  Note that libraries are included in the dropdown of the available lists (see figure below).  Remember that libraries are just a special form of list that includes a document, image, or page along with the other metadata about the item.

Step 3 in defining the data source shows a preview of the data. Often the entire list contains more records than you may want to plot in the chart so you also have the option to define filters to limit the data included in the chart.

The last step in defining the data source for the bar chart lets you include one or more data series.  Each data series is represented by a different data field in the list.  Other fields in the list might represent labels, hyperlinks, or tooltips.  Data in one field can also be grouped by using values in another field.  While two data series are created for the default chart, you can remove the second data series by selecting it from the dropdown under Series and then clicking the minus (-) button.  You can also add additional data series by clicking the plus (+) sign, naming the series, and then associating it with a field.

 When you click the Finish button on the last page of the dialog, you will be rewarded with the image of your new chart as shown in the following figure.

But you don’t have to stop there.  If you edit the Chart web part properties directly (as you would for any other web part), you can change the chrome settings.  You can even change the chart border properties (although you can also change those properties by selecting the Advanced Properties link when editing the page).

Perhaps you are asking whether everyone can see the two links above the chart allowing them to make changes to your carefully constructed chart.  Fortunately the answer to that concern is that only users with modify content or edit rights can see these links and thus make changes to the chart definition.  Users that can only view your pages will see your final chart as shown in the following and last figure of this post.

In a future blog post, I will illustrate some of the other data options you can use with the Chart web part.  See you then.

The SUBTOTAL Function Can Do That?

One often overlooked function in Excel is SUBTOTAL().  This function can not only add a series of numbers together, but it can also perform any of the typical statistical aggregates.  Suppose you have a simple table of test scores by individual student as shown in the following figure:

You are probably familiar with the AVERAGE(), SUM(), MIN(), and MAX() functions, but did you know that you could do everything these functions do with a single function and more?  First let me list the possible values for the first argument of the SUBTOTAL() function along with the corresponding individual functions.

SUBTOTAL Function Regular Function
=SUBTOTAL(1,ref1,[ref2],…) =AVERAGE(number1,[number2],…)
=SUBTOTAL(2,ref1,[ref2],…) =COUNT(number1,[number2],…)
=SUBTOTAL(3,ref1,[ref2],…) =COUNTA(number1,[number2],…)
=SUBTOTAL(4,ref1,[ref2],…) =MAX(number1,[number2],…)
=SUBTOTAL(5,ref1,[ref2],…) =MIN(number1,[number2],…)
=SUBTOTAL(6,ref1,[ref2],…) =PRODUCT(number1,[number2],…)
=SUBTOTAL(7,ref1,[ref2],…) =STDEV(number1,[number2],…)
=SUBTOTAL(8,ref1,[ref2],…) =STDEVP(number1,[number2],…)
=SUBTOTAL(9,ref1,[ref2],…) =SUM(number1,[number2],…)
=SUBTOTAL(10,ref1,[ref2],…) =VAR(number1,[number2],…)
=SUBTOTAL(11,ref1,[ref2],…) =VARP(number1,[number2],…)

 (Of course you can include references to individual cells or cell ranges in these formulas.  Cell ranges are defined by the cell address of opposite corners of the grid separated by a colon as in an individual column: B2:B5 or a row: B2:D2 or an area made of rows and columns (B2:D5).) 

By default, the functions listed in the above table include all rows and columns in their calculation, even hidden rows or columns.  But what if you wanted to perform the function only on the rows and columns that are visible?  Simple.  Just add 100 to the first argument.  In other words, to calculate the average of only the visible rows in the table shown above, use =SUBTOTAL(101,B2:B5).  This ability to ignore hidden rows and columns is not shared by the regular functions such as =AVERAGE(), which always include all rows and columns whether they are hidden or not.  (See the figure below in which row 3 has been hidden.)
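To make the function-number mapping and the add-100 trick concrete, here is a small Python sketch of the same idea.  This is not how Excel implements SUBTOTAL internally; it just emulates the behavior, with a `hidden` set standing in for manually hidden rows (and it glosses over the COUNT vs. COUNTA distinction between numeric and non-empty cells).

```python
import math
import statistics

# The function-number-to-aggregate mapping from the table above;
# the names on the right are the Excel equivalents.
AGGREGATES = {
    1: statistics.mean,        # AVERAGE
    2: len,                    # COUNT  (simplified: counts all values)
    3: len,                    # COUNTA (simplified: counts all values)
    4: max,                    # MAX
    5: min,                    # MIN
    6: math.prod,              # PRODUCT
    7: statistics.stdev,       # STDEV
    8: statistics.pstdev,      # STDEVP
    9: sum,                    # SUM
    10: statistics.variance,   # VAR
    11: statistics.pvariance,  # VARP
}

def subtotal(func_num, values, hidden=None):
    """Emulate =SUBTOTAL(func_num, range).

    Function numbers 1-11 include every row; 101-111 skip the rows
    whose indexes appear in `hidden`, just as Excel skips hidden rows.
    """
    hidden = hidden or set()
    if func_num > 100:
        values = [v for i, v in enumerate(values) if i not in hidden]
        func_num -= 100
    return AGGREGATES[func_num](values)

scores = [85, 92, 78, 95]
print(subtotal(1, scores))                # AVERAGE over all rows -> 87.5
print(subtotal(101, scores, hidden={2}))  # AVERAGE skipping the hidden row
```

The second call drops the 78 before averaging, which is exactly what =SUBTOTAL(101,B2:B5) does when row 3 is hidden.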

Another interesting feature of the SUBTOTAL() function is that it ignores any cells in the included range that contain other SUBTOTAL() formulas.  The following figure shows the grades for two small classes.  (Of course, you could imagine a larger number of classes with more students in each.)  As you can see in the figure, we can easily use SUBTOTAL to calculate sums, averages, counts, variances, and so on across the entire column range while correctly skipping the per-class subtotal rows, which the regular functions do not skip, resulting in the incorrect results shown in the cells at the bottom left of the figure.

Hope this satisfies your need for ‘cool’ Excel technology while I talk about some SharePoint issues over the next few weeks.

Does Communication Matter in IT?

How well do you communicate?  I don’t mean just how well you can talk to one of your programming buddies about the latest trick you came across or your latest achievement in World of Warcraft.  I’m talking about how well you can talk with people who are not technical.  Are you shy?  Do you feel awkward? Don’t worry.  It seems to be a common thing among many programmers.  While I’m not about to go into possible reasons why that may be so (mainly because I don’t have a clue), I will talk here about some reasons why you need to work harder at communication and provide a few ways to help you do that.

In the book “Does IT Matter?”, written by Nicholas Carr a few years ago, Carr makes these observations (somewhat paraphrased):


“As IT becomes more integrated into an organization, mitigating risk takes priority over innovation.”


“In mature IT organizations, reducing costs takes priority over making investments.”


Does that sound like your organization?  Especially over the last few years, has your IT department been cut, staff reduced, and projects scaled back or eliminated entirely?  Is spending time and money investigating new capabilities, and maybe even prototyping some of them, a thing of the past?  Is that the right direction?  Does your organization’s management look at IT as just another cost center?  Or have these changes been a short-term fix that risks longer-term benefits your competitors may seize?

IT is no longer in the basement of your company.  Rather it should be an equal partner with other business groups in leading your organization to take advantage of even bad times such as the past few years.  Some might even venture the opinion that during bad times it is just as important for companies to consolidate around their core practices as it is for them to push their R&D groups and even their IT departments to gear up for the next ‘big’ thing which is sure to be just around the corner when the current recession ends.

How can IT help the organization prepare for that turn-around if it is task focused rather than people focused or business focused?  How can IT help other departments take advantage of new technology that may give the organization an edge over its competitors when business picks up?  Is IT just committed to its existing ideas, or is it talking with other departments to discover new ways to help them save money through more effective use of existing or new technology?  Can your IT staff even talk to managers in other departments to ‘sell’ new, more efficient ways of doing things without drowning the non-technical business managers in a sea of technobabble?

In order for IT to show the value of its work, its software, and its solutions to other departments, it must develop leaders who can communicate effectively with those departments.  One of the things I learned years ago from a fellow Visual FoxPro consultant was the key to being successful in any consulting engagement: listen carefully to the customer to uncover their pain points, then develop a solution to remove, or at least reduce, those pain points.  Never oversell the solution, but always perform above the user’s expectations.  Keep the process simple.  In fact, it must appear to be simpler than the current process or you will have a harder time selling the benefits, no matter how great they are.

So how do you do this?  Get your people out from behind their desks and away from their cubes.  Take them to meetings.  Encourage them to talk with the customers about your services to find out what works and what does not.  Check back with prior clients to see how things are going and whether there is anything else you can do to help them.  If your department has a periodic newsletter, encourage co-workers to write about their recent work for other customers and how it saved time, money, or both.  If you get a chance to make a presentation about a project to the rest of the company, take the opportunity to show how your department has helped another department’s bottom line.  If you can write an article for a trade journal or speak at a conference (some topics may require prior company approval), do so.  But no matter what communication channel you use, be mindful of the level of technical detail you use so you do not put your listeners to sleep or come across as an arrogant know-it-all.  Use every user problem as an opportunity to ‘train’ that user a little more about your product or system.

The bottom line is that it is up to you to determine whether IT matters to the rest of the organization.

Create Your Own Custom List For Sorting Pivot Rows and Columns

Last week I showed how you can use a custom list to order the months of the year for rows in a pivot table.  Perhaps you already guessed that the same technique works for column labels as well.  But did any of you try to sort using the three-character month names rather than the full month names?  If you did, you found that it did not work.  Were you surprised, and why?  The answer is that a custom sort list only works on cells that have exactly the same values as the list entries.  Since the original data displayed the full month names, all Excel can do is match the cell values with the values in the list to define the sort order.  It does not know that August is the same as Aug or that Dec is the same as December.  Sort of makes sense now, doesn’t it?

So what if you have to sort data that is not in one of the existing custom lists?  After all, Excel only includes four custom lists out of the box: two for days of the week and two for month names.  What if you have other labels that you want to sort in a custom order other than alphabetical?  A recent example is a pivot table I created using some school data.  (Before anyone thinks that I’m revealing confidential school/student data, just stop.  The data in this example, as well as the example itself, has been totally revised.)  In the figure below, I’m showing the first few rows of absence data collected by teachers of a fictitious elementary school.


The first six columns were ‘loaded’ from an application that collects absences from each teacher for each grade for each day.  The remaining columns were calculated using the following formulas:

Month =MONTH(A2)
Year =YEAR(A2)
WeekNumber =WEEKNUM(A2)
MonthLabel =TEXT(A2,”mmmm”)


These additional columns will allow us to ‘group’ data in the pivot table by natural date ranges such as month, year, or week number.  You may be familiar with all of these formulas except the last one.  The TEXT() function allows you to display different parts of a date/time as either integer or character data.  When we reference a date cell and use a parameter like “mmmm”, the function returns the full month name of the date.  On the other hand, a parameter like “mmm” returns the three-letter abbreviation of the month name.
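For readers who think in code, the same derived columns can be sketched in Python using a sample date (the date itself is hypothetical).  One caveat: Python’s ISO week number can differ by one from Excel’s default WEEKNUM(), which counts weeks starting from January 1.

```python
from datetime import date

d = date(2012, 3, 5)  # a hypothetical absence date

month = d.month                   # =MONTH(A2)        -> 3
year = d.year                     # =YEAR(A2)         -> 2012
week_number = d.isocalendar()[1]  # ISO week; Excel's =WEEKNUM(A2) may differ by one
month_label = d.strftime("%B")    # =TEXT(A2,"mmmm")  -> "March"
month_abbr = d.strftime("%b")     # =TEXT(A2,"mmm")   -> "Mar"
```

The last two lines show the full-name versus three-letter distinction that matters for custom sort lists: “March” and “Mar” are different strings, so a list built from one will not match the other.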

Given these columns, you can select all of the data, including the header row, and select Pivot Table from the Insert ribbon.  Displaying the new pivot table in a new worksheet, I proceeded to define my pivot table. 

I dragged the Absences field to the Values area and accepted the default which sums the absences.  Right off the top, I now know how many absences there were in the entire time period for all grades at the school.

I then added Grade to the Row area, and under Grade I added LastName because the natural hierarchy is to have teachers within a grade, not grades within a teacher. 

Finally, I added MonthLabel, one of my calculated columns, to the Column area.  The resulting pivot table, which Excel immediately calculated, is shown in the following figure.


This report would be great, and I could print it and say I was done, except that the grades appear to be in a random order.  Of course, the order is not random.  It is alphabetical.  However, alphabetical is not the natural order of grades in an elementary school.  So how can I change the order of the rows to display the grades in their natural order? 

The problem is that the available custom lists do not include a list that orders elementary school grade names.  The natural question is whether you can add a new custom list to the four existing ones.  Fortunately, the answer is yes and it only requires a few steps.

If you are using Microsoft Office 2010, click the File tab above the ribbons in Excel. In the Back Stage area (which really is what the page that appears is called), select Options from the left menu.  Excel has many options, more than can appear in a single dialog, so Microsoft developed a paged version of the Excel options in which each page is identified by an option group name shown in the column on the left of the Options page, as shown below.

Select the Advanced group of options and scroll down through the options, if necessary, until you find one that reads: Create lists for use in sorts and fill sequences.  Click the button after this label with the caption: Edit Custom Lists…

This displays the Custom Lists dialog box, which contains two list controls along with several buttons and a text box.  Initially, the first list displays the four lists we saw last week, but it also has an option at the top that reads: NEW LIST.  When this option is selected, the second list box appears empty, because a new list must begin empty.  In this second list box, type each value you want in the list in the order that you want sorted items to appear.  Each value must be entered on a separate line with no blank lines between values. 


When you have finished entering all of the list values, simply click Add to create the new list.  The new list now appears at the bottom of the first list box while its values continue to display in the second.  You can edit, delete, or change values and click Add a second time to save any changes, as shown in the figure below.  Do not use the Delete button to delete individual values; this button deletes the entire selected list.  If you have no additional changes, clicking OK closes the dialog.


Now we can use our new list just like we used the default custom lists last week.  Go to the pivot table and click the dropdown button to the immediate right of Row Labels.  Because there is more than one field in the row labels area, you must first select the field whose sort order you want to customize.  For this example, I needed to make sure that Grade was selected.  Then, in the dropdown, select More Sort Options.


In the sort dialog, define whether you want the custom list sorted ascending or descending.  For this example, I will select Ascending because I want the lower grades at the top of the list.  Make sure that the dropdown beneath the sort direction continues to point to the field that you want to sort, Grade in this case. 


Click the More Options button to display the dialog that lets you choose which custom list you want to use.  In the More Sort Options dialog shown below, deselect the AutoSort option and then select the custom sort list you want to use.  The figure below shows the grade sort beginning with Pre-K selected as the custom list to apply to the row field.

Click OK to complete your selection and close the More Sort Options dialog.  You will also have to click OK in the Sort dialog.  Returning to the pivot table, you should see that the grades have been re-sorted based on the order defined by the custom list.  It is that easy.  You can perform this same procedure for any number of row fields, column fields, or both.  The figure below shows the re-sorted pivot table.

After creating the first pivot table, you can go back to the data and create a second pivot table, perhaps using different row or column values, such as the one shown below, which adds a day-of-the-week field and uses a custom list order to ensure that the days are displayed chronologically across the top of the pivot table.
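The idea behind a custom sort list is simple enough to sketch in a few lines of Python: the list defines a rank for each label, and sorting by that rank instead of alphabetically produces the natural order.  (The grade labels below are hypothetical, standing in for the fictitious school data.)

```python
# A custom list in the grades' natural order; a rank lookup turns it
# into a sort key.
grade_order = ["Pre-K", "Kindergarten", "First", "Second", "Third", "Fourth", "Fifth"]
rank = {grade: i for i, grade in enumerate(grade_order)}

# Row labels as Excel first shows them: alphabetical.
row_labels = ["First", "Fourth", "Kindergarten", "Pre-K", "Second", "Third"]

# Any label missing from the custom list simply sorts to the end.
sorted_labels = sorted(row_labels, key=lambda g: rank.get(g, len(grade_order)))
print(sorted_labels)
```

This also makes last week’s lesson obvious: the lookup is an exact string match, so “Kindergarten” in the data will never match a hypothetical “K” entry in the list.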


Next time for our weekly technical discussion, I will switch over to SharePoint 2010 to look at the new Chart web part.  See you then.

To Delegate or Not To Delegate

This week I want to take a personal road and talk a little about delegation, specifically delegation related to managing your team at work. Five years ago, I was given responsibility for a small team tasked with developing a new portal for our organization using SharePoint.  SharePoint was a new tool for all of us, and we spent much of the first year just learning how the platform worked.  To make matters more challenging, much of our team was new not only to SharePoint but also to our organization.  I had no senior leads to help work with them.  (I still don’t.)  It was very much the blind leading the visually impaired.  Somehow, we made it through the early years, deploying our Internet, intranet, and school public-facing sites, and most recently collaboration.  While we had some consultant help for the first six months, we have been on our own since the summer of 2007, to the point where we now support over 4300 sites.  We did some things right and some things wrong, but overall, I’m proud of what our team has accomplished.

With that said, the hardest thing for me to learn as I moved into management was to delegate more and more of the work to my team.  I was used to being solely responsible for the tasks given to me.  I don’t think that is anything unusual.  In fact, I suspect that most programmers need to feel in control of the work they are responsible for.  Giving up some of that control is hard.  On the other hand, it is the only way to get major projects done.  So here are some of the ‘secrets’ I’ve learned over the years.

Keep your entire team informed about the direction you are going in, even if that involves multiple projects.  Make sure they see the big picture, not just the code they are working on this week. While you need to be open about the path taken to get to the destination, you can never compromise on the destination itself.

Encourage interaction between the team members.  Just because one team member has been assigned a task should not preempt another team member from making suggestions or even questioning why a process is being done in a certain way.  This means making sure that all team members are open to talking about their work with other team members. Open discussions should not be limited to a weekly summary at a team meeting but should occur throughout the week whenever someone encounters a problem they cannot solve on their own.

Find out what each of your team members is good at and try to allocate tasks so they get work that they enjoy and in which they can excel.  With any team, individuals will naturally begin to specialize in slightly different aspects of the projects your team gets assigned.  Some of this might be due to natural abilities, but I believe some is also due to a need to differentiate themselves from their co-workers so that they can be the expert in at least a few areas where other team members are not as strong.

Provide a feeling of participation to all team members by inviting different members to project meetings based on the topic to be covered and the work that team member is doing. If they are so inclined, let them make part of the presentation at the meeting.  This may take time especially for a team member who prefers to work by him/herself behind the scenes.  Start with little things, but then get them involved more and more until you can trust them to work with clients on their own without your presence.

Encourage your team to learn new things, especially things outside of their area of expertise.  Sometimes a good developer can also be a good trainer, or documentation writer, or a presenter.  They may have a knack for working with clients to come to agreement on needs and wants of projects.

Training money these days is often scarce.  So encourage your team to look for other ways to improve their skills (or as we say, sharpen their saws).  Local conferences can be a lot cheaper than conferences in another state, especially if there is no travel expense and maybe no lodging.  On-line training courses can be significantly cheaper than in-person training.  Webinars, local user groups, books, and trade magazines can fill in the gaps as well. The key is to make sure that these resources are actually used and that the training is something that the team member can put into practice immediately.

Does that mean that the manager can delegate everything and let the department run on autopilot?  No. Ultimately, you are still responsible for the work done, or not done, by your team.  However, the Information Technology manager who is not familiar enough with the technologies used by his or her group to pitch in and help during heavy workloads is quickly perceived as just a boss and not a leader.  This does not mean that they need to be an expert in every technology their people use.  However, they must have at least a fundamental understanding of how the technologies work so they can be a resource when the team is stuck on an issue, is divided over the approach to solving a problem, or simply has too much to do.  Of course, maybe my perception on this is due to the fact that I am still trying to home-grow that team lead I never had.

Ultimately, I’ve learned that, to some extent, you need to let your team find its own path.  With a clear understanding of what needs to be done and with some help from me, I think we have moved closer to utilizing each team member’s strengths. Are we 100% of the way there yet?  No, but we are getting close.  If that means giving up some of the tasks that I used to like doing, well, I can tell you that there is also a great deal of satisfaction in seeing your team succeed.

So if I delegate all of the work I used to do to my team, what do they need me for?  Right now I like to think of myself as the SharePoint evangelist within our organization.  I look for and prototype new ways that departments can use SharePoint and some of our other tools.  I’ve talked to departments about using metadata to create document libraries that are easy to navigate without a ‘rat’s nest’ of folders.  I’ve talked to other departments about digitizing forms so they might be able to go paperless.  Recently, I’ve been working with a few groups to combine SharePoint surveys with Excel pivot tables to analyze the results, both to prove suspected relationships and, hopefully, to uncover new ones.  While I am doing all of this, I know the rest of my team will handle the details, all because I learned how to delegate more.

Sorting Months in my Contoso Pivot Table Example

Before I begin, I’d like to say ‘Thanks’ to any of you who came to SQL Saturday BI Edition in Tampa yesterday.  It was a very successful event.  This could be the beginning of a new branch of SQL Saturday events dedicated specifically to BI topics.

Because I was out of town yesterday and did not get a chance to create a post, I’m going to at least give you all a quick tip for working with pivot tables (or PowerPivot) in Excel.  I’ve been doing a PowerPivot presentation using the Contoso data set at a few SQL Saturdays now, and one of the problems someone asked me about was the sorting of months.  I had created a simple pivot table with product categories as columns and years as rows.  That part looked fine, as shown in the figure below.

Then I added the CalendarMonthLabel field.  It did add the calendar month names, as shown in the next figure, but the problem was that the months sorted alphabetically.  Honestly, I had not noticed during the presentation.  So now that someone called me on it, I started looking for a solution.  I could use the CalendarMonth field, which represents the date as a numeric string such as 200706 for June 2007.  That is great for sorting purposes, but it is unacceptable for presenting data to the average person.

So the problem was how to display the months in chronological order rather than alphabetical order. It took a while, but then I remembered an Excel feature in which you can define custom lists to use when filling in cells.  For example, when you enter January in a cell, you can then select the handle on the bottom right corner and drag horizontally or vertically to add sequential month names to each cell that you drag through.  So I started thinking about whether I could somehow use that ordered list in my pivot table as well.  After clicking around a bit, I found a solution.

First, click the arrow for the dropdown menu to the right of Row Labels.  This displays the menu shown in the figure below, which I had used many times before for sorting and filtering the field values in my pivot table rows and columns.

Warning: Before doing anything, make sure you select CalendarMonthLabel from the dropdown at the top of the dialog.  This dialog only appears when you have more than one field in the row hierarchy.

The rest of the menu includes options that let you change the sort direction (ascending or descending), and it also allows you to define a filter by selecting which row values you want to display.  Sort of hidden between these two is an option that says: More Sort Options.

Clicking this option displays a Sort dialog for the currently expanded field, in this case CalendarMonthLabel.  By default, the sort order is Data source order.  Obviously, this option is not going to give us what we want.  The second option in the list allows you to manually set the order by dragging the items to rearrange them.  But whenever you can do something automatically rather than manually, automatic is the way to go.  So is there an automatic way to get the months sorted?  Actually, you want to select the third option, Ascending (yes, you could also use Descending if you want to show the most recent months first), and then click the More Options button in the lower left of the dialog, as shown in the following image.

This brings up another dialog in which the default, Sort automatically every time the report is updated, is selected.  Deselect this option to activate the next option, First key sort order, which displays a dropdown that, when opened, lists various ways to ‘sort’ day-of-the-week and month names.  For this example, I’m going to select the full month names and click OK.

You will need to click OK to exit the sort dialog as well.

Now the months in each year are sorted chronologically rather than alphabetically, which will make the consumers of this pivot table a lot happier.
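What Excel’s First key sort order is doing under the hood amounts to sorting the labels by their month number rather than by their spelling.  As a rough Python illustration of that idea, parsing each full month name recovers a chronological sort key (this assumes an English locale for the month names):

```python
from datetime import datetime

# Month names as the pivot table first shows them: alphabetical.
months = ["April", "August", "December", "February", "January", "June"]

# Parsing each name with %B yields its month number (1-12),
# which serves as a chronological sort key.
months_chrono = sorted(months, key=lambda m: datetime.strptime(m, "%B").month)
print(months_chrono)
```

The same approach with "%b" would handle the three-letter abbreviations, which is why the month-name dropdown in Excel offers both variants.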

That’s it for today.  Hopefully this week I’ll get back to my normal schedule.  See you next time.

Life-Long Learners

I used to teach some evening classes at the local community college.  Actually, I’ve been teaching evening classes at a variety of places since around 1980.  But in teaching college students, it became obvious to me that there were two types of students.  (Ok, this is not a binary joke that begins with: there are 10 types of computer students, those that get it and those that don’t.)

The first type of student was very enthusiastic about learning whatever technology I happened to be teaching that semester.  These were the students who always read the textbook material before class, paid attention in class, and turned in homework that was not only on time but also went beyond the requirements with additional features and functions.

The second type of student would come into class with no idea of what was going to be covered that night, played solitaire on their computers during the lecture and turned in their homework barely on time or perhaps a little late with the minimum of features and functionality to get a passing grade.  (You may argue that there was a third type that never read the book or did the homework and missed every other class, but I submit to you that they never graduated or at least not within the IT curriculum.)

But what made one group so different from the other?  Why did some students not only excel but appear to want to go deeper into each topic, while other students spent more time trying to figure out the minimal amount of work that would still get them a passing grade and eventually a diploma?  After years of seeing this same pattern, I think the answer involves a curiosity or desire to learn as much as possible, a need to understand how things work and why, and a passion for making a difference with what they have learned.  Typically this went beyond the mere desire to get a job in the latest new technology or in a field they perceived as paying a ‘great’ salary.  Rather, this high-performance group can be better classified by their desire to continually re-invent themselves every few years to keep up with the changes in technology.  I suspect it also keeps them motivated by continually challenging them. 

Eventually, the other group finds themselves working at a company or organization that is barely getting by, where innovation is absent, where the excitement of what they are doing is gone, and where staying the course has become the norm.  Eventually these people drift into other careers, either out of boredom or through layoffs as those companies shrink or go out of business.

It was because of this realization that I would often tell my classes that if they thought they could go to school for a few years, graduate, and then get a ‘cool’ job where they would never have to pick up another book again, they were kidding themselves and wasting their own money or the money of their parents.  Technology, and specifically computer technology, is changing at a rapid pace that requires successful people to continually learn new things. I’m not implying that you should jump from one technology to another every month or even every year (although I know some changes appear to come out that frequently).  However, I would have you consider a continuous growth in your skills in your current technology of choice, along with an eye on trends that may lead to a major change in your career path every 5 to 10 years.

What do I mean by continuous growth?  Well, there are books, on-line webinars and training videos, conferences, and even local user groups.  For example, this Saturday is a SQL Saturday event in Tampa FL ( focusing on BI (Business Intelligence, if you haven’t been keeping up with the ever-changing acronyms).  In fact, there are a couple of other SQL Saturday events around the country on the same day.  These events offer a full day of training, typically for free.  Can’t beat that price!

What do I mean by an eye on long-term trends?  Take the example of the growth of the Internet since the mid-1990s, or of web application development versus COBOL or FORTRAN applications on mainframes, or more recently the explosion of SharePoint as an office tool platform since 2007.  Most recently, the latest ‘bright shiny object’ is BI, especially with all the new capabilities offered through SQL Server 2012, SharePoint 2010, and PowerPivot for Excel and SharePoint.

So are you spending at least some time with or without your organization’s support learning these new things?  If so, I salute you.  You are definitely a life-long learner.

BTW: I may not find time for a technical blog entry this weekend as I will be at the Tampa SQL Saturday event where I will be conducting a session on using External Data Content Types in SharePoint 2010 (  On the other hand, maybe I’ll do a review of the event.  Follow us on Twitter with the hashtag: #SQLSAT86.