It Is Not About The Metadata!

A while back I was at a meeting in which the topic of discussion was how hard it was to find any documents on the SharePoint portal.  The site collection in question was created four years ago, and while the initial structure of the site made sense at the time, aligning itself along department and function boundaries, things have changed since then.  More importantly, many of the original site owners have moved on and new site owners were never selected.  That was problem #1.

Users of the site were encouraged to place documents on the site.  However, they treated the site much like they would a network share, dumping everything into a series of nested folders within folders within folders within folders…  Well, you get the idea.  The site nesting was not much better.  As a result, getting to any file required navigating a complex series of branches that could easily confuse the person looking for it.

Problem #3 is related to problem #1.  Obsolete files and information from one, two, and even three years ago still sit on the site and have not been updated.  I say that this problem is related to problem #1 because when a site does not have an active owner, no one looks at the content of the site with a critical eye as to what should be there and whether the information has been kept up to date.

None of these problems can be solved by magic.  If only that were true.  Switching to a new software product will never fix a problem if the users continue to try to work the way they always did before.  New systems give you an opportunity to change the way you work.  Hopefully that leads to better systems.  But trying to force a new system to work the same as the old system and then claiming that the new system is no better than the old one is just plain silly.

So why not just use the SharePoint search functionality to find the files?  Great idea if you craft your search around moderately unique words or phrases found within the document, something I do all of the time.  The problem is that it takes more creativity than perhaps an equivalent search using Google.  But that is not a fair comparison either, because Google constantly performs analytics over millions of searches to determine which searches are successful and then ranks results based on that information, among other techniques.

Unfortunately, doing the same thing in SharePoint requires additional work that many smaller organizations do not have the time or resources to do.  So the solution I offered was to restructure the sites and folders to flatten the structure considerably while at the same time adding metadata to the libraries to classify the documents.  Yes, it would take time to determine what that metadata should be.  But once defined, we could add that information back as managed metadata to help classify future documents.  After all, search can also take advantage of metadata to help narrow your results.

But the answer I got was that metadata was ‘too hard’ for the system’s users to figure out and manage.  Then the person who said that it was too hard started to pitch a different system in which people could store the data that would make it easier to find individual documents.  After moving most of the documents out of SharePoint into their new system, they would consider restructuring what was left in SharePoint.

The whole argument of ‘too hard’ struck me as odd coming from a professional. I remember a time many years ago when I was told by a secretary that using a word processor was ‘too hard’ compared to using her trusty typewriter.  Fortunately, the department head at the time believed in the future of PCs and convinced the secretary to just try it.  Of course, the test was rigged a bit.  The department head asked the secretary to type a three-page letter that needed to get out, and then, each time the secretary finished, proceeded to edit the resulting letter, adding, deleting, or otherwise modifying it.  Using a manual typewriter meant retyping the entire letter.  Next he took a progress report to be sent to management and asked the same secretary to use the PC to type the report and then print a copy for his review.  Again he mercilessly modified the report.  But this time, the changes could be made quickly by entering just the changes, not retyping the entire report.  After a ‘few more’ documents were created both ways, the secretary actually came to the manager asking to have the typewriter removed so she would have room on her desktop for the computer and keyboard.

Another example of ‘too hard’ is when we tell children that math and science are ‘too hard’.  If enough people, including peers and adults, tell children this lie, they begin to believe it and stop even trying.  In fact, they begin to use the excuse that it is ‘too hard’ when they don’t do well on homework and tests.  Since many adults believe math is too hard, they accept the excuse and give the child a pass on their poor grades.

I was recently having a conversation with another professional about my age, and when I told him what I do for a living, his response was, “Isn’t that too hard for a person our age?”  Really?  If it were socially acceptable, I would have slapped some sense into him right then and there.

But saying that metadata is too hard is not what really ticked me off.  A few days after this initial discussion about how to better organize the files in this SharePoint site, the person who told me that metadata was ‘too hard’ pitched their concept to management using this other non-SharePoint tool, which would use ‘metadata’ to classify the documents and make them easy to find.

Stunned! It is really not about the metadata, is it?  It is really not about how hard or how easy it is for users to learn a new system, is it?  It is really not about the best use of corporate resources and using what one has to the best of its abilities, is it?  It is really just about pushing a personal agenda.

C’ya next time.


Putting Quality Into Your Data – Part 1

Over the next several weeks, I want to explore with you a new feature of SQL Server 2012 called Data Quality Services.  To anyone who may ask whether this represents a digression from the months I have spent talking about PowerPivot and Analysis Services this past year, I just want to say that I do not see it that way.  You cannot get reliable results from your data analysis if you begin with bad data.  Some of you may be familiar with the acronym GIGO, meaning Garbage-In, Garbage-Out.  Performing analysis on data that has missing or incorrect values can lead to erroneous results and decisions.  Therefore, cleaning your data becomes a natural and necessary first step before importing it into your PowerPivot or Analysis Services model.

Many things can go wrong with your source data.  While I may not be able to identify all of the causes, allow me to categorize some of the more common reasons for data issues that could affect your analysis.

Missing data – Unless every field in a table is required, users are likely to skip the ‘optional’ fields either accidentally or on purpose.  On the other hand, the solution may not be as simple as making all fields required.  For example, suppose you have a table of customer information including the customer’s address.  One of the address fields is ‘State’.  Since most of your customers are from the United States, you might be tempted to think: ‘Why not just make the State field required?’  For most of your customers, this change would guarantee that each address had a state associated with it.  However, you also have sales that come from other countries, countries that do not have states or even their equivalent.  If we made the state field required, we would have to enter bogus information into this field for customers who did not have states and then somehow eliminate the bogus data from the reports without eliminating the customers themselves.  On the other hand, leaving the field blank means we could have missing information: a blank could be a United States customer who did not enter their state, or it could be a customer from a country without states.  The point is that we might not know which case applies, at least not without additional analysis time and perhaps expense.  Using bogus data in the report could lead to ‘odd’ results, and ignoring it could skew the analysis results as well.
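As a quick illustration of how to size up the problem before choosing a fix, the following query is a minimal sketch that assumes a hypothetical Customer table with Country and State columns:

-- Count customers with a missing State, broken out by country, to see
-- whether the blanks are true omissions (United States customers) or
-- customers from countries that simply have no states.
SELECT Country, COUNT(*) AS MissingState
  FROM Customer
 WHERE State IS NULL OR State = ''
 GROUP BY Country
 ORDER BY MissingState DESC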

Inconsistent data often results from fields in which users enter values using a textbox rather than selecting values from a list of possible values.  It would be nice to limit users to a list of possible values in all cases, but at times the number of possible values does not make that feasible.  While the list of states in the United States is a relatively small list that lends itself to a dropdown control, suppose for a second that you left this field open for free-form input. For my adopted state of Florida, I could get records with values such as Florida, FL, and Fla.  None of these values is, strictly speaking, wrong, but any analysis on state would require complex logic to group records with these three state name values together.
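A simple aggregate query shows just how scattered a free-form field can become.  This sketch again assumes a hypothetical Customer table:

-- List every distinct spelling entered for the state along with how
-- often it occurs; Florida alone might appear as Florida, FL, and Fla.
SELECT State, COUNT(*) AS Occurrences
  FROM Customer
 GROUP BY State
 ORDER BY State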

Invalid data is what most people think of when you mention data quality problems.  This is data that is just wrong.  Again using the state field in a customer table as a free-form input box, suppose that a user enters the value ‘ZZ’.  While we may recognize this value as one that does not represent a state, a computer program will not unless it can compare the entered value to a list of acceptable values.  Invalid data can consist of anything from misspellings to wrong words that are spelled correctly.  For example, until data types became smart enough to recognize valid dates without a complex user-written validation routine, systems may have allowed dates such as 02/29/13.
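That comparison to a list of acceptable values is easy to express in SQL.  The sketch below assumes a hypothetical StateList reference table with a StateCode column:

-- Flag entered state values, such as 'ZZ', that do not appear in the
-- reference table of acceptable values.
SELECT c.State, COUNT(*) AS BadRows
  FROM Customer AS c
  LEFT JOIN StateList AS s
    ON c.State = s.StateCode
 WHERE s.StateCode IS NULL
   AND c.State IS NOT NULL
 GROUP BY c.State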

Inaccurate data may seem to be the same thing as invalid data at first, but it is really different.  Inaccurate data is valid for the possible data domain; it is just the wrong value.  For example, 02/12/13 is a valid date, but it could be inaccurate if the correct date should be 02/21/13.  These types of errors are often the hardest to find, much less fix.

Duplicate data occurs when someone or some process enters the same record or group of records more than once.  Duplicates might occur when a customer hits the Submit button more than once in the process of placing an order, when they hit the backspace key a few times, going back a few pages, and then re-execute the code on those pages, or when users or support staff upload or process the same data file multiple times.  Sometimes duplicate data is easy to spot, especially when all of the fields in two or more records have the same values.  Other times, when only some fields remain the same, identifying duplicate data can become a challenge.
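When all of the fields in the duplicated records match, a grouping query can flush them out.  Here is a minimal sketch against a hypothetical Orders table; the columns listed are assumptions for the illustration:

-- Find orders entered more than once with identical values in every
-- field that should uniquely identify the order.
SELECT CustId, OrderDate, OrderAmt, COUNT(*) AS Copies
  FROM Orders
 GROUP BY CustId, OrderDate, OrderAmt
HAVING COUNT(*) > 1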

These are just some of the types of errors that can creep into your data and subsequently distort your analysis results if you do not clean them out first.  Next time, I will begin exploring the use of Data Quality Services to address these and other problems.

C’ya next time.

In the Name of Safety

I’ve been thinking about the following question lately.  Well, actually, I’ve been thinking about it for several years.  It just doesn’t seem right.  So I’m going to ask you all to think about it and if it bothers you as much as it bothers me, perhaps you might consider writing, emailing, or texting your state congressman.

The question is simply this.  Why do motorcycle riders in the state of Florida not have to wear a helmet?

Stop and think about it a second.  Children have to wear helmets to ride bicycles these days.  Automobile drivers and their front-seat passengers must wear a seat belt or other restraint.  Construction workers have to wear helmets.  So why do motorcycle riders and their cling-on passengers get a pass?  Are they afraid of helmet hair?  Do they think that it is too hot in Florida for a helmet?

The problem as I see it is that the potential for serious head injury in a motorcycle accident is at least as great as in an automobile accident at the same speed.  Ever see what happens to a motorcycle rider in an accident in which the motorcycle flips and the rider goes flying over the handlebars like a circus performer, but with far less grace and far greater risk of injury?

I remember a day back in junior high school when I was taking a short cut home along a hillside dirt path.  I was going a bit too fast, but then most kids think they are indestructible.  Anyway, as I was coming to the end of the path down the other side of the hill, I hit a ditch that cut across the path, and the front wheel of my bike bit into the dirt, stopping the bike, but my momentum carried me over the handlebars and into the gravel stones at the end of the path just where the roadway began.  Fortunately I was not hurt badly.  Just a couple of scrapes on my hands, elbows, shoulder, and knees.  Enough to cause some bleeding, but not life threatening.  Fortunately, I did not hit the ground on my head.  I made it home and suffered through my Mom’s digging out the stones that had tried to bury themselves in my skin.  Actually, that hurt more than the fall did.  But it taught me a lesson that I never had to repeat.

Nearly every day I have to drive on I-4, one of the major roads that cut through Orlando and the only one that makes getting between work and home in a reasonable amount of time possible.  And nearly every day I see at least one motorcycle rider weaving between cars at speeds substantially over the posted limit.  To be fair, it is not all motorcycle riders.  In fact, I have noticed that most Harley riders are fairly safe in their riding.  Furthermore, most of them have helmets, but not all.  No, the real problem seems to be with the smaller, less expensive bikes.  While riders of Suzuki bikes may be the most numerous in this group, it is not limited to Suzuki bikes.  Most of these riders speed along, cutting between cars just to get a little further ahead, with no regard for the danger they place themselves in if the drivers around them don’t see them speeding up from behind to cut around at the last minute.  I’ve also seen them cut between two cars in adjacent lanes or even ride on the shoulder of the road.  Now I don’t know what the road shoulders are like where you live, but here there is often everything from fine sand to gravel to rocks, not to mention the occasional bag of trash, tire fragment from a retread that disintegrated, and even the occasional suitcase, table, water jug, or other item that fell off a truck.

Unfortunately, if these people end up in an accident, they, along with the press, will be the first to blame the automobile driver for not watching out for motorcycle riders.  I believe they often get it backwards.  The motorcycle rider needs to watch how they are driving and whether they are creating the unsafe condition in the first place.  Just the other week, I saw two Suzuki riders pulling wheelies side by side as they raced down I-4 after passing an accident which had probably held them up in the resulting traffic for several minutes.  Neither of the riders had a helmet on, but one did have a helmet strapped to the back of his bike, for all the good it would do there.

Now there is no way to legislate against stupidity.  But why can we not have a law that requires motorcycle riders and their cling-on passengers to wear helmets?  Should insurance companies raise the rates of motorcycle riders who get into accidents without helmets?  Should the insurance company of the automobile driver in an accident with a motorcycle reduce the amount it pays for any injury to the motorcycle rider if they were not wearing a helmet, leaving it up to the motorcycle rider’s insurance to pay the majority of the bill no matter who was at fault?  Should police more aggressively ticket motorcycle riders who behave irresponsibly on the road, pulling wheelies, racing around cars, riding on the shoulder, etc.?  Hardly a night goes by, especially on Friday and Saturday nights, that I do not hear the sound of motorcycles racing along the major road that is only about a block away.  You can hear them rev their engines at the light, then take off down the highway shifting through their gears, perhaps stopping if they hit a light a little further down the road.

I for one don’t want to pay their insurance bills.  Insurance should not be there to protect the irresponsible while the responsible pay for others’ mistakes.  Accidents will still occur even when everyone tries to follow the rules, and thus insurance is needed.  So if we cannot teach people to be responsible, can we at least make it a law that they must wear a helmet?  If safety restraints are so important in cars, why not helmets for motorcycles?

C’ya next time.

Adding Views to a SharePoint Survey

When I am working with SharePoint lists and libraries, one of the common tasks I expect to do often is create alternate views of the items.  By default, SharePoint provides only a single view: All Documents for libraries or All Items for lists.  Sure, some of the special list types like the Task list include a few other predefined views such as My Tasks or Past Due Tasks.  In all of these cases, if one of the predefined views does not provide exactly what I want, I can often create a custom view.  I can even promote one of my custom views to be the default view.  (Note: only one view in a library or list can be the default view.  When I promote another view to become the default, the original default view is demoted to a regular view.  Additionally, I can delete only regular views from a library or list.  Therefore, to delete the default view, I must first promote an existing view or create a new one and make it the default.)

So when I created a survey for use within our organization, I looked at the available built-in views.  There were three: All Responses, Graphic View, and XXXX.  They are fine for most purposes, but I wanted to create a slightly different view.  So I opened the survey list from the All Contents page and clicked the List tab.  Normally I would expect to then click the List Settings option, scroll to the bottom of that page where the current views appear, and click Add a New View.  But when I got to the bottom of the List Settings page, surprise: there was no list of current views and no option to create a new view.

One of the things that many people who have worked with SharePoint know is that even when a particular list or library does not display a specific administration page, such as the page to create a new view, that page can still be opened if you know the URL and how to specify any query string parameters for the page.  One way to do this is to open another list or library that has the page you want and look at the URL.  So I opened a different list and clicked Create View at the very bottom of the List Settings page.  This returned a URL like the following, which I copied into Notepad so I could edit it more easily:

http://www.mySite.com/MyOtherList/_layouts/ViewType.aspx?
List=%7B03A2DC51%2D9B23%2D46C9%2D8524%2DECFB2237E6FD%7D

Notice the current list is identified with a GUID that is unique to my list on a specific site.  I will replace this GUID with the GUID for the survey.  So next I open the survey and select Settings from the options in the heading bar as shown below:

[Screenshot Survey00: selecting Settings from the survey’s heading bar]

Notice that on the Survey Settings page, if I scroll all of the way down to the bottom, I will not see a list of views or an option to create a view.  However, that is OK because the URL in the address bar at the top of the Survey Settings screen displays the information I need, namely the GUID for the survey.

[Screenshot Survey02: the Survey Settings page URL containing the survey’s GUID]

I can now copy just the last portion of that URL, the list GUID, into my previous statement, replacing the original GUID.  I may also need to change the reference to the site and list (survey).  I should then have a new URL, like the one I have been editing in Notepad below, that I can copy and paste into the URL box of my browser.

[Screenshot Survey03: the edited URL assembled in Notepad]
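To illustrate with made-up values, if the survey were named MySurvey on the same site and its GUID were 1A2B3C4D-5E6F-7A8B-9C0D-EF1234567890, the edited URL would look something like:

http://www.mySite.com/MySurvey/_layouts/ViewType.aspx?
List=%7B1A2B3C4D%2D5E6F%2D7A8B%2D9C0D%2DEF1234567890%7D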

The Create View page appears for the survey, and I can either create a new view from one of the view templates displayed at the top of the page or base my custom view on one of the existing views.  I do want to give the view a unique name, and I do not want to make my new view the default view.  Suppose I call my new view MPASurveyView, define all the view parameters, and click OK at the bottom of the Create View page.

Now when I open the survey list and display the default view (which defaults to the All Responses view), I can use the dropdown list to display the possible views for the survey, which now includes my new view definition: MPASurveyView.

[Screenshot Survey05: the view dropdown now listing MPASurveyView]

If I want to change the view used on the home page of the survey, I need to use a different method.  First, I open the survey’s home page and click on the Show all responses link.  Next I select Edit Page from the Site Actions dropdown.

[Screenshot Survey06: Edit Page on the Site Actions dropdown]

The page in Edit mode looks like any other SharePoint page, with a large web part that displays the data.  I identify the web part that displays the responses and scroll to the right if necessary to access the web part’s dropdown menu.

From the dropdown, I select Edit Web Part to display the edit parameters for the web part.  I can either change the view that this link displays or click the Edit the current view option to edit the All Responses view.

[Screenshot Survey08: the web part’s edit options]

After making my changes and clicking OK, SharePoint updates the Show All Responses link to display my updated or new view.

One last note: if you want to be able to download the survey data to Excel, it appears that you can only do this from the default view.  So, if you want to change the data that exports to Excel, you may want to use the latter method above and simply modify the All Responses view.

C’ya next time.

The Trouble with Health Care – Part 2

Last time I talked a little about some alternative treatments beyond just chemo, radiation, or surgery for cancer.  This week, I want to explore some of the trials of getting the drugs we need for my wife’s home care, since insurance forces hospitals to send people home sometimes before they should.  Case in point: they sent my wife home the Thursday before Christmas, but she could not handle the problems she was having and wound up back in the hospital the Thursday after Christmas.

When you do get to see a doctor or get treatment for your illness at a clinic or hospital, most likely they will send you home with one or more prescriptions for additional medicines you will need to take over the next several days at the very least or even longer in some cases.  However, going to your local drug store of choice may not be as easy as it sounds.

First, you have to find a drug store that will accept your insurance.  Some drug stores accept nearly all insurance plans, but most do not.  Where we work, our insurance plan uses something called Express Scripts.  For years we were able to go to our local Walgreen’s drug store.  Then Walgreen’s and our insurance had a disagreement over their contract and Walgreen’s stopped accepting Express Scripts.  Another local drug store picked up Express Scripts and we just transferred our business to them.  All was good until I tried to get the drugs that the hospital doctors prescribed for my wife.

On the Thursday before Christmas, the hospital discharged my wife to go home to recuperate.  They included three prescriptions.  The first was for nausea.  The drug store we went to said they would fill only part of the prescription, not the whole thing.  Instead of getting the 48 pills she was supposed to get, one every 6 hours, which would last 12 days, they decided they would only give us 12 pills at a time, enough for 3 days.  I would have to go back every 3 days to get the prescription refilled.  First, that is inconvenient when I am trying to care for someone at home.  Second, it gives the drug store an opportunity to charge a separate co-pay for each refill rather than a single co-pay for the entire prescription.

The second prescription was a drug for pain that is commonly prescribed.  They did, however, need to enter my driver’s license with the order because it is a prescription drug that is sometimes abused.  When they tried to enter my driver’s license, their computer system rejected it.  In fact, they tried several times on several different cash registers and it was rejected on all of them.  Now I know my driver’s license is valid, but try explaining that to a technician who is more concerned that I may be a drug abuser trying to get a pain medicine to take for fun.

Maybe that is why they did not want to fill the prescription for the third medicine, which was also a pain medicine.  However, this medicine does not come in pill form.  Rather, it is a patch.  (It is sort of like the patches for people who are trying to stop smoking.)  The patch is supposed to be good for 72 hours, providing a slow release of pain medicine.  I suppose people can abuse these patches too by applying multiple patches at one time.  Anyway, they said they did not have any.  So I asked the natural question, ‘Can you check any of your other local stores to see if they have it?’  I was told, ‘No.’

Now, while I have no proof other than the word of a pharmacist I know well, such a reaction is apparently common when a pharmacist does not want to fill a prescription.  The secret is that a pharmacist does not have to fill every prescription submitted to them.  If they have any doubts about the interaction of a set of medicines, they can turn down the prescription.  If they feel someone might be abusing the drug in question, they can turn down the prescription.  In fact, if they do not turn down a prescription for either of these two reasons, they could get into trouble and could lose their license to practice.

Unfortunately, there are rings of drug abusers who get prescription scripts for pain medicine from real or even questionable doctors, especially doctors from out of state or questionable pain clinics, and try to get pharmacies to fill them.  Often one or two will try a series of pharmacies in an area and, if they get the prescription, they report back to their buddies in the ring, and soon the pharmacy is getting dozens of prescriptions for the same pain medicine from the same doctor or clinic.  The real tip-off is when the doctor or clinic is from out of state and each person claims that they are in the area on vacation and ran out of their pain medicine.

If the pharmacy gets into trouble for filling these dubious prescriptions, the pharmacy immediately throws the pharmacist under the bus and the pharmacist loses not just their license, but their job.  So is it any wonder that pharmacists will err on the side of caution?  However, in my case, the script was not from a pain clinic or a doctor from out of state; it was from the local hospital.

Then I heard that Walgreen’s was accepting Express Scripts again, so I went there.  They said they had the drug I needed, but that even though they were now accepting Express Scripts again, they were not accepting Express Scripts from our organization.

Finally, I was able to get the prescription filled, surprisingly enough, at a local food store.

The problem is this: why go after the pharmacist in these cases?  Why not go after the doctor prescribing these pain medicines if the doctor or clinic is not legit?  Could doctors from hospitals, for example, apply a seal to their script (sort of like a notary) to validate it so that the pharmacist knows they can trust it?  Apparently, the fact that the script was on hospital paper was not enough.  With this type of system, people in real pain could get the medicines they need, and people who just want to get high by abusing pain medicines would have a more difficult time getting these drugs.  It would seem like a simple way to validate legitimate scripts and would make our health care system a little more efficient.  Second, this game about who accepts and who does not accept your insurance coverage has to stop.  A caregiver does not have time to run around town looking for a pharmacy that will accept the script while the person they are supposed to be caring for is home alone and in pain.

Well, that’s enough for this time.  C’ya next time.

Filter Any Way You Like

Today I’m going to take a quick look at filtering with the LIKE clause in SQL Server.  Unlike the OFFSET-FETCH clause, which I talked about last week and which is a new addition to SQL Server 2012, the LIKE clause has been available for quite a while.  I use this clause when I need to filter the results of a SELECT statement but either do not know the exact expression value to search for or want to include a subset of similar records based on the pattern of characters in the field value I am filtering on.

One of the most common ways to use LIKE occurs when I know the field value begins with a specific letter or perhaps a set of letters, but I do not know the exact spelling.  In this case, if I try to enter the exact spelling and get it wrong, I would get either the wrong records in my result set or perhaps no records at all. In either case, I may want to filter on just the portion of the field’s value that I know.  For example, if I am looking for a customer whose last name begins with ‘Hel’ but I do not know the rest of the spelling, I could use the following SELECT statement to find all the customers matching that criterion.

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE 'Hel%'

Notice that the WHERE clause in this expression does not use an equal, greater than, or less than sign.  Instead it uses the word LIKE followed by a string.  (Yes, LIKE only works with string data.)  The string itself includes the first three letters that I know the last name begins with, but ends with a ‘%’ sign.  The percent sign tells SQL to match any record where the last name begins with Hel, no matter what characters, if any, follow.

Interestingly, I can also use the percent sign to match any characters at the start of the string, as in the following statement, which looks for all customers that have a last name ending with ‘vich’:

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE '%vich'

Finally, I can use the percent to match any characters before and after a set of known characters as in:

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE '%ker%'

While the percent sign is a great placeholder for an arbitrary number of characters, what if I know the entire string except for a single character?  For example, suppose I don’t know whether the last name I am looking for is ‘Bronson’ or ‘Bronsen’.  I could use a single underscore in place of the character I don’t know, as in the following statement.

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE 'Brons_n'

I can use the underscore multiple times in the matching string.  I might even use an expression with three underscores to look for last names consisting of exactly three characters.

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE '___'

While these wildcard characters work great when I don’t know what characters the field value should have, I can be a little more specific.  For example, with the Bronson vs. Bronsen example, I know the sixth character is either an ‘o’ or an ‘e’.  Rather than allow any character in this position, I can tell SQL to only return values where the sixth character is one of these two characters, as in the following:

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE 'Brons[eo]n'

In this case the square brackets will match any value in which the sixth character of the field is either an ‘e’ or an ‘o’.  In fact, I could include any number of characters within these square brackets.  However, if I have more than 3 or 4 characters that follow sequentially, I may want to use the shortcut of defining a range of characters, as in the following statement, which looks only for names that begin with one of the letters ‘A’ through ‘F’.

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE '[A-F]%'

Notice that this expression tells SQL that the first letter must be one of the letters in the range ‘A’ through ‘F’, but there can be any combination of characters in the rest of the LastName.  So yes, it is possible to combine different wildcards in the same expression.

Sometimes it is easier to say what something is not rather than what it is.  In these cases, you can use the ‘^’ character as the first character inside the square brackets, with either of the square bracket forms, as in the following.

SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE '[^S]%'
SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE '[^A-F]%'

The first SELECT above looks for all customers whose last name does not begin with the letter ‘S’.  In the second case, the statement looks for last names that do not begin with one of the letters in the range ‘A’ through ‘F’.

One last point to keep in mind: using LIKE can affect the performance of your queries even if, as in the above case with Customer, I have an index on the customer’s last name.  If the filter expression begins with a few known characters, SQL Server may still decide to use the index.  However, if the expression begins with either the ‘%’ or the ‘_’ wildcard, the filter has no way to narrow the selection of records using the index.
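To make the difference concrete, compare the two filters below, a sketch that assumes the index on LastName mentioned above.  The first can seek directly into the index; the second typically forces a scan of every LastName value:

-- Leading characters are known, so SQL Server can seek on the index.
SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE 'Hel%'
-- Leading wildcard, so every LastName value must be examined.
SELECT LastName, FirstName, PhoneNumber FROM Customer WHERE LastName LIKE '%vich'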

Hope this helps you like your WHERE clauses more in the future.

C’ya next time.

Another Reason Why the Health Care Industry Needs Reform

As many of you know who have been reading my blog or know me personally, my wife has been fighting cancer for a little over three years now.  Back in December she was admitted to the hospital due to a severe blockage of her colon by a tumor pressing against it from outside the colon but within the abdomen.  She had been getting severe abdominal pains for the last several months, at first rather infrequent and not too severe, but increasing in frequency.  We had been scheduled to see her colon doctor the Monday before Thanksgiving, but had to cancel when her father in Pennsylvania collapsed at a restaurant and was taken to the hospital.  He was actually turning blue at the restaurant and would probably have been declared dead right then if not for the help of a good Samaritan named Jim who, together with a group of friends, ate breakfast there almost every morning.  Jim did not know Sue’s dad, but he jumped up and came over to give him CPR, keeping him alive until the paramedics arrived to take him to the hospital.  I wish there were more people like Jim in this world.  Oh, and by the way, if you saw Jim on the street you would think he was just a construction worker or contractor, but that morning, Jim was an angel.

Sue immediately booked a flight to Pennsylvania to be by her Dad’s side.  The prognosis was not good, and on the Saturday before Thanksgiving she called me and our daughter Natasha to come up because the hospital would try to keep him alive until we got there.  So we both flew up the Sunday before Thanksgiving and arrived in Allentown about 2 in the afternoon.  Although I grew up in Pennsylvania, I had not been back since 1995.  Everything seemed so strange and new, yet with a bit of familiarity.  We got to the hospital a little after 3, and the doctors came by to tell us that his systems were shutting down.  He had a living will which, for those of you who do not know what that is, means he had made the decision not to be kept alive only on machines.  Resuscitation is one thing, but his wife, Sue’s Mom, lived her last 6 months in a hospital with machines keeping her body alive.  That was 24 years ago.  He did not want that for himself and had signed a legal document to that effect.  While they did use machines to keep his body alive until we all got there, it was clear to the doctors that all of his internal systems were shutting down.

At 4:30 they turned off the machines.  His breathing and pulse, weak on their own, started to fail immediately.  Every update on the monitors showed his blood pressure dropping and his breaths coming further apart.  Within a half hour, he passed relatively peacefully.

As sad as the loss of a close relative is, we were grateful for the time he spent with us.  Natasha had always loved her grandfather and had some very nice words to say for his eulogy.  Even though it was Thanksgiving and you might think that we did not have a lot to be thankful for, we were thankful that he did not suffer in the end.  We were thankful that he lived as long as he did to be part of our lives and Natasha’s life.  We are thankful that he was not driving when he collapsed which could have caused an accident injuring someone else.  We are thankful for people like Jim who are ready to help others when they need help, not just with words, but with actions.

Sue managed to get around for the first several days without any real problems as we made funeral preparations and began cleaning out the house as much as we could.  But by the day of the funeral, we could tell she was not herself because she barely ate anything whenever we stopped for a meal.  By the time we flew back home that Saturday, she asked for assistance at the Charlotte airport to get from one gate to the next as we had to change planes on our trip home to Orlando.  Once back in Orlando, we tried to set up her appointment with her colon doctor as quickly as possible, but because we had missed the previous one, they could not squeeze her in until December 7.  During those two weeks, she ate less and less and even stopped drinking water because of the pain in her abdomen.  We actually lost almost three weeks from her original appointment until the colon doctor could see her.  When she did see her, the doctor immediately signed orders to admit Sue to the hospital.

Sue is still with us, although she is still in the hospital.  She is not able to keep any food or drink down.  They are trying a last-ditch shot at chemo to see if they can shrink the tumor that is blocking her colon, since they say it is not operable.  We know there are other compatible co-treatments that may help.  One of them is something called hyperthermia.  The localized version of hyperthermia uses an MRI-like electromagnetic field to heat cancer cells to 107 degrees F, causing the cells to produce excess lactic acid and effectively killing them.  This treatment has been pioneered by Dr. Robert Gorter and is commonly used in Germany, but few places in the United States have begun using it yet.  We did find that Cancer Centers of America do offer this treatment, but our insurance does not allow us to get treatment outside of our state (even if that treatment might save the patient).

Other versions of hyperthermia include regional and full-body.  The full-body version heats the body using infrared radiation to about 101-102 degrees F to induce a mild fever.  This fever activates the immune system, doubling its effectiveness.  It is the immune system that identifies foreign bodies like bacteria and viruses in our bodies to keep us healthy.  It also normally keeps the cancer cells that all of our bodies produce each and every day from growing out of control.  The theory is that by kick-starting the immune system, the treatment might help to fight the existing tumors.

Other treatments include the use of dendritic cells, intravenous vitamins, minerals, antioxidants, and glandular extracts.

But the problem is that while Anderson is using some of these treatments, such as hyperthermia, in their Houston location (again, out of state for us), they do not offer it locally. Why?  At this point, we would be willing to sign a legal document stating that we would not sue Anderson if the treatment fails, as long as they would be willing to try it.  But even more to the point, together with chemo, hyperthermia treatment poses less risk to the patient than the chemo itself.  It is not a replacement for chemo (at least not at this time), but is used in conjunction with it.  At most, hyperthermia is mildly uncomfortable and could result in a few hours of flu-like symptoms, but no real toxicity.  We are at a point where not doing anything is essentially a guaranteed death sentence, especially because of the blocked colon.  Over the last several weeks, as we have explored the possibility of different options, each time we have found something that may give us hope, our hopes have been dashed against the rocks.

 

Using OFFSET-FETCH

Most of the time when I retrieve data from a database table, I create a SELECT statement that retrieves a defined set of columns for all the rows that match the WHERE clause.  But what if I only want to retrieve a subset of those records because I am filling a table on a web page?  Perhaps I could use a dynamic WHERE clause that changed the filter to eliminate the records I had already seen.  But this method is less effective if I jump forward and backward through the data a lot, or if I want to jump multiple pages forward or backward.

Fortunately, there is an easier way using the OFFSET-FETCH clauses to solve this problem.  The following statement retrieves 50 sales records at a time beginning at the 1001st record.

SELECT SalesId, SalesDate, CustId, SalesAmt, SalesTax
  FROM Sales
 ORDER BY SalesId
 OFFSET 1000 ROWS FETCH NEXT 50 ROWS ONLY

To retrieve the next 50 rows after the above statement, the last line merely needs to change to:

OFFSET 1050 ROWS FETCH NEXT 50 ROWS ONLY

As you probably already guessed, the OFFSET clause tells SQL how many records from the beginning of the result set returned by the SELECT statement to skip.  Similarly, the FETCH clause tells SQL how many records to retrieve once it starts retrieving records.  The key to making this statement work is the ORDER BY clause.  Without the ability to retrieve all of the records in a very specific, non-changing order, OFFSET-FETCH might retrieve records in one set that were already retrieved as part of another set.

What’s the big deal, you ask?  A SELECT statement always returns the records in the same order, right?  Actually, that is not true.  For example, suppose I used a statement like the following that sorts the result set on the Customer Id rather than the Sales Id field:

SELECT SalesId, SalesDate, CustId, SalesAmt, SalesTax
  FROM Sales
 ORDER BY CustId

If I ran this statement multiple times, I would always get back the same set of records, but because there can be more than one sales record per customer (repeat customers are always appreciated), the order of those records could be different each time I ran the statement or if different people ran the statement.

When the result set of a SELECT statement does not return all of the records in a defined order, the statement is said to be non-deterministic.  This does not mean that the statement or the result set is invalid.  In fact, for many purposes, I do not really care what order the resulting records are returned in as long as I get all of the proper records that I am asking for.  However, a non-deterministic result set cannot be used with OFFSET-FETCH.

To create a deterministic result set useful for OFFSET-FETCH, you need to define an ORDER BY clause that returns all of the records in a specific, reproducible order.  That is why I ordered the original result set by SalesId, which is the primary key of the Sales table.  While you do not necessarily need to use the table’s primary key, you must order by one or more fields that uniquely define each of the records in the result set to make it a deterministic set.
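For example, to page through the sales sorted by customer, I can keep CustId as the primary sort and simply add the unique SalesId as a tie-breaker, which makes the full ordering reproducible:

SELECT SalesId, SalesDate, CustId, SalesAmt, SalesTax
  FROM Sales
 ORDER BY CustId, SalesId  -- SalesId breaks ties between a customer's sales
 OFFSET 1000 ROWS FETCH NEXT 50 ROWS ONLY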

Before ending this week’s topic, how would you create a stored procedure that returns any page of records from a table of any arbitrary size?  The following stored procedure uses two parameters: @pagenum, which defines which page of records to return from the table, and @pagesize, which defines the number of records in a page.

IF EXISTS (
  SELECT *
    FROM INFORMATION_SCHEMA.ROUTINES
   WHERE SPECIFIC_NAME = N'GetPageOfRecs'
)
DROP PROCEDURE GetPageOfRecs
GO

CREATE PROCEDURE GetPageOfRecs
    @pagenum int = 1,
    @pagesize int = 20
AS
SELECT SalesId, SalesDate, CustId, SalesAmt, SalesTax
  FROM Sales
 ORDER BY SalesId
 OFFSET ((@pagenum-1)*@pagesize) ROWS FETCH NEXT @pagesize ROWS ONLY;

To use this stored procedure to return the fourteenth page of 25 records, use the statement:

EXEC GetPageOfRecs  14, 25

Well, that is it for this week.

C’ya next time.

Sharing Knowledge in the New Year

Many of you may work with co-workers who guard the information they have like they are guarding the gold in Fort Knox.  Perhaps they believe that if only they have specific information, they will become invaluable to the organization and receive immunity to layoffs.  They may even intellectually understand the value of better collaboration, but the emotions of uncertainty in most jobs today can overpower what a person knows should be the best way for them to act.

Maybe one of your co-workers is more comfortable working on their own rather than as part of a team.  Before blaming them, you may want to consider that at one time they were on a team in which other team members took advantage of them, letting them do all of the work while expecting the credit to be shared equally by all.  When management condones this type of behavior, or is simply ignorant that it is occurring, better performing team members soon start to look for other ways to individually shine on tasks belonging only to them.

At the same time, management needs to become more aware of situations when all members of the team do not appear to participate.  If management waits too long to act, the outstanding performers eventually realize that these other team members are consciously or unconsciously taking advantage of the work they are doing and they begin to withhold information.  In a successful team, all team members must pull together.  You cannot have a few members of the team pulling the sleigh while the rest sit on the sleigh and go along for the ride.  (I just had to have a seasonal analogy.)

Does this mean that everyone has to be part of every single task?  I fundamentally disagree with the notion that everyone in a team or department must know all aspects of everyone else’s job.  A carpenter does not pretend to be an electrician, and an electrician does not attempt to do the work of a plumber.  Sure, they may know a little about these other jobs, but they are not experts.  So why does management assume that all IT staff are interchangeable?  There are many specialties in IT, just like in construction.  A good manager encourages his/her team members to diversify their skills so they augment each other, not duplicate each other.  No one person can be an expert in all areas of IT.  However, everyone should be expected to be an expert in at least one area.  Project teams should then be structured to make the best use of the specialties of each staff member.  In such an environment, people will not feel like they have to protect the expertise they have developed, often on their own time, from others who are too lazy or unprofessional to develop an area of expertise of their own.  In groups of true professionals with different areas of expertise, collaboration and sharing of information make the entire group greater than the sum of its individuals.

Another concern is the manager who does not share information with his/her direct reports.  Sure there may be some information that should be retained at specific levels within the organization, but if a manager never shares information with their direct reports, they appear aloof and out-of-touch with their staffs.  Furthermore, when workers only find out what is going on from their fellow co-workers who work for managers who are more open with sharing information, a lack of trust begins to develop.  The team members begin to doubt that their manager considers them part of the larger team.  At best, this could cause the team to become guarded in everything they say and do rather than freely share information.  However, it could also encourage better performers to begin looking for other positions in the hope that they can work in a more ‘open’ environment.

Since it is New Year’s Day and a time for New Year’s resolutions, perhaps one resolution you might consider is how to communicate more openly with your co-workers.  If you have direct reports, do you share information with them on a regular basis, perhaps at weekly team meetings?  How about the co-workers at your level?  Do you regularly communicate with them about what you and your team are doing?  Could you develop some synergy between groups by sharing information rather than letting everyone work within their own silos?

And while you are evaluating your communication skills, think about how well you communicate at home with family, neighbors, and other people.  I won’t even ask how many of you know your neighbors on a first-name basis, where they work, and what their interests are.  Perhaps you could improve your communication by changing your perspective to see how others see you.

Change is hard.  Improving collaboration in a time of cut-backs and downsizing can be even harder as the natural tendency of most workers is to guard information they have even tighter than before.  Maybe it is time for management to show that they recognize and appreciate collaboration by rewarding those employees who share their knowledge and help mentor others rather than trying to be ‘fair’ by rewarding all employees equally which is anything but fair.

Well, New Year’s Day is a time for dreaming and wishing, isn’t it?

C’ya later.