Build vs. Buy

IT departments are always facing the decision whether to build or buy a new application.  Sometimes the answer is easy.  For example, you would never build your own word processor or spreadsheet program.  Similarly, you would probably never buy an off-the-shelf application to control the rockets on your space shuttle (What? You don't have one yet?).  Between these two extremes, however, lies a lot of grey area that has created great debates around the coffee machine of many companies, pitting the business managers who think that buying software is cheaper against the IT managers who believe that building software provides more feature flexibility and better support.  But who is right?

First, I cannot absolutely tell you what to do in your current situation.  If I could predict the future, I would have bought a couple of winning lottery tickets years ago. However, perhaps we can look at some of the pros and cons of both sides of the argument.

Buying software can be cheaper, but only to the extent that the application is a commodity.  That is why people buy their word processing and spreadsheet applications.  There is no way to justify the salary of a programmer and the time it would take to develop a reliable word processor or spreadsheet program.  On the other hand, companies like Microsoft that sell word processors and spreadsheets can afford to sell these products at such a low price.  How?  Because they expect to sell millions of copies and can spread the development costs over all those copies.  These programs are also commodities in that everyone uses them in pretty much the same way, so there is no need to customize the software for different industries, much less for individual clients.  What customization 99% of users need is already built into the product through settings and features.

Buying software is also faster because in most cases the software already exists and all you have to do is pay for it and install it.  These days, you don’t even have to go to a store.  You can just download it from the Internet.  If your need is immediate, you might not be able to wait for someone to design the application, program it, test it, and provide a production version for you to use, so buying a product is the answer. 

On the other hand, the feature set for purchased software is usually fixed.  You cannot go to the vendor in most cases and ask for a custom version of the software just for your organization.  Imagine going to Microsoft to ask for a custom version of Microsoft Project to handle project planning the way your company wants to.  Of course, the more popular the software is, the more features it generally has and you can often customize the application to work well enough to satisfy your needs.  For less popular software, your choices are more limited.  If you want to use the software, you need to adapt your processes to work with the software rather than to adapt the software to work with your processes.

Then there is the support issue.  When you buy software, you may or may not get support.  Sometimes you can buy support contracts that allow you a fixed number of support calls per year.  Sometimes that support consists of a help desk in some foreign country where, quite frankly, you have trouble communicating with the representatives (Like people from the deep south or New England.  What were you thinking?).   Finally, some products don't provide any real support at all.  And if you need a major change to the software to fit your business processes, you are usually out of luck, or the vendor may be willing to create a custom version of the application for you, but at a cost of time and materials.  Even if you do get support, and even if they are willing to modify the application for you, it may be on their time schedule and not yours.  If your production line is down while you wait for bug fixes or program changes, the cost of lost production and subsequent sales can climb very high.

Another problem with buying software or even with hiring a consultant to write your software is the question of whether they will be around when you need them.  Many private consultants (not all) may just be doing consulting until they get a full time gig.  Then if you did not negotiate to get the source code and system documentation, you are left holding a quickly aging and brain-dead piece of software with no idea what makes it work.  Even if you do get the source code, can your programming staff, if you have one, get into the head of the original developer to figure out what he or she was thinking?

This brings up a side issue.  Even if you have a programming team in-house, if you buy most of your software or hire consultants to write it, your in-house team will quickly lose its edge in terms of developing new applications and even in maintaining existing ones.  Programming is very much a use-it-or-lose-it skill.  Also, your best programmers will not hang around if you give all of the cool development projects to outside consultants and leave the maintenance to your internal staff.  No one wants to be the foster parent of orphaned applications.

On the other hand, when you develop an application in-house, you can control the features that the application has.  You foster the development of in-house skills that will assist you in supporting changes to the application and fixing the inevitable bugs.  Most importantly, you can get exactly what you need and want in the application.   If the application fails or if you need major changes, you own the development team that wrote the application and can control their priorities to get the changes done as quickly as possible.  And since your team wrote the application, they will feel more ownership in getting it fixed and working perfectly.

These few points in favor of in-house development may not sound impressive against the advantages of purchased software if you are a business manager, at least not until you find yourself facing a very specific need or a need that changes frequently over time.  Sure, you might think you can save money buying a product that meets most of your needs.  However, the cost of customizing that software can quickly surpass whatever you saved by not building the application from scratch.  Why?  I've seen cases where the purchased software is written in obsolete languages or in tools that are several versions out of date, and no one on your staff has a clue how to make changes to it.  The only solution then is to totally rewrite the software.  You have already spent time and money on the version you bought, and now you need to make a second investment in redoing what you could have done in the first place.

I don't think there is an absolute way to calculate which path is better when you need a new application.  The decision is often as political as it is technical.  A lot may also depend on how much development you need over time.  If your needs are high, I would recommend building a good agile development team.  If your needs are rare, developing a relationship with a good local consulting group (not an individual) that has been around for several years and that you can hire as needed may be a good choice.  But I do not think you will be successful if you relegate your in-house development team to a maintenance role, fixing software written by outsiders who took the money and ran.

See you next time.

Making Your Own Excel Functions

Over the last four weeks, you saw how to create your own macros in Excel, how to store them, and how to use them.  However, sometimes what you need isn't really a macro.  By that I mean that you cannot simply move the cursor around and use the built-in Excel functions to complete the task you want done.  An example of that uses the list of files shown below, in which you want to extract just the filename from column A into column B, removing all the folder (path) references.

As you can see in this list, the number of folders and the length of the path varies.  We cannot simply look for the text after the first, second, or third backslash to get the filename.  Nor is there a way to ask Excel to return all of the text after the last backslash in the path, even though that is exactly what we want.  Excel has the FIND() function, which returns the first location of a character in a string.  Because the FIND() function lets you specify the position at which to start searching, I suppose there would be a way to iterate through the file path to find the last backslash and then assume that all text after that backslash defines the filename.  For example, suppose you had the path:

D:\Operations\IT\WebServices\BC\ServerConsolidation.doc

The result of FIND("\",A2,1) would be 3 because the third character in the path is a backslash.  Now you could take the result of the first FIND(), increment the value by 1, and use it as the starting point for the next search, as in FIND("\",A2,4).  This time you get a value of 14, which is the position of the next backslash.  You would continue to repeat the FIND() function, incrementing the previous result by 1, until FIND() returns a #VALUE! error because no more backslashes remain.
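To make the idea concrete, each pass can be written as a nested formula in which the inner FIND() result (plus 1) becomes the starting position of the outer one.  For the sample path above, the following returns 14, the position of the second backslash:

=FIND("\",A2,FIND("\",A2,1)+1)

Each additional level of nesting moves the search one backslash further along the path.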

While this technique would work and you would eventually find the last backslash, after which you could use the MID() function to retrieve just the filename, the process is a bit clumsy.  Wouldn't it be better if you could search the string from the right?  Unfortunately, Excel does not provide such a function.  However, you can write that function yourself by creating a UDF (User Defined Function).  Here's how.

First, open the Developer ribbon.  If you do not see the Developer tab, refer back to my first macro post to learn how.

Next, click the Visual Basic icon in the Code group of the ribbon.  This opens the Visual Basic for Applications development environment.  The panel on the left shows the object model for the current spreadsheet.

Right-click on Microsoft Excel Objects and select Insert –> Module as shown in the following figure.  This adds a Modules folder and a default code module named Module1.  (While you can rename the code module, there is no need to do that for what we are going to do here.)

Next begin defining a function by entering the word: Function followed by the name of the function you want to create.  In this case, the first function I will show you how to create is one that reverses the order of the characters in the string so that the first shall be last and the last shall be first.

(This post is not meant to teach you programming, so I will assume that you know some Visual Basic from here on out.)

I am going to pass a single parameter into this function, received by a variable scoped to the function and simply named text, which holds the original string; the function itself will return a string with the characters in reverse order.  Therefore, I will enter the line:

Function RevStr(text As String) As String

Next I will create a simple For-Next loop to build the new string beginning with the characters at the end of the original string appending one character at a time to a new string using the following code:

For i = Len(text) To 1 Step -1
    RevStr = RevStr & Mid(text, i, 1)
Next i

That is all you need for this function.  You can actually test it by returning to the Excel spreadsheet (click the small green Excel icon beneath the word File in the menu) and entering the formula =RevStr(A1) in cell B1.  But wait, we are not done yet.

Now that we can reverse the string, we need to find the first backslash and then retrieve only the characters up to, but not including, that first backslash.

To generalize this function a little, I will pass two parameters into it.  The first parameter, text, is the original string again, and the second parameter, searchStr, is the character I'm looking for, a backslash in this case.  Again, this function returns a string, which should be only the filename.  The code for this function appears in the next figure along with the reverse string function.

Note that after calling the RevStr function, I can use the InStr function to find the first occurrence of the backslash character.  (Ok, I know I could have used the InStrRev function and skipped creating the RevStr function entirely, but you never know when you might need a reverse string function.)

The InStr function returns either the character position where it finds the search character or a zero if it does not find the character.  Therefore, I use a little IF-THEN-ELSE block to either return the filename that follows the last backslash or return the entire original string.
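A minimal sketch of that function, assembled from the pieces just described (the exact code in the figure may differ slightly), might look like this:

Function GetFileName(text As String, searchStr As String) As String
    Dim pos As Integer
    ' Find the first occurrence of the search character in the reversed string,
    ' which corresponds to the last occurrence in the original string
    pos = InStr(RevStr(text), searchStr)
    If pos > 0 Then
        ' Keep only the characters that follow the last search character
        GetFileName = Right(text, pos - 1)
    Else
        ' No search character found, so return the original string unchanged
        GetFileName = text
    End If
End Function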

Apply this function through the rest of column B using an expression such as the following beginning in cell B2:

=GetFileName(A2,"\")

After adding this function to all the cells from B2 down, you should see the results of your user-defined function displaying only the filename for each path in column A, as shown in the following results:

You might ask whether we could get rid of the drive reference in the last row of the previous figure.  Of course you can.  All it takes is a bit more code to check for a colon when there is no backslash.  But I've left that enhancement up to you to try.

So congratulations.  You’ve just created your first UDF (User Defined Function) in Excel that you can now use like any other Excel function.  You might note as you begin using your new function that it even appears in the Insert Function dialog and when entering the function, you get the same tooltip style help to fill in the parameters that you get with every other built-in Excel function.

Now that you can create macros and functions within Excel, you can tell your friends that you are an Excel programmer. I’ll come back later to talk about creating and using forms in Excel to collect data.  First however, we will return to DAX to look at some of the things it can help us with when working in PowerPivot.

See ya.

What Does It Take?

Looking for a new job can be stressful.  But it can be just as stressful on the people performing the job interviews.  What questions should you ask?  What questions will your employee relations department allow you to ask?  What characteristics make a good hiring choice?  These are all good questions.  A lot of people are willing to offer you advice when you find yourself responsible for hiring additional staff.  For example, in an article last April, Forbes suggested that there are only three true job questions:

1) Can you do the job?

2) Will you do the job?

3) Can we tolerate working with you?

How do you determine whether you can tolerate working with someone during a short half-hour or hour-long interview?  Well, sometimes the candidate helps you out.  This is a true story about a candidate from a few years ago.  About midway through the interview, we asked the candidate what they would do if they found themselves blocked from doing something by their supervisor.  The candidate, without blinking an eye, said, "Well it's like when you are trying to take a hill during a battle and your lieutenant is blocking your progress because they are going too slow or they are just scared.  I'd shoot them to get them out of my way and take that hill."

We just sat there stunned for a few moments not knowing what to say.  The candidate sensing our uneasiness corrected his statement.  “Oh, I would not shoot to kill. I’d just shoot him in the leg just to get around him.  I certainly wouldn’t shoot him to kill him.”  Oh that makes it soooooo much better.

That candidate did not get the job.

I'm not going to suggest that these questions from Forbes are not good questions, because in fact they are, but I maintain that they are a bit short-sighted.  Of course you want someone who can do the current job and will do the current job, and no one wants to work with someone they cannot tolerate (or someone you are afraid might shoot you in the leg).  However, what happens when the current job finishes, changes, or morphs into something else?  Will the person be adaptable?  Are they willing to learn new things, new applications, new ways of doing things?  Unless you are hiring for a single, short-term job, these questions are just as important as whether the applicant can and will do the current job.

So how do you know if an applicant can not only adapt to change, but can grab new opportunities, master them and propel your company to new heights?  This is when you need to look at the candidate’s job history and what types of things they may have done before.  Then you might ask a very generic question such as, “What was the biggest job challenge in your career where you have had to learn something new and how did you manage to adapt to that change?”  It’s a very open question and allows the candidate to answer with as much detail as they choose.

An answer like the following could be a big warning sign, “I’ve always worked in pretty much the same area so I’ve always been able to make the best use of my past skills.” Another warning sign is negative words or phrases such as, “I really had no choice but to learn the new foobar language and I’ve always hated it.  That is why I’m looking for a new position.”

On the other hand, hiring someone with no obvious passion for what they have done in the past could mean that they will have no passion to do a good job for you.  For example, “I’ll do whatever you want me to do.  It really doesn’t matter to me.  Just tell me what to do.”  People like this can be led step by step through simple processes, but if they have to apply critical thinking to solve a problem, they are totally lost. 

Equally problematic is a person who appears to jump from one technology to another, never staying in any one long enough to develop any real expertise.  This might indicate a personality that is actually afraid to become an expert in a technology because they might then have other co-workers depending on them for complex tasks.  They figure that rather than hang around long enough to become the go-to person for tough problems, it is easier to jump to a different organization, a different technology so they can start all over as a newbie and not have others depend on them.

People who have worked in specific and relatively narrow technology areas for five or more years and have made two or three switches in their career to new technology, new languages, new ways of working are in my mind the gems regardless of their age.  In the computer industry, if there is any one constant, that constant is change. 

When I taught classes at the local community college, I would tell students that if they think for even a minute that what they learn in school now will last them their entire life, they are kidding themselves.  Changes in programming languages, even seemingly simple ones such as going from Visual Basic 6.0 to Visual Basic .NET, FoxPro 2.6 to Visual FoxPro, or SQL Server 7.0 to SQL Server 2000 or 2005, have seen the attrition of many otherwise good people.

So ultimately, I submit to you that you need to determine whether the candidate has shown a willingness to adapt to inevitable change or whether they cling to dying technologies to avoid having to learn anything new.  Isn't that just as important to you and your organization as whether the person can and will do a specific current job and whether you feel they will fit into your corporate culture?  While I think so, you need to decide that for yourself before you begin your next hiring cycle, which could be right around the corner as the economy improves.

Programming in Microsoft Excel – Macros – Part 4

Back in the first part of this series on programming in Microsoft Excel, I took an inventory list consisting of a repeating set of three rows that held the product ID, the product name, and the quantity of the product on hand.  I showed how easily you could write a simple macro to rearrange this data into rows with each row representing a single product and each of three columns representing one of the three product attributes.  This macro worked great except for one little problem.  You had to press the shortcut key to execute the macro one time for each product as you worked your way down the list.  If your product list only contained a couple of dozen items, you might not care too much.  However, if you work for a large organization with hundreds if not thousands of items in the product list, continuing to press the shortcut key combination for each item not only takes time, but such repeated action would probably result in a sore wrist at best.

So what can we do to make this task easier?  The answer to anyone who has done any programming at all is: Create a loop.

The obvious problem is how to determine the number of rows to use.  Let's assume that you do not know how many products are in the list.  You could begin by using the Worksheet.Cells property, which returns a reference to a cell or range of cells.  For example, Cells(5,1) refers to the cell in row 5 of column 1 (or A), that is, cell A5 on the current worksheet.  (If you do not qualify the expression with a specific worksheet object, Excel assumes you mean the active worksheet.)  However, we do not want to hardcode the row number.  Another property of the worksheet is Rows, which represents all of the rows in the worksheet, and a property of Rows is Count.  Therefore, Rows.Count returns the number of rows in the worksheet.  In Excel 2010, a worksheet has 1,048,576 rows, so Rows.Count returns this number.  Combined with Cells, as in Cells(Rows.Count,1), we now have a reference to the very last cell in column 1.

Ok, still not quite what you want yet.  You need to understand that this reference now points at row 1,048,576.  How do we get back up to the last row that contains data?  Using the keyboard, you could simply press CTRL+Up Arrow.  In Excel VBA, you can accomplish the same thing by using End(xlUp).  When appended to the Cells reference from the previous paragraph, you get Cells(Rows.Count,1).End(xlUp).  This effectively brings the reference back up to the last cell in column 1 that contains data, the last row of our product inventory list.

Finally, if we tack on the Row property, we get the row number of that cell, which should be the last row in the range of inventory data.  Therefore, our entire expression to find the last row of data in our product list is: Cells(Rows.Count,1).End(xlUp).Row.
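If you want to convince yourself that this expression behaves as expected, one quick sanity check (assuming your inventory data sits in column A of the active worksheet) is to print it from the Immediate window of the VBA editor (press Ctrl+G to open it):

? Cells(Rows.Count, 1).End(xlUp).Row

The number that comes back should match the last populated row in column A.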

Ok, that was the hard part.  Honestly.  The rest is easy.  So we can generate a loop like the following:

LastRow = Cells(Rows.Count, 1).End(xlUp).Row
For iCnt = 2 To LastRow
    ' <Insert macro created in part 1 here to restructure the data>
Next iCnt

While this would work, you would process a lot of blank rows.  Why?  Remember that after each product is restructured to a single line, you finish the macro by deleting the two rows that previously held the product name and quantity, which shortens the table by 2.  An easy way to account for this is to add the following line just before the Next iCnt statement to manually increment our counter variable.  (Ok, it sounds a little backwards because you might think, why not just decrease the variable LastRow by those two rows?  Alas, that will not work, so our only real option is to increment the counter by 2.)

iCnt = iCnt + 2

(Another option that would work is to immediately divide LastRow by 3 in the program line that calculates it and before we use it to define our loop.)

Therefore, our final macro code would look like the following after our edits.
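As a rough sketch only (the Sub name here is just a placeholder, and the steps recorded in part 1 are elided), the assembled macro might look something like this:

Sub RestructureInventory()
    Dim LastRow As Long, iCnt As Long
    ' Find the last row in column A that contains data
    LastRow = Cells(Rows.Count, 1).End(xlUp).Row
    For iCnt = 2 To LastRow
        ' <recorded steps from part 1 that restructure one product go here>
        iCnt = iCnt + 2   ' account for the two rows deleted by the recorded steps
    Next iCnt
End Sub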

Now if we save this code change to our macro, we can go back to an unmodified version of our product list, position our cursor in the first row of data, and execute the shortcut to run our macro one time.  The macro now runs repeatedly until it reaches the end of the data, restructuring all of the rows.

You now have seen that you can modify a previously recorded macro, save the changes, and execute the new macro with the new functionality.  Could you have written the entire macro directly in the editor?  I suppose so, but here is my take on building macros.  Use the macro recorder to your advantage to capture most of the functionality that you want from the macro.  Then go back into the macro to customize the code to meet your needs.  I’ve always been a proponent of using wizards to get the basic functionality of whatever it is you are building such as forms, reports, and even macros like this.   But don’t depend on the wizard to do everything.  Keep your options open by manually customizing the code the wizard generates.

Next time we will look at creating user defined functions (UDFs) that can perform actions not available from the existing built-in Excel functions.

See you next time.

True Learning

This week I'm returning to one of the themes from the FETC conference I attended back in January.  (Don't remember what FETC stands for?  Look at last Tuesday's blog.)  That theme centered on the recognition that traditional face-to-face training does not work anymore.  Several speakers addressed this issue in different ways, but it all comes down to the fact that people only remember a small percentage of the facts that they hear during an oral presentation or training session.  Sorry, I don't remember what that percentage was. :)

Ok, all kidding aside, we have been in the digital age for at least two decades now, longer for some people, but we still rely on printed books and lectures to transfer most knowledge.  That statement remains true whether we are talking about our schools or training at your job.  If you look in on any classroom, you will see some people working on something on their laptops (or more recently their tablets). Some might be texting, perhaps about the game last night or where to go for lunch.  A few might be staring out the window or looking around the room, obviously uninterested in the topic being discussed by the presenter.  And if the presenter turns down the lights in the room to begin a PowerPoint presentation, you probably won't need to wait long for the sound of snoring from that person in the back corner.  The sad fact is that this is true whether you observe a school classroom, a college lecture hall, or a company training room.

Then when it comes time to apply the lessons learned in these classes, many of the people who have supposedly been trained fail.  Who is blamed?  The instructor, of course.  They must not be effective.  Hardly ever do people think about the effectiveness of the delivery method.  In fact, in training centers and classrooms around the country, blackboards have been replaced by whiteboards, which have been replaced with smart boards.  With all of this money spent on technology, has education really improved?  The problem is, as Heidi Hayes Jacobs, Executive Director of the Curriculum Mapping Institute and President of Curriculum Designers, Inc., says, "People do dumb things with smart boards."  The point she was trying to make is that merely adding technology to the training process does not guarantee success.  Rather, it is more important to consider how we use technology as part of the training process.

Think about the last several times you attended a training class.  Compare how much you got out of a class where you merely sat and listened to someone talk with a class that used individual or group participation in activities applying some of the principles being taught.  I'm sure you all agree that computer training classes that involve hands-on labs greatly aid your understanding of the material.  However, I ask you to think about the 'feeling' of a class that merely asks you to follow a lab step by step, with every action you need to perform listed in a nice numbered list.  Compare that to a class in which you, individually or within a group, solve a series of 'problems' using some of the material covered and develop your own solutions to those problems.

One of the presenters suggested that one reason that classes are not more interactive is that even the presenter has limited knowledge of the tool or technology being taught.  They may know how to follow the steps required for the exercises, but little else.  When questions arise outside of their knowledge area, they either fake an answer or try to sidestep the question.  How much more effective would the training be if the instructor simply said, 'I don't know, but let's find out.'  Then, together with the group, they might try different solutions, research the question on the Internet, or perhaps even assign the research to the person asking the question to report back to the group the next day (or meeting period).  Heidi asks, 'Who owns the learning?'  It should be the students in the class, not the instructor.  Everyone can take ownership of their own learning by researching the answer to any topic by going to the Internet to 'Google it' or 'Bing it'.  This leads to a level of independence and control over your own learning that never existed in prior generations.  Furthermore, technology makes it easy to collaborate with others around the world who may be trying to learn the same thing you are.  They may have insights that you never thought of.  You may even get a chance to talk to experts in your field of study.  But most importantly, the knowledge you can discover will be current, not stale like knowledge recorded in paper books or in binders.  When it comes to books, think of e-textbooks as having the ability to be constantly updated rather than limited to what was known on a topic 5, 10, or even 20 years ago.

How we use technology is more important than the specific technology we use.  Do you really know how to use all of the features of your current technology?  How many people don't know all of the features of even their cell phone, much less their home or work computer or the applications they have?  The presenters challenged the attendees to learn at least one new thing every day.  There are lots of resources on the Internet to do that.  So why not start today?

One of the problems with all of this is: Change.  People generally resist change, a topic I covered back on November 2, 2011.  However, I submit to you that change is only scary when it is forced upon you.  If you take control over your ability to learn new things, you can control change and vanquish that scary monster that is keeping you from being all that you can be.  If you teach classes, think of how you can use technology to give the students a greater opportunity to take control over their own learning through labs, group work, research projects, etc.  Pose questions and ask individuals or groups to research the answer and report back to the group either verbally or with a demonstration, video, or an interactive lab exercise.  If you are providing training at a company, challenge them to find ways to apply what they learned back at their job.  Give them time to ‘brag’ about their successes to their peers.  That can spur others on to their own success as well.

In all of this, keep in mind that you do not own the learning for someone else.  Only they can own their learning.  Your goal is simply to engage them to learn more on their own than you have time to cover for them in your class.  That is when true learning occurs.

Programming in Microsoft Excel – Macros – Part 3

In the previous examples in this series, you created macros that Excel stored directly in the workbook itself.  In fact, that is why you have to save your spreadsheet not as a standard Excel spreadsheet with the .xlsx extension, but with the extension that indicates the workbook has macros (.xlsm).  To do this, you must use the Save as Type dropdown in the Save or Save As dialog.  From the list of possible options (bet you did not know that spreadsheets could be saved so many different ways), select Excel Macro-Enabled Workbook.

Once saved, you can copy the workbook to another location.  You can even give a copy of the workbook to another user who can then install it on their machine.  When they copy the workbook and open it, they will see a yellow banner across the top of the spreadsheet warning them that macros have been disabled.  This is a default security setting in Excel to protect you from malicious macros embedded in other people's workbooks that you subsequently open on your computer.  While it is nice that Microsoft has our back like this, it can become annoying to get this prompt every time we open the workbook, forcing us to click the Enable Content button at the end of the message if we want to use the embedded macros.  Furthermore, Excel does not care whether someone else created the macro-containing workbook or you did.  The security warning appears in all cases.

A way around this is to create a ‘safe’ folder on your machine where you can store not only your own spreadsheets that use macros, but also the workbooks of friends and colleagues whom you trust.  Once placed in this safe folder, Excel can open the workbook and immediately enable macros without the security warning and prompt.  To create this safe folder, follow these steps:

1) Using Windows Explorer, create a new folder on your local hard drive where you want to save your macro-enabled workbooks.  This folder can be created directly off the root of your drive or it can exist several folders down, such as under Documents.

2) Open Microsoft Excel 2010.

3) Open the File menu and choose Options, found near the bottom of the menu.

4) Choose Trust Center.

5) In the detail area to the right, click the Trust Center Settings button.

6) In the resulting dialog, click Trusted Locations in the left menu.

7) In the detail area, you will see a list of the folders that Excel currently trusts.  To add a new folder, click the button: Add New Location.

8) In the dialog that appears, select Browse to navigate to the folder where you want to store your macros.  (Yes, you could enter the folder's path directly, but navigating to it prevents any possibility of typing the path incorrectly.)

9) Click the checkbox to allow Excel to also trust subfolders of this location.

10) Click OK to complete this dialog.

11) Click OK in the Trust Center dialog to close it unless you have additional folders to mark as trusted locations.

12) Click OK in the Excel Options dialog to close it unless you have further changes to make.

At this point, you can open any macro-enabled workbook stored in this trusted location without seeing the Security Warning; macros are automatically enabled when you open the workbook.  This applies not only to workbooks you create, but also to workbooks created by others.  Therefore, make sure you trust the macros that other people may have embedded in their workbooks before adding those workbooks to this folder.

How can you examine the macros in a workbook that you get from someone?  After you open it, you must enable the macros if you want to be able to open them and examine the code.  Don't worry: if you go straight to the Developer tab, there should be no events to trigger the macros into executing.  On the Developer tab, click Macros in the Code group.  The dialog that appears displays all the macros in the currently open workbooks.  If you only have the one workbook open, then this list is by definition all the macros in that workbook, as shown in the following figure.

If there is more than one macro, select the macro you want to examine by clicking on it once and then clicking on the Edit button found on the right side of the dialog.  This action opens the Microsoft Visual Basic for Applications integrated development environment shown below.

As I mentioned before, this series will not attempt to cover everything involved in programming in Excel, but if you have any familiarity with the Microsoft Visual Studio development environment, you should have no trouble figuring out how to use this environment.  Since we are dealing with a simple macro, the code generated when recording the macro appears in the default code window of the development environment.  You can read through the code one line at a time to determine what it does and whether you really do trust it.  You can even execute the code one line at a time, although this would not be my recommended method for determining whether the macro has any malicious code, because even when stepping through the macro one command at a time, it is too easy to execute just one command too many.

After reviewing the code, you can close the Visual Basic for Applications window and then either go back to the Macros list to examine another macro, or exit Excel and copy the examined workbook to your trusted location folder.  If for any reason you do not trust one or more of the macros in the workbook, you should just exit Excel and delete the workbook from your system so you don't accidentally run any of the macros at a future time.  If the workbook comes from a co-worker, you may want to talk with them about your concerns about loading and running their workbook.

If, on the other hand, you plan on creating macros that only you will use while developing spreadsheets, macros that the workbook itself does not depend on if you give it to another user, you have another option when creating your macros.

Remember that when you create a macro, Excel displays a dialog asking for the name of the macro and the shortcut key.  Right after the shortcut key and before the Description, there is another option: Store macro in.  This option defaults to the current workbook.  However, by using the dropdown, you can see two other options: a new workbook and something called the Personal Macro Workbook.  If you select Personal Macro Workbook, Excel creates a new workbook behind the scenes and marks it as hidden.  This workbook is not used to store data.  Rather, it is used only to store your macros.  In fact, every time you open a spreadsheet, Excel also opens the Personal Macro Workbook.  You can then use the macros stored there in other spreadsheets without having to recreate them in each new workbook.  You might think of this workbook as your macro library.  Remember that other users will not have access to the macros you create in your Personal Macro Workbook, so do not store anything there that they might need.  However, it is a great way to keep all of your prized tools available.

So where is this Personal Macro Workbook stored?  On my machine the path is:

C:\Users\Michael\AppData\Roaming\Microsoft\Excel\XLSTART\PERSONAL.XLSB

This folder is obviously not where I would normally store my workbooks.  However, it is the place where Microsoft keeps information that it wants to preserve from session to session.  The XLSTART folder under Excel holds workbooks that you want Excel to open automatically when it starts, in this case the PERSONAL.XLSB workbook, which contains the macros I want to use in any workbook I'm developing without having to store those macros in each workbook.
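If you are curious where this folder lives on your own machine, one quick way to check (the exact path will certainly differ from mine) is to ask Excel for its startup path from the Immediate window in the VBA editor:

? Application.StartupPath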

So today, you should come away knowing how to enable macros automatically by creating a Trusted Location for your Excel workbooks with embedded macros.  You should also know how to save macros to your own personal macro library so you can use them in any workbook you develop without having to add those macros to each one.  I hope this makes creating and using macros more efficient for you.

Next time, we will return to our initial macro that restructures our production data one more time to show not only how you can edit your macros, but how to make them loop so you do not get carpal tunnel syndrome from repeatedly pressing those shortcut keys.

Google Jumps Into the Computer Hardware Market with ChromeBook

I'm writing this blog on the last day of the 2012 FETC conference here in Orlando, FL.  FETC stands for Florida Educational Technology Conference.  If you did not already know that, you probably would not have found the definition in any of the conference materials.  But that is not unusual these days.  I've seen more and more people use acronyms without defining them, assuming that the reader automatically knows what they mean.  Unfortunately, that is not always easy.  Maybe it bothers me so much because my first editor called me out on every acronym I did not define the first time I used it.

When I first moved to Florida, I had to learn that DOE means Department of Education (state government) rather than Department of Energy (federal government).  In case you are wondering, it is the Education Department at the federal level.  Then I heard about the CIA on the Food channel.  Were spies from foreign countries trying to steal our recipes, so that we needed to call in Central Intelligence Agency operatives to protect our cuisine?  Later I found out they were really talking about the Culinary Institute of America.  So it really wasn't an international threat against America after all.

I guess my crusade against TLAs (Three-Letter Acronyms) continues.  If you care to see how bad the use of TLAs is, check out the following web site.  It lets you enter an acronym to see dozens of possible meanings.

http://acronyms.thefreedictionary.com/tla

(There are other sites that do the same thing.)

My real focus in this blog is not to talk about acronyms, as much fun as that may be, but to discuss Google's Eye Opener keynote.  It focused on the new ChromeBook computer as a potential new tool for students to use in the classroom.  The device does have several very good features, including:

  • No-touch setup
  • Boot in 8 seconds or less
  • Automatic updates to the OS and the browser
  • Remote management of the devices
  • No anti-virus software needed
  • Thousands of apps, including some, such as Gmail, Calendar, and Google Docs, that run offline

All of that sounds great especially for smaller districts that do not have extensive technology support teams.  But is it the right choice for you?

After the session, I went out to the Internet to get some more information on these devices.  First, it appears that there are two primary manufacturers of this device, Acer and Samsung.  The price point for a ChromeBook varies between $350 and $450 per unit.  So the first thing that struck me was that the price, while less than an Apple iPad, was not all that much less than a standard netbook.  So how is it different from a netbook?

I noticed that the ChromeBook does not have a standard hard drive.  Rather, it uses a 16 GB SSD drive.  Those are solid-state drives, for those not familiar with that TLA.  That is essentially the same memory used in your USB (Universal Serial Bus) thumb drives.  In that respect, it is more like a smart phone or an iPad or similar tablet.  In fact, to add applications, you will need to go out to Google's Chrome app store. The apps found here are similar to those you find in Apple's app store or the Android Market.  While I did not see anything specific that would limit you to the Chrome app store, I also did not see anything to suggest that you could install and run Windows or Linux applications either.  It also was not clear to me whether other apps, such as those written for Android devices, would work on the ChromeBook.   If so, that just means another app store and, for developers, another platform to port your applications to.

The elimination of anti-virus software also sounds great at first, but again I have to wonder if it is just too soon to know whether viruses can be created to specifically target applications that run on the ChromeBook.  The documentation claims to protect the system by providing multiple layers of protection, including sandboxing of running apps, data encryption, and a verified boot process.

A major advantage for school use is the long battery life.  The Acer version boasts a 6-hour battery life, while the Samsung version claims up to 8.5 hours.   The Samsung also has a slightly larger screen, giving it a slight edge between the two products.  But then again, even my Acer netbook gets at least 6 hours on its battery.

So is the ChromeBook really a better alternative to the netbook or a tablet?  BTW (by the way), tablets have touch-screen technology; the ChromeBook does not.  The netbook has an internal hard drive, which can hold more data and applications, including standard Windows applications.  On the other hand, Windows applications tend to cost more than apps in an app store.   Neither the ChromeBook nor the netbook has a CD (Compact Disc) or DVD (Digital Versatile Disc) drive.   Most other features are comparable.

The bottom line to me is that I do not see the ChromeBook as a runaway winner.  Rather, it is just another strong competitor in the market for people who need basic computer applications and Internet access but don't really need the power of full notebooks or desktop computers.  Could this work for students in the classroom?  Actually, I think an SSD drive is a better choice for student computers since it has no moving parts, unlike a traditional hard drive.  In any case, I don't believe it will be long before other netbooks begin to follow this trend, especially as the price of SSD drives begins to fall.  So if you have children going to a school that supports the use of digital technology, it might just be worth a look.

Programming in Microsoft Excel – Macros – Part 2

Last time we introduced how to create a simple macro in Excel and use it to restructure a list of product inventory data.  During that discussion, we mentioned creating a shortcut key to execute the macro from the keyboard.  That may be all you need if you are the only one who needs to use the macro or if you only have a few macros.  However, what do you do if you plan to distribute your workbook to others to use and you want and expect them to be able to use your macros?  Do you really think they will go through the Developer ribbon and open the Macros dialog to discover what macros you created and how to access them?

Probably not.  Therefore, you want to 'publish' your macros in a way that other users can easily see and use them.  The easy way to do this is either to add them to one of the existing ribbons or to create your own macro ribbon.  In this blog, I'll take you through the steps for adding your macro to an existing ribbon and for creating a new macro ribbon to publish your macros.  I'll also show you how to add macros to the Quick Access Toolbar at the top of the Excel screen.

To customize the current ribbon, you can right click in a blank area to the right of the ribbon to show the following menu.

This action takes you to the same Excel Options dialog you saw last time that you could also get to by opening the File menu and selecting Options near the bottom of the menu.  By right clicking on the ribbon where you want to add the macro, the dialog opens immediately to the correct ribbon tab and expands the groups.  However, Excel does not know what you want to do to the ribbon, so it opens the list of Popular Commands as your source for commands to add to the ribbon.  Switch the Choose Commands From dropdown box to: Macros to see only the macros you created.

If you want to add a macro to an existing ribbon, you will quickly find out that you cannot add commands to the existing groups.  However, you can add a custom group to the ribbon where you can place your macros by following these steps.

  • Click on the tab (ribbon) name where you want to add your macro.
  • Click the New Group button at the bottom of the column.  This creates a new group named: New Group (Custom).
  • Rename the new group by clicking the Rename button. (Excel should select the new group upon creating it.  If not, select it first.)  For this example, I will rename my group: Macros.
  • In the Display Name dialog, provide a unique name for the group.  Although this dialog (shown later in these steps) displays icons that you can associate with the group, I think this was a mistake, as Excel only uses icons with items within a group.
  • Now, with your new group renamed and selected, return to the list of macros on the left, choose the one you want to add, and click the Add button.

  • With the macro added to your macro group, you can rename the macro to something more user-friendly by clicking the Rename button as shown below.  Note that you can either accept the default icon for macros or select a different icon to represent your command.

  • Click OK to close the Rename dialog. Click OK on the Excel Options dialog to close it.

Now when you open the ribbon, you will see your Macros group and any macros you added to the group.  To execute the macro, you now only need to position your cursor on the cell where you want the macro to begin and then click on the desired macro in the ribbon.

To create your own Macro ribbon, the only addition to the above instructions is that before you create a new group in an existing ribbon, click the New Tab button at the bottom of the Customize the Ribbon page of the Excel Options dialog.  This creates not only a new tab (ribbon) but also adds the first group to the ribbon.  You can rename the tab by selecting it and clicking the Rename button, just like you would do to rename a group or an item within a group (a really multi-functional button!).  You can even rearrange the order of tabs, groups within tabs, and items within groups by using the up and down arrow buttons along the right side of the tab list control.  The figure to the right shows a new tab I created called Macros with a group named: Macros by MPA that has only a single command, Inventory Restructure, displayed with a custom icon.

You can also add a macro to the Quick Access Toolbar at the top of the Excel window by clicking on the arrow to the right of the Quick Access Toolbar and then selecting More Commands:

You can also right click in an empty area on any ribbon and select Customize Quick Access Toolbar.  Both options take you again to the Excel Options dialog but this time to the Quick Access Toolbar page.  Again select Macros from the list of available commands rather than Popular Commands.  Choose your macro and then click the Add button as shown below:

Notice that in this toolbar, there is no Rename option.  For some reason, Microsoft decided to name this button Modify.  In any case, you can click the Modify button after adding your macro to the toolbar to rename it and to provide a different icon.  You might say, why do I need to provide a display name since the Quick Access Toolbar only shows the icon?  Well, Excel uses the Display Name in the Tooltip when you hover your mouse pointer over the icon.  Therefore, you may still want to change it.

Which method should you use to make your macros available?  I suppose if you have many macros, you might want to create a separate tab (ribbon) for them, perhaps separating them into groups based on their function.   If you only have a few macros, you could either add them to the end of one of the existing ribbons in their own group or, to make it easy for end-users to find them, put them in the Quick Access Toolbar.

Next time, we will look at different places where you can store your macros and how you might choose the best location.