Appending Data Sources Manually in PowerBI

Last week I looked at how I can load an entire folder of source data files into PowerBI and let PowerBI determine how to append the resulting files together. This method works great if all the data source files are of the same type and have the same schema/structure. But what if some of the files come from an Access data source, some from CSV files, some from Excel, etc.? Furthermore, what if the column order is different, or the column headers are different? These are cases in which you may need to import each data source individually and then edit the query for each source to align the column order, column names, data types, etc. before combining the data sets together. Let's see how that looks.

For simplicity's sake, I am going to start from the same folder as last week, which holds 10 CSV files, each representing a different sales region.

Although I am not going to show images from each data load, I would proceed to load each CSV file into my PowerBI data model one file at a time rather than loading the folder like last time. This method creates a separate table within PowerBI for each data set as shown in the following figure.

I can view the data in each table by selecting the table name in the Fields panel as shown above. The first thing I would want to do is 'fix' any inconsistencies in the data schema of these tables by selecting the table and then clicking the Edit Queries button in the External Data group of the Home ribbon. I basically want to build a common table structure across all of the data sources, including a consistent name for each column that appears in multiple tables.

Note that if I have a data source with a unique column that does not appear in the other tables, I can keep it. Later, when I append the individual tables into a single table, the unique column will be carried into the final data model, but the field will be blank (null) for rows that come from tables that do not have that column.
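Under the hood, the append operation I will use later relies on the M function Table.Combine, which matches columns by name. Here is a minimal sketch (with tiny made-up tables, not my actual region files) of the null-fill behavior just described:

    let
        // Two hypothetical region tables; only Region2 has a Country column
        Region1 = #table({"Region", "SalesAmount"}, {{"Northeast", 100}}),
        Region2 = #table({"Region", "SalesAmount", "Country"}, {{"Pacific", 200, "Australia"}}),
        // Table.Combine matches columns by name, so rows from Region1
        // get null in the Country column they do not have
        Combined = Table.Combine({Region1, Region2})
    in
        Combined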

In my first example here, the data schemas from each data source are the same, but they would not have to be. So after simply loading the data from each data source, I am ready to start combining the data into a single table for analysis. To do this, I click the Edit Queries button in the External Data group of the Home ribbon and then select one table to start the process.

While in Edit Queries mode, there is a button called Append Queries in the Combine group on the far right of the Home tab. I can use this button to begin the process of combining the tables.

The Append Queries button opens the dialog shown below that lets me select which table to append. I can only append one table at a time, so to append all 10 tables together I need to repeat this step 9 times.

As shown in the figure below, each appended query gets the generic name Appended Query followed by a number (except the first one, which has no number).
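If you look at the query's M code in the Advanced Editor, each append is just one more step wrapping the previous one, which is where those default step names come from. A sketch with stand-in region tables rather than my real queries:

    let
        // Stand-ins for three of the ten region queries (hypothetical data)
        Region1 = #table({"Region", "SalesAmount"}, {{"Northeast", 10}}),
        Region2 = #table({"Region", "SalesAmount"}, {{"Southwest", 20}}),
        Region3 = #table({"Region", "SalesAmount"}, {{"Pacific", 30}}),
        // The first append gets the un-numbered default step name...
        #"Appended Query" = Table.Combine({Region1, Region2}),
        // ...and each additional append gets a numbered one
        #"Appended Query1" = Table.Combine({#"Appended Query", Region3})
    in
        #"Appended Query1"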

This default name is not descriptive enough to help me identify which appended query refers to which data set. If I want to remove one of the data sets from the final table, I would have to click the settings button (the gear icon) on the far right of the applied step to reopen the Append dialog and see which table is being appended. Then, if I want to remove that dataset, I could click the 'X' to the left of the applied step to delete that one step.

However, a better option is to right click on the default applied step name and select Rename from the popup menu that appears.

This option allows me to select the current applied step name and replace it with a more meaningful name.

The following figure shows a much clearer picture of which data set is being referenced in each step. It also shows that I have finished appending my 9 additional data sets.

Note that any steps applied to the individual data sets are still applied first prior to the data being appended to the final dataset.

Note: One thing that I did not do here but probably should have is to begin by making a duplicate copy of the table that I wanted to begin with so that I could preserve the original table with only its own custom transforms. Then using the duplicated table, I could append the rest of the data sets.

When all the data is appended into a single dataset, I can close and apply this transformation so that any data refreshes can repeat these steps.

I can now go to my Report tab and start to build the visualizations that I want. In the final figure for this session shown below, I create a table of Sales Amount by Sales Territory and include the Sales Territory Group, Region, and Country. I then click on the Sales Amount column to sort the table by this column in ascending order. You can see very quickly that most of the sales occur in Australia and the least sales are in the Central Region of the United States.

Beneath the table, I create another table with the Sales Territory Group and the Sales Amount. PowerBI automatically sums the sales for each Territory Group. After creating the table, I change the visualization to a Donut chart, which displays only three segments, to create the appearance shown below.

As with all PowerBI visualizations, the data in the table is linked to the data in the donut chart. If I click on the North America segment of the donut, the table on top refreshes to display only the 6 rows representing sales in North America. Similarly, if I click on the Pacific segment of the donut chart, the table above immediately updates and displays only the one line for Australia.

That’s it for this week. Come back next week for more PowerBI fun.

C’ya.

Using Power BI On-line

During the last several months, I’ve taken you through an initial exploration of Power BI Desktop, a program that you can download directly from Microsoft. However, there is another version of Power BI, one that resides in the cloud. Many of its features are similar to Power BI Desktop, but there are also significant differences. Let’s begin the new year by taking a look.

First you have to sign up to use Power BI. Click on the following link to go to the self-service signup page:

https://powerbi.microsoft.com/en-us/documentation/powerbi-service-self-service-signup-for-power-bi/

You should see a link on the page that says: self-service signup process, which takes you to:

https://powerbi.microsoft.com/en-us/

The screen (shown below) tells you that you can sign up and get started with Power BI for free. That is almost true; there are some notable exceptions. For example, you cannot sign up with an email address provided by consumer email services or telecommunication providers, which means you cannot sign up with a Microsoft Hotmail account. Also, .gov and .mil addresses are not allowed at this time. I guess that surprised me a bit because government and military intelligence is something that we are in great need of. However, I guess if you want to call it BI, then strictly speaking it is business intelligence only.

Another reason you may not be able to sign up is that your IT department has disabled self-service signup for Power BI, even though your organization may have the Office 365 SKU that supports Power BI. Again, Microsoft provides instructions for your administrator on how to fix that problem should they decide to let you use Power BI. I can only assume you must work for an organization that has no interest in business intelligence if they tell you no.

Of course, Microsoft does provide a way for you to personally register for a new Office 365 trial subscription and use that ID to sign up to use Power BI.

You will get an email from PowerBI/Microsoft asking you to confirm your email address.

After you have successfully navigated all the steps to get a Power BI account, you should see an initial Power BI screen like the following:

If it is not already there, you can download the financial sample file from: http://go.microsoft.com/fwlink/?linkID=619356. I just don't remember how I got it originally. To install the data after downloading it, I used the Get Data button at the bottom of the left navigation panel.

I can then click the Get button within the Files box to load the CSV file I just downloaded.

This will provide me with a flat file dataset to work with as a sample. The next figure shows what appears to be an entry in both the Datasets group and the Dashboards group, with one entry in each named Financial Sample.

By opening the Dataset Financial Sample, I can choose which fields I want to work with from the Fields list.

I can simply click the check box to the left of the field name or I can click on a field name and drag it over to the working area. For this demo, I will use Sales and Month Name.

While I do have the ability to customize some features, such as the formatting of the title text, and even to highlight one of the columns as shown below, I quickly realize that starting from a raw data file like this in Power BI, I cannot define a custom sort order for the Month dimension to force the months to appear in chronological order.

Rather than being in chronological order, it appears that the month columns are ordered from the largest to the smallest in sales value. While possible, this is typically not what I would want to display.

I’m going to leave that problem as it is for now and attempt to click on the Dashboard entry for Financial Sample. I am told that I have a report with unsaved changes. When did I work on a report since I never clicked inside the Reports group? Well, it appears that when I define visualizations in the dataset area, I am actually creating a report and must save it if I want to keep it.

When I click Save, Power BI prompts for a report name. For now, I am just going to call it: TestReport.

With the report saved, I now can see the Financial Sample default dashboard.

The first thing I notice at the top of the page is a text box asking me to enter a question about the data. This is something new that we have not seen in Power BI Desktop. The interesting thing is that I can enter a question (or even a statement, since the syntax of a question is not as important as the words used in the sentence) that references one or more of the data fields in my dataset. In this case, I can ask a question like: What are the total sales? (Interestingly, I can just type: Total sales and get the same result.) As I type enough of the question for Power BI to guess what I want, it starts displaying results.

I can continue to drill down to get sales by product by just adding the words by product to my question/statement from the above example. Note that Power BI automatically decides to switch from a simple text answer for a single value to a bar chart with the bars representing my product dimension.

I can just as easily request to see total sales by month. Note in this image, the months are displayed in the correct chronological order even though the report I created earlier did not show them in that order by default.

I can even display sales by country as shown below.

By simply changing the visualization, however, I can create a map of sales by country which might be a better way to present geographic data to management.

So I’m going to stop here in this week’s tour of Power BI. I’ve discussed some of the issues you may need to deal with to get access to Power BI and have shown how to load and use a simple flat data file in the form of a CSV file. In future weeks, I will explore more of the features of Power BI and especially how you can use Power BI Desktop or even Excel to create the datasets you need to display in Power BI.

One last point that I will cover in more detail at a later time, but which you might want to be aware of for now. One of the advantages of using Power BI to display your data analysis is that you can save your dashboards and make them available to others within your organization as a simple download. Yes, with Power BI Desktop you can save your model and copy the file from your machine to others. However, if you make a change, you must redistribute the changed file each time. Also, refreshing the data is a manual process at this time when using Power BI Desktop. We will see in future blogs how these issues are addressed by Power BI to make your life simpler.

C'ya. Next time I'll take our Contoso data and show how easy it is to use it from a OneDrive file.

The Power of the Power BI Query Editor

The power of the Query Editor in Power BI goes beyond allowing you to load data from a variety of sources, transform that data to meet your analysis needs, create new columns of calculated results, and define special views of the data. One of the things that impresses me is that it remembers all of the steps you put into getting your data ready for use and can rerun all those steps the next time you reload raw data from your data source. So before we roll into a new year and begin looking at Power BI Desktop's big brother, PowerBI.com, let's take a look at the Power BI Desktop Query Editor.

In the past, when I first opened Power BI, I would immediately select the Get Data button from the Home ribbon. What if, instead, I immediately clicked on the Edit Queries button in the External Data group?

The Query Editor opens its own window with 4 tabs: Home, Transform, Add Column, and View. Looking at the Home ribbon, you should note that many of the most commonly used actions appear on this menu as well as on the other ribbons. At this time, there does not appear to be a way to customize the actions that appear on these ribbons, but as with many things Power BI, that may come in time.

The first thing I need to do is to open a data set to work with. When I click on Get Data, I get the same drop-down of data source options that I would see from the Home ribbon of the Power BI Desktop screen.

In this case, I am going to load my standard Contoso dataset, including the FactSales table with related product and date data. I will reference my localhost SQL Server; of course, you may have to reference a network SQL Server. I generally do not enter the database name on the first dialog, but select the database from the second screen. Note that on the second screen, I can just click the down arrow to the left of the database name to open the database and display a list of the tables within it. To my mind, this is a lot easier than spelling the name of the database correctly before selecting the tables I want to work with.
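For the curious, the step this wizard produces looks roughly like the following M code (the server and database names below are my local setup and the usual Contoso sample name; substitute your own):

    let
        // Connect to the server/database and pick out one table
        Source = Sql.Database("LocalHost", "ContosoRetailDW"),
        FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data]
    in
        FactSales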

Because I am using a SQL Server database, I have the option of either importing the data or using DirectQuery, which does not download a copy of the data to my local data model but rather reads the data directly from the SQL Server instance.

Because I selected 5 tables from the database, the Query Editor lists 5 queries along the left side of the screen, a preview of the data in the center, and the query properties and any steps applied to the data on the right side of the screen.

I can click on any of the five queries to switch to the one that I want to work on. In the above image, I selected the DimDate query. Now suppose that I want to make some changes to this data. Perhaps the first thing I want to do is to eliminate all the fiscal calendar fields from the table. One way to do this is to select the columns I want to remove: I can click on the first column and then, while pressing the CTRL key, click on each of the other columns I want to remove. If the columns I want to remove are sequential, I can instead click on the first column, then press the SHIFT key and click on the column header of the last column.

Still on the Home ribbon of the Query Editor, I can click on the Remove Columns button to get rid of the columns I do not want to keep in my final model. I notice that as I do this, another step appears in the Applied Steps section in the right column.
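That Removed Columns step is a single call to Table.RemoveColumns. A minimal runnable sketch, using a one-row stand-in for DimDate and a hypothetical fiscal column name:

    let
        DimDate = #table({"DateKey", "FiscalYear"}, {{#date(2009, 1, 1), 2009}}),
        // Drop the columns by name; by default the step errors on refresh
        // if a listed column no longer exists in the source
        #"Removed Columns" = Table.RemoveColumns(DimDate, {"FiscalYear"})
    in
        #"Removed Columns"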

Next I want to add a new column to the date table that I will use to sort the months chronologically rather than alphabetically. That column will simply consist of the month number (January = 1, February = 2, etc.). I can do this by clicking on the Add Column tab and selecting Add Custom Column from the General group. Note the option Add Index Column. This option allows you to add a sequential number in a new column for each row that begins with either a '0' or a '1'. You can also add a custom index that begins with any number and has a custom increment which may be different from '1'. This could be an interesting way to add a surrogate key to a table when you have to merge data from two different sources (maybe domestic and international) into a single table for reporting purposes and you still need to create a 'new' index because the domestic index and the international index may have overlapping values.
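As a sketch of that surrogate-key idea with Table.AddIndexColumn (the table names and the offset of 1000001 are made up for illustration):

    let
        Domestic = #table({"OrderID"}, {{"D-1"}, {"D-2"}}),
        International = #table({"OrderID"}, {{"I-1"}}),
        // Start each index in a non-overlapping range so the combined
        // table still has a unique key column
        DomesticKeyed = Table.AddIndexColumn(Domestic, "SurrogateKey", 1, 1),
        InternationalKeyed = Table.AddIndexColumn(International, "SurrogateKey", 1000001, 1),
        Merged = Table.Combine({DomesticKeyed, InternationalKeyed})
    in
        Merged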

When I choose to add a custom column, the Query Editor displays a dialog that lets me define the name for the column as well as the formula. All of that is well and good, but the Query Editor provides an easier solution. Instead of clicking Add Custom Column, note that the Add Column ribbon has sections for working with text, numbers, and dates. In this case, all I need do is select the [DateKey] column and then click on the lower portion of the Date button to display the menu of options. If I select Month, I get a second set of options for different calculations related to month. The first option creates the new column and calculates the month number, while the other options return the first day of the month or the last day of the month as a date, or the number of days in the month as an integer. That is pretty cool too.

Now the default name for the column is just Month. However, I can easily change that by right-clicking on the column header, selecting Rename from the dropdown menu, and entering the column name I want.
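Together, the two ribbon actions generate steps along these lines (again with a one-row stand-in for DimDate):

    let
        DimDate = #table({"DateKey"}, {{#date(2009, 2, 14)}}),
        // Date.Month returns 1 for January, 2 for February, and so on
        #"Inserted Month" = Table.AddColumn(DimDate, "Month",
            each Date.Month([DateKey]), Int64.Type),
        // The rename is recorded as its own applied step
        #"Renamed Columns" = Table.RenameColumns(#"Inserted Month",
            {{"Month", "MonthNumber"}})
    in
        #"Renamed Columns"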

As I said before, each of these steps has been recorded in the Applied Steps section on the right of the screen. If I make a mistake or want to try something different, all I need do is click the 'X' to the left of the step name to remove that step and then try something else. Remember that I am only working with a preview of the data at this point; I have not downloaded the data yet.

I could proceed with other data changes for this query as well as the other four queries, but I hope you get the idea that you can perform any transformation, create new columns, etc. that you may want. I will come back in the future to discuss some of the other features of the Query Editor that could save you time.

Let’s assume that I am done making changes to my data. It is now time to load the data from SQL Server into my model. From the Home ribbon of the Query Editor, I click the button: Close & Apply.

This will close the Query Editor window and begin loading the data from SQL Server into my local in-memory data model using the transformations in the applied steps of each query. This is the secret to minimizing the data you bring into your model from a data source. Any excess columns will be removed. You can even get rid of specific rows (like we did in PowerPivot to eliminate some of the product categories when we loaded data many weeks ago). You can also create new columns, rename new or existing columns, and basically structure the data the way you want to use it. All of these transformations happen as the data is loaded into your local data model. Then each time you refresh your data, these same query steps are performed consistently.
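For example, removing rows is just another recorded step built on Table.SelectRows; a sketch with a made-up category name:

    let
        Products = #table({"ProductCategory"}, {{"Cameras"}, {"Games and Toys"}}),
        // Keep only the rows where the condition is true; the excluded
        // rows never reach the in-memory model
        Filtered = Table.SelectRows(Products, each [ProductCategory] <> "Games and Toys")
    in
        Filtered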

Well, it is the beginning of a new year, at least a new calendar year. I think it is time that we move on to Power BI Desktop's big brother, PowerBI.com, next time so that we can show how and when each tool should be used and how they can work together.

C’ya next time.

Using a Word Cloud to Analyze Responses

Have you ever had to analyze the results of a survey? If your survey consists of multiple choice, true/false, or even Likert-scale type questions, the results are fairly easy to tabulate. All you need to do is count the number of each unique response for each question and then display the data using a table, a column/bar chart, or even a pie chart (please, only if there are six or fewer choices).

But what if you have a question like: 'Describe your boss with a single word' and you allow respondents to enter any word, truncating the response at the first blank? I suppose you could count the number of times each word is used and then display the results in a table or a column/bar chart. However, because there are potentially dozens of words that could be used, a column/bar chart could get crowded pretty fast.
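Once the responses are loaded into Power BI (as described below), that tabulation is a one-step Group By in the Query Editor; a minimal sketch with made-up responses:

    let
        Responses = #table({"Word"}, {{"fair"}, {"demanding"}, {"fair"}}),
        // Group on the word and count the rows in each group
        Counts = Table.Group(Responses, {"Word"}, {{"Count", each Table.RowCount(_)}})
    in
        Counts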

Perhaps even more difficult to analyze is asking the responder to describe what they find important in a web site, especially when you give them a free-form multi-line text field. Now you have to deal with entire sentences like: 'The most important thing to me is a strong search engine to make it easier to find the information I want.' How do you parse that type of response into meaningful data? That is what I am going to look at this week.

To begin, I need to capture the survey data in a format that I can import into Power BI. Generally I will have two choices for this, a simple text file of the column(s) from the survey that I want to analyze or perhaps the survey software (like SharePoint) can export the data to an Excel spreadsheet. In either case, I can then import the data into Power BI.

In terms of how I might want to visualize the data in Power BI, column and bar charts are not an option, as I mentioned before, because there are simply too many words and thus columns to display anything meaningful. However, a common visualization for just this purpose is something called a word cloud. Word clouds have been around for some time, especially on blog sites, to help identify the most common words used in the blog. Fortunately, Power BI has such a visualization in its Visuals Gallery site, which can be found at: https://app.powerbi.com/visuals. This site contains visualizations that have been created and submitted by the Power BI community. There are instructions on the site on how to submit custom visualizations that you create using the Power BI Developer tool. The Microsoft team will then review the visualization and, if it appears stable and relevant, publish it to the site. Anyway, there is a visualization on this site called: WordCloud.

To use a visualization from this site, click on it. This displays a dialog, as shown in the following figure, which includes some information about the visualization: who created it, the license information, support, etc. However, note the Download Visualization button.

Clicking the download button may require agreeing to the terms of use of the visualization, and it will then download a file with the extension .pbiviz. I recommend creating a separate folder on your machine where you can download this and other visualizations from this site. Next, I find the Visualizations box on my Power BI Report page and locate the button with the 3 dots (ellipsis).

Clicking on this button first displays a warning about importing custom visualizations as shown below.

Next I will use a standard Open dialog box to locate the file that I just downloaded.

Now with my WordCloud visualization available and my data loaded into Power BI, I can simply drag the column that contains my text onto the blank Reports page (or onto a blank area of the page if I had other visualizations already started) and then, with the data selected, choose the WordCloud visualization.

Initially, the display is cluttered with common words like: the, and, an, a, is, had, have, other, my, I, and many other words that we use in everyday conversation. This noise detracts from the real information we want to gather from the word cloud. So we need to find a way to eliminate (or minimize) the occurrence of these words.

How can we do this? The brute force method (and for now the only method I know) is to return to the Data page, select the Edit Queries button on the Home tab, and then use the Replace Values option in the Any Column group of the Transform ribbon to define values to replace.

Now is where I have to be a bit careful. If I simply try to replace “an” with the empty string: “”, I could alter other words like “change” to “chge” or “analysis” to “alysis”. Neither of these words would make sense. Therefore, I decided to always add a blank as a prefix and a suffix to whatever word I wanted to find and eliminate and replace it with a single blank. (Because if I don’t, a phrase like: “search for an employee phone number” would become “search foremployee phone number”.)
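In M, each of those replacements is a Table.ReplaceValue step using Replacer.ReplaceText (which replaces substrings rather than whole values); a sketch with one made-up response:

    let
        Survey = #table({"Response"}, {{"search for an employee phone number"}}),
        // Pad the stop word with blanks so that removing "an" cannot
        // damage words like "change" or "analysis"
        Cleaned = Table.ReplaceValue(Survey, " an ", " ", Replacer.ReplaceText, {"Response"})
    in
        Cleaned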

The problem is that if the words I am trying to eliminate begin a line or are followed by a period or comma, my replace will not work (unless I also do replaces that include the periods and commas).

Anyway, my point is that this can be a time consuming activity of performing a few dozen replace statements and then switching to the Report page to see what the word cloud now includes. Let’s assume that I’ve already done several rounds of replace statements and my word cloud looks like the following:

Another thing I can do to focus on the most important words is to change some of the properties of the visualization. Click on the Pencil in the Visualizations column to display the properties and open the General property group as shown below:

Notice that you can change the maximum number of words displayed, the minimum and maximum font size and whether word breaking is on or off.

First the number of words displayed greatly affects the appearance of the cloud. The following two images show both 200 words and 50 words. (The previous image was 100 words.)

Obviously, changing the minimum and maximum font size helps give me a visual idea of which words occurred most often. The fourth option is the control that breaks the column text into individual words. Otherwise, the entire phrase in the column is used, which may be good for short predefined responses, but not so good for my case here.

You should have also noted that each time I change one of the properties, the word cloud regenerates. Another, somewhat less obvious, thing that occurs is that special characters are converted to blanks when the words are parsed. Therefore, a phrase like "site's URL" gets parsed into three words: "site", "s", and "URL", which explains why single letters like "s" and "t" often appear even in a 50-word cloud. Of course, it might be possible to go back and replace "'s " with " " or "'t " with " ". Again, I need to be very creative and very careful to come up with a final word cloud.
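You can roughly reproduce this parsing behavior with Text.SplitAny, which splits on every character in the separator string; a quick sketch:

    let
        // Splitting on blanks and apostrophes strands the single "s"
        Tokens = Text.SplitAny("site's URL", " '")
    in
        Tokens  // returns {"site", "s", "URL"}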

Anyway, I hope you found that interesting. In future weeks I will look at additional Power BI features and visualization. C’ya then.

Power BI can Load Data from Multiple Data Sources

This week I’ll show you how to use Power BI to pull together data from multiple data sources that you can then use to create visualizations.

The first step is to open your copy of Power BI Desktop. Then click on the Get Data button. I'm going to begin with some data from SQL Server, so I do not have to click the More… button to see all the available data sources; SQL Server is right on the main dropdown.

To work with a SQL Server database, I must first enter the name of the SQL Server that I want to connect to. Note that while the dialog shown below prompts for the name of the database, this information is optional. You might think, 'How can the database name be optional?' In a moment, I will show you that Power BI first needs to connect to the server, but then it can retrieve the database names that you have access to and display them for you to select. This protects you from possibly misspelling the database name in this dialog. Since I am running everything on a single machine, my SQL Server name is LocalHost.
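Behind that picker, the M function Sql.Databases (note the plural) does the work; a sketch using my LocalHost server and the Contoso sample database name:

    let
        // Sql.Databases lists every database on the server that your
        // credentials can see, which is how Power BI offers a picker
        // instead of making you type the name
        Source = Sql.Databases("LocalHost"),
        Contoso = Source{[Name = "ContosoRetailDW"]}[Data]
    in
        Contoso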

Of course before you can connect to the server, you must provide credentials. I am running everything on a Surface Pro 3 so I will use my Windows credentials. However, if you need to connect to another server on your network, you may need separate credentials.

The following dialog may or may not appear depending on the database you are connecting to and your version of SQL Server. However, in most cases, clicking the OK button allows you to connect to your data as long as you have rights to the SQL Server box.

When Power BI connects to the server, it begins with a list of the possible databases along the left side of the screen. By opening a database (clicking on the arrow to the left of the database symbol), I can see the tables in the database. I can select multiple tables from this database; I can even select multiple tables from multiple databases. With each table I select, I will see a sample of the data in the right panel. Note that unlike PowerPivot, there is no way to limit the columns or rows from the table at this point. Maybe in the next major release.

You’ll notice that this week I return to my favorite sample dataset, Contoso.

If I select multiple tables to load, they all appear in the following dialog that shows me the progress of loading my data. In the image below, I can see that I’ve selected five tables from the database. As the data load progresses, Power BI shows me the number of records loaded in the line beneath the table name.

After the data loads, I notice that the table names appear in the Fields column on the far right of the desktop. But, you say, these are just table names? If I click the arrow to the left of each table name, the table expands to show the names of the fields, essentially grouped by table.

Next I can click on one of the icons on the left side of the desktop. The first icon is the report surface. I typically work there after I have loaded all my data and made any changes to the columns and/or rows, so some might think that this icon should be on the bottom. The second icon is the data icon. I can use this icon to edit the column definitions, add new columns, or remove columns that I no longer need. I showed some of these techniques in prior weeks of this blog.

However, I usually start with the bottom icon, Relationships. Relationships are the heart of any multi-table analysis. If I do not define the relationships between the tables first, I really cannot do any analysis. I might not be able to create some columns either. So let's start there.

The diagram that Power BI creates shows the tables that have been loaded into the model, and if relationships were defined in the source data, it tries to create those relationships for me automatically. Notice that the diagram clearly identifies the one side of each relationship with a '1' and the many side with an asterisk.

Unlike the diagram view in PowerPivot, there is no ability to edit, delete, or create new relationships from this view. Nor can I delete columns, define hierarchies, or rename columns here. I can hide individual columns or entire tables from the report view of the model, and I can maximize the view of an individual table diagram, but that's it. Again, maybe in the next major release of Power BI these additional features will appear so that the functionality matches that of PowerPivot's diagram view.

Next I’m going to load some additional data into my model beginning with some data from an Access database and then data from Excel. I’m not going to take the space here to show screen images of these processes because between the above description and prior week blog discussions, I believe that you can figure out how to load data from different data sources. The only problem you may encounter is the need for new or updated connectivity tools to access your other data sources.

Once I have loaded all my data, my first action is to go to the Relationships view. You can see in the following figure that Power BI automatically shows my additional tables that I’ve loaded, but they show no connection to the rest of the tables in the data model. I need to fix this before going forward.

Find the Manage Relationships button in the Relationships group in the ribbon and click on it.

A dialog appears that displays the current relationships between the tables. This view is similar to the Manage Relationships dialog in PowerPivot. I can add, edit, and even delete relationships here. However, the interesting button is the one labelled Autodetect.

When I click Autodetect, Power BI attempts to find and create relationships between the tables that are not currently connected to the data model. It does this by comparing the field names in the new tables with the field names in the tables of the data model, looking for fields with the same name and the same data type. Of course, one table must serve as the parent (one side) and the other table must serve as the child (many side). In most cases, you will start with a fact table and relate it to branching lookup tables; the fact table serves as the many side and each lookup table as the one side of its relationship. If Power BI is successful, it will create the relationships and first tell you in a dialog box, such as the one shown below, how many relationships it detected.

After closing the information dialog reporting the number of relationships found, the relationships are created and the results for this model appear in the updated Manage Relationships dialog shown here.

Note that at this point, I could edit any of the relationships that were not correctly created. I could also add relationships between tables that perhaps had different field names and could not be automatically detected. It is a good practice to check the Relationships diagram to ensure that every table is connected to the model through at least one relationship.

I am now ready to go to the Reports desktop to create some simple charts.

Creating a report is simple. First, click in a blank area of the Reports desktop. Then click the checkbox to the left of the value field you want to count, sum, or average, such as Sales Amount. Then click on the field or fields by which you want to report Sales Amount (or any other value field). For example, I might want to see sales by year, by product category, or by channel. You can create as many visualizations as you want on the desktop, although you should keep in mind that they need to be large enough to be readable. Let's start with the three shown below.

Not a bad start. Here is a fun feature that Power BI supports that PowerPivot's pivot charts do not. Just click on the Year 2009 column in the chart in the upper left. Immediately, all charts on the page change to show the sales related to the year 2009 in a darker color and sales from other years in a lighter color.

Note that I did not have to do anything special to get this to work. I did not have to create a filter, a slicer, or anything. Similarly, I could click on columns in other charts to slice and dice the data different ways.

Filtering the data on the page is done entirely differently. To filter the data that appears on the page, you have to define a page filter by dragging the field or fields that you want to filter by into the area beneath Page Level Filters, found in the column immediately to the left of the column with all the table fields. Filter fields typically are string values, and typically there are not a large number of unique values. To filter the page, just click in the checkbox to the left of each value that you want to include on the page. You can filter on one or many values at a time.

After selecting a page-level filter, the visualizations on the page change automatically to represent only the filtered data rather than the entire dataset. The following desktop image represents only the data from the year 2009.

Well, that’s all for this week, but if you have been following along, save your data from this week because I’ll pick up this sample data from this point next time.

C’ya.