CDK Global is Eliminating Unencrypted Transmissions

CDK letter explaining the end of Telnet

I was glad to receive the following letter from CDK Global yesterday. Although CDK has taken other steps to protect sensitive data over the years, this is an essential move, as is the planned future move to HTTPS and SFTP.

Security Update: Eliminating Unencrypted Transmissions

Dear Valued Customer,

As we continue to improve security, we’re eliminating unencrypted transmission to and from the DMS. As a next step, we’ll be moving from the Telnet protocol to Secure Shell (SSH). We’ll soon be rolling out updates to all DMS and PC clients to use SSH. And, starting January 1, CDK will begin disabling Telnet for inbound transmissions to the DMS.

How this change will impact your dealership

This change will have no visible impact on dealers that only use CDK software and applications. CDK-certified partner applications will be compliant with the change from Telnet to SSH. However, non-CDK software that interfaces with the DMS may be affected. This includes scripting via terminal emulators such as Reflection, which CDK no longer supports. As a reminder, CDK only supports the use of BlueZone terminal emulator, which is included as a component in our software package.

Look for future communications about eliminating other unencrypted data transmissions, including changes from HTTP to HTTPS, and from FTP to SFTP. As always, we strongly recommend keeping all of your CDK software updated to the most recent versions to ensure you have the latest features and security enhancements.

If you have concerns regarding the January 1, 2017 date for transition to SSH, please contact CDK Support. Thank you for your consideration of this matter as we work to implement more secure transmission methods.

Sincerely,

CDK Global

With the new ETL tools added to Excel 2013 (and 2016), many CDK clients are enjoying the benefits of Business Intelligence (BI). These astute clients of CDK use their data from the DMS to improve the performance of their dealerships.

If your dealership has not made arrangements to move to CDK’s Dealer Data Access (DDA) (previously known as MDA or Managed Data Access), you really should. It is an awesome product for dealers who want to join the data revolution and manage their store using the many new self-service Business Intelligence tools.

If some of you are still using the unattended download features of the Reflection terminal emulator (TE), you will need to migrate to the now-supported BlueZone TE (ADP Terminal Emulator 4.1.3.1010.msi). For those of you who have Excel workbooks or Word documents that use Visual Basic for Applications to access the DMS, there is the te_vba.msi installer. Both are available in the CDK Software Library on Drive.

I don’t believe that CDK has added the unattended download macro to BlueZone, but even if they don’t, you can still get at your data. If you need help with that, let me know.
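If it helps, here is a minimal sketch of one way to replace a Reflection unattended download: schedule a nightly PowerShell extract with the Windows Task Scheduler. The script path, task name and run time below are placeholders only; the extraction script itself would query the DMS along the lines I describe in the posts below.

# A sketch only -- script path, task name and run time are placeholders.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\Export-DmsData.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At '5:00 AM'
Register-ScheduledTask -TaskName 'DMS Nightly Extract' -Action $action -Trigger $trigger
# Add -User and -Password to Register-ScheduledTask if the extract must run while nobody is logged on.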

Start Collecting Your Dealership’s Data So You Can Understand & Improve Your Store’s Results – Reposted

This is a re-post of my post on ADP (CDK Global) Expertise today.

What I am about to explain to you will change your life if you are at all frustrated with your store’s performance.

It may sound strange but usually performance issues arise from a lack of feedback to the workgroups contained within your operations. I use the term workgroups because there are numerous groups of employees in your dealership that need feedback to perform optimally. These groups of employees (workgroups) include:

  • An Individual
  • A Team – usually front-end sales teams, but this can include techs and other employees
  • A Department
  • A Store
  • Same stores – a workgroup important for comparing Year over Year (YoY) results
  • A Division – a group of stores by franchise or geographical location
  • A Region – usually geographical
  • A Country – if you have stores in different countries

How would you like to improve the profitability of your store or stores right now?

We will need to use your data from your DMS (Dealership Management System), extracted properly for analysis and presented in a way that ranks your workgroups. The following SharePoint Online interactive report has a tab that ranks the performance of two workgroups: stores and individual F&I Managers:

SharePoint Online interactive ranking report (click to enlarge)

The premier business data presentation platform, SharePoint Online, is free with Enterprise Office 365 subscriptions. With SharePoint Online you can control who is able to see each individual report based on their logon. Those who don’t have permission will not even see the report link on the site.

Feedback is extremely important if you want your employees to improve. If your employees don’t get feedback they will:

  • Get the data on their own or ask for it, or
  • Just maintain the status quo (under-performing), and,
  • In either case, eventually leave your employ.

The sad part is, the good ones leave (because they are not recognized) along with the not-so-good (because they are not motivated).

We need to take advantage of a person’s or workgroup’s natural competitive instincts!

Critical to reaping the performance improvement that feedback can create is ranking the workgroups. Feedback against objectives can also work, but that doesn’t get the competitive juices flowing. We have to rank them!

In my experience, when you rank workgroups they all rise, even the bottom individual or workgroup. That’s because everyone wants (on some level) to do good.

There are three steps to making this happen:

  1. Automate the Collection of the Data.
  2. Train your CFO and Controllers on how to use the latest tools to analyze the data efficiently.
  3. Train your CFO and Controllers on how to create reports with the latest tools and publish them to SharePoint and Office365.

I have a lot of experience with CDK, InfinitiNet and DealerTrack and so that is what I will stick with here. You can access your data on these platforms but I am not so sure about Reynolds anymore.

Step 1:

To automate the collection of your data, all you really need is an ODBC connection, some knowledge of the DMS data model, and a subscription to Microsoft Azure. You would then ask someone like me to write a PowerShell script that runs on your server, extracts the data from the DMS at least daily, and moves it to a SQL Azure database (or an on-premises one).
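As a rough sketch of what such a script looks like (the DSN, credentials, Azure server name and staging table are placeholders, and the view name is a CDK-style example), the core of it is just an ODBC pull followed by a bulk load:

# Sketch: pull yesterday's GL detail from the DMS over ODBC and bulk-load it into SQL Azure.
# All connection details and object names are placeholders.
$dmsConn = New-Object System.Data.Odbc.OdbcConnection('DSN=DMS_DSN;UID=extract;PWD=******')
$adapter = New-Object System.Data.Odbc.OdbcDataAdapter('SELECT * FROM ed.gljedetail_v WHERE accountingdate >= CURRENT_DATE - 1', $dmsConn)
$table   = New-Object System.Data.DataTable
[void]$adapter.Fill($table)                       # Fill opens and closes the ODBC connection for us

$azureCs = 'Server=tcp:yourserver.database.windows.net,1433;Database=DealerBI;User ID=loader;Password=******;Encrypt=True'
$bulk    = New-Object System.Data.SqlClient.SqlBulkCopy($azureCs)
$bulk.DestinationTableName = 'dbo.gljedetail_stage'   # staging table whose columns line up with the view
$bulk.WriteToServer($table)                           # land the rows for the BI reports to consume
$bulk.Close()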

You can skip this step if you are on CDK and you have signed up for their wonderful MDA (Managed Data Access) or DDA (Dealer Data Access) product. Those products serve up daily refreshes of data on a PostgreSQL database on your CDK server already.

Step 2:

CFOs and Controllers, and even advanced Business Managers, are the ones to analyze the data and create the reports. They need training, and I will be setting up some courses here in Las Vegas to provide it. I don’t advocate having IT people do this because they just don’t have the knowledge of how these numbers are generated, where they come from, and how they inter-relate. So they need training in:

  • The DMS data model i.e., where the data resides and what transactions put it there.
  • How to use Excel 2013 and 2016 with PowerPivot and PowerQuery and other tools to create interactive reports so they refresh automatically when the underlying data is updated. You don’t want a set of gorgeous reports that require your people to manually update them. We will let SharePoint Online handle that.

Step 3:

Because this data is confidential, and because we need to make sure it is correct, this step also falls to the financial side of the business. I recommend this be handled by the CFO, Controller and/or Business Manager. In larger dealer groups the CFO or Controller can hire someone with this expertise, but that person needs to understand car dealership transactions and accounting. There are two kinds of reporting in the dealership (I teach all the competencies needed for both):

  1. Permanent periodic reports. These include rankings, Over-Age Inventory, Heat sheets, A/R Aging, CIT, Incentives, Upcoming Lease Terminations reporting, etc. These reports need to be built once and then published to their own SharePoint page with automatic refresh and proper permissions.
  2. Ad-hoc reports. These are reports that you need to create one time, or on-the-fly, to do analytics prior to making a major business decision. In either case, properly trained financial people need to prepare them.

I will be sending out an email announcing a seminar in the spring to lay this out for Dealers, CFOs and Controllers in detail. If you want to be on the list you can email me at jdonnelly at jdonnelly dot com or give me a call at 702.498.8777

PowerShell Cmdlets for Auto Dealerships

I am getting more and more calls for help with CDK Global, DealerTrack, and InfinitiNet data extraction and I want to post now about how I can help best:

  • I will write you a PowerShell cmdlet that will exactly meet your needs

I find the Microsoft PowerShell platform is the best way for folks who are not technically adept to extract data when, where, and as often as they want. It helps to know how to run a PowerShell cmdlet, but that is not critical. I can get you up and running with a 15-minute phone call.

The beauty of PowerShell is that I can write you a cmdlet to do whatever it is you want done. You then import it into your PowerShell ISE and run it with different parameters as often as you want. The output you get back can be opened directly in Excel (if that is how you want it) by PowerShell. I can even write the cmdlet so that it emails the information around, or delivers it in many other ways.
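To give you an idea of the shape these take, here is a bare-bones sketch of such a cmdlet. The function name, parameters, addresses and the Invoke-DmsQuery helper are made up for illustration; a real one would contain the actual extraction logic for your DMS.

function Get-DealRecap {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)] [string]   $StoreCode,
        [Parameter(Mandatory)] [datetime] $StartDate,
        [Parameter(Mandatory)] [datetime] $EndDate,
        [string]   $OutFile = "$env:USERPROFILE\Desktop\DealRecap.csv",
        [string[]] $EmailTo
    )

    # The actual ODBC query against the DMS goes here; Invoke-DmsQuery is a hypothetical helper.
    $rows = Invoke-DmsQuery -StoreCode $StoreCode -From $StartDate -To $EndDate

    $rows | Export-Csv -Path $OutFile -NoTypeInformation      # a CSV Excel opens directly
    if ($EmailTo) {
        Send-MailMessage -To $EmailTo -From 'reports@yourstore.com' -Subject "Deal recap $StoreCode" -Attachments $OutFile -SmtpServer 'smtp.yourstore.com'
    }
    Invoke-Item $OutFile                                       # pops the result open in Excel
}

# Example run from the ISE:
# Get-DealRecap -StoreCode 'XXX-A' -StartDate '2016-11-01' -EndDate '2016-11-30' -EmailTo 'gm@yourstore.com'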

The reason I like PowerShell is because running the report is easy for those who don’t know PowerShell and yet I can make PowerShell do pretty much anything you need.

I will write your custom cmdlet quickly and for a fixed fee because, not only am I adept at writing PowerShell cmdlets and modules, but I know DMS data models very well.

I would estimate that most extraction cmdlets will be quoted in the $150 to $350 range, but if you want a cmdlet that not only extracts the data but also sends email reports or marketing messages, it could (depending on how elaborate) cost upwards of $1,000 – $2,000.

Monthly support for the first 3 cmdlets is free; add $100 a month for each batch of 3 thereafter.

Email me now for a quote at jdonnelly at jdonnelly.com

 

Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part III

The last post entitled Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part II wrapped up derivation of the SQL statement that we need to use to create the perfect workbook for presenting accounting schedules in Excel using PowerPivot. Continuing to fine-tune our project, I made some changes to Part I and Part II, so be sure to copy the new SQL statements if you have been following along.

Today we create the PivotTable in a workbook and next time in Part IV I will show you in a video how to customize and use it effectively.

The final version of the query (we created last time) we will use is as follows:

SELECT a.cora_acct_code, a.hostitemid, a.accountnumber, g.accountdescription, a.companyid, c.journalid, c.refer, a.control, c.accountingdate, c.controltype, c.control2, c.controltype2, c.detaildescription, c.postingamount, c.postingsequence, c.postingtime, a.currentmonth, a.dateofoldestscheditem, a.schedulenumber, b.sequenceno
FROM ed.glschedule_v a
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.schedulenumber::text = b.schedulenumber::text AND a.accountnumber::text = b.accountnumber::text AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlcompanyid = c.companyid AND b.dtlaccountingdate = c.accountingdate AND b.accountnumber = c.accountnumber AND b.dtljournalid = c.journalid AND b.dtlpostingsequence = c.postingsequence AND b.dtlpostingtime = c.postingtime AND b.dtlrefer = c.refer AND b.control = c.control)
INNER JOIN ed.glcoa_v g ON (a.cora_acct_code = g.cora_acct_code AND a.accountnumber = g.accountnumber)
WHERE a.currentmonth={ts '2014-12-01 12:00:00 AM'} AND a.cora_acct_code  In ('XXXX-A','YYYY-A','ZZZZ-A')
UNION
SELECT d.cora_acct_code, d.hostitemid, d.accountnumber, h.accountdescription, d.companyid, e.journalid, e.refer, d.control, e.accountingdate, e.controltype, e.control2, e.controltype2, e.detaildescription, e.postingamount, e.postingsequence, e.postingtime, d.currentmonth, d.dateofoldestscheditem, d.schedulenumber, f.sequenceno 
FROM ed.glschedule_v d
INNER JOIN ed.glschedulexrefbalfwd_v f ON (d.cora_acct_code = f.cora_acct_code AND d.schedulenumber::text = f.schedulenumber::text AND d.accountnumber::text = f.accountnumber::text AND d.control::text = f.control::text AND d.currentmonth = f.currentmonth)
INNER JOIN ed.glschedulebalfwddetail_v e ON (f.dtlcompanyid = e.companyid AND f.dtlaccountingdate = e.accountingdate AND f.accountnumber = e.accountnumber AND f.dtljournalid = e.journalid AND f.dtlpostingsequence = e.postingsequence AND f.dtlpostingtime = e.postingtime AND f.dtlrefer = e.refer AND f.schedulenumber::text = e.schedulenumber::text AND f.currentmonth = e.currentmonth AND f.control = e.control)
INNER JOIN ed.glcoa_v h ON (d.cora_acct_code = h.cora_acct_code AND d.accountnumber = h.accountnumber)
WHERE d.currentmonth={ts '2014-12-01 12:00:00 AM'} AND d.cora_acct_code In ('XXXX-A','YYYY-A','ZZZZ-A')

Notice that I do not include a JOIN with the glschedulesetup_v view because I know that the dealerships whose data I am using in this example do not have single account schedules for all the accounts I may be interested in viewing.

In order to work through this exercise you must have PowerPivot enabled in Excel 2013. It may work with PowerPivot in Excel 2010 but I have not tried it. Once you open Excel and open a blank workbook click on the POWERPIVOT tab as shown below:

Excel 2013 Ribbon POWERPIVOT tab
Excel 2013 Ribbon with POWERPIVOT tab selected (click to enlarge)

In the Data Model section of the POWERPIVOT ribbon click on Manage which will open up POWERPIVOT as follows:

PowerPivot Data Model Manage Ribbon
PowerPivot Data Model Manage Ribbon (click to enlarge)

So, in order to add a table to the data model, we will use an existing connection to the Managed Data Access server located at your dealership’s CDK Global Drive IP address. If you need to create a connection, CDK will be happy to walk you through installing the npgsql OLE DB or pgsql ODBC drivers, etc. In this post I am assuming the adp_report database already has a connection set up on your PC.
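If you want to confirm the connection works before you go into PowerPivot, a quick sanity check from PowerShell looks something like this (the driver name, IP address and credentials are placeholders; use whatever CDK set up for you):

# Quick check that this PC can reach the adp_report database over ODBC.
$cs   = 'Driver={PostgreSQL Unicode};Server=10.0.0.50;Port=5432;Database=adp_report;Uid=youruser;Pwd=******;'
$conn = New-Object System.Data.Odbc.OdbcConnection($cs)
$conn.Open()
"Connected to adp_report. Server version: $($conn.ServerVersion)"
$conn.Close()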

To add a table to your data model containing the schedule detail we need, click on Existing Connections in the Get External Data section of the ribbon presented in the Home tab of POWERPIVOT. It will look like this:

"Select an Existing Connection" dialog
The “Select an Existing Connection” dialog box (click to enlarge)

Make sure the adp_report connection has been highlighted in blue and click on the Open button at the bottom which will take you to the first page of the Table Import Wizard like so:

Choose How to Import
Table Import Wizard (click to enlarge)

You could explore all the views available on adp_report by selecting the first item, but since we already know what we want and have the SQL statement already built, we will select the option to “Write a query that will specify the table to import,” which then takes you to the next dialog of the Table Import Wizard as follows:

Specify a SQL Query
Specify a SQL Query (click to enlarge)

Next, change the query name, paste our query into the large box, and click on Validate. You will see that the query validates like so:

SQL Query Validated
Specify a SQL Query -Validated (click to enlarge)

You then click on the Finish button and the query runs and the table is imported like so:

Importing Data
Importing Data (click to enlarge)

The query ran for approximately 20 minutes for two dealerships, one very large and one medium-sized. When it is done, the Table Import Wizard will report Success and tell you the total number of rows imported (in this case, 108,071 rows):

Success
Success (click to enlarge)

So now you will see the schedule detail show up in a tab in PowerPivot. The tab will have the same name as the query, which in this case is Schedule_Query_40. I have renamed it ScheduleDetail (you can rename tables by right-clicking on the tab and selecting Rename). Next you will need to click on PivotTable (which PowerPivot will then create in Excel) on the Home ribbon in PowerPivot like so:

Create PivotTable in PowerPivot
Create PivotTable selection in PowerPivot (click to enlarge)

Excel will then exit the Manage window and open the workbook window and will ask you where you want to place the new PivotTable:

Create PivotTable
Create PivotTable (click to enlarge)

Here you should place the PivotTable on a new sheet as we will build an interactive environment around the PivotTable using slicers. After you click OK the sheet with the empty PivotTable on it should look like this:

Empty PivotTable
Empty PivotTable (click to enlarge)

Now that we have a PivotTable there are many useful ways to build your interactive accounting schedules. The best way to show you the possibilities is to continue the process in Part IV utilizing video.

Stay tuned.

Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part II

The last post on Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part I had a lot of information in it and I spent some time this week working on the perfect workbook setup to present the schedules to the end user. As a result, I made some changes to Part I relative to the use of the glschedulesetup_v view.

To begin with, the first dealership whose data I worked with violated one of my assumptions: that all scheduled accounts would have at least one schedule set up with only that account on it. That is, the account would appear in a schedule by itself. Since this wasn’t the case, I quickly made some changes to the post.

I left the JOIN of the glschedulesetup_v view in the SQL query with the caveat that:

We are making a large assumption here though; that the dealership has single account schedules for all scheduled accounts. If not, we won’t want to JOIN the glschedulesetup_v view at all. In that case, we will deal with duplications with our slicers in the resulting pivot table.

Well, for my work this week, I ended up eliminating the JOIN of the glschedulesetup_v view entirely because it didn’t help and, as I will show you, restricting the pivot table to one set of schedule transactions per account is fairly straightforward with the use of slicers.

Another change I made to the original post was in an attempt to optimize the query. Using the EXPLAIN function in PostgreSQL, I determined that the cost of running the query with a LEFT OUTER JOIN versus an INNER JOIN between the glschedulexref_v view and the gljedetail_v view was significant. I thought about it a bit and realized that, by definition, there had to be a one-to-one relationship between the cross reference entry and the GL detail entry in order for the report to actually be a schedule. We will test this, however, to make sure.
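For those who want to repeat the comparison, here is roughly how I run EXPLAIN from PowerShell over the same ODBC connection (the DSN, credentials and saved query file names are placeholders); the top line of each plan carries the total estimated cost:

# Compare estimated plan costs for the INNER JOIN vs. LEFT OUTER JOIN versions of the query.
$conn = New-Object System.Data.Odbc.OdbcConnection('DSN=adp_report;Uid=youruser;Pwd=******;')
$conn.Open()
foreach ($file in 'C:\Queries\Schedule_Inner.sql', 'C:\Queries\Schedule_LeftOuter.sql') {
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'EXPLAIN ' + (Get-Content -Raw $file)   # ask PostgreSQL for the plan, not the rows
    $reader = $cmd.ExecuteReader()
    Write-Host "`n--- $file ---"
    while ($reader.Read()) { Write-Host $reader.GetString(0) }
    $reader.Close()
}
$conn.Close()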

So for these reasons, the query implementing the accounting schedule data connection on PowerPivot was reduced to the following:

SELECT a.cora_acct_code, a.hostitemid, a.accountnumber, g.accountdescription, a.companyid, c.journalid, c.refer, a.control, c.accountingdate, c.controltype, c.control2, c.controltype2, c.detaildescription, c.postingamount, c.postingsequence, c.postingtime, a.currentmonth, a.dateofoldestscheditem, a.schedulenumber, b.sequenceno
FROM ed.glschedule_v a
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.schedulenumber::text = b.schedulenumber::text AND a.accountnumber::text = b.accountnumber::text AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlcompanyid = c.companyid AND b.dtlaccountingdate = c.accountingdate AND b.accountnumber = c.accountnumber AND b.dtljournalid = c.journalid AND b.dtlpostingsequence = c.postingsequence AND b.dtlpostingtime = c.postingtime AND b.dtlrefer = c.refer AND b.control = c.control)
INNER JOIN ed.glcoa_v g ON (a.cora_acct_code = g.cora_acct_code AND a.accountnumber = g.accountnumber)
WHERE a.currentmonth={ts '2014-11-01 12:00:00 AM'} AND a.cora_acct_code = 'XXX-A'

In my experimentation this week I was pleasantly surprised at how wonderful this solution really is. With PowerPivot we can join all kinds of data tables to our schedules to augment the information presented with names, phone numbers on receivable schedules, employees, vehicle details etc. In my final workbook I added queries to pull A/R customer data, employee data, vendor data, car deal data and vehicle inventory data. I made special sheets containing pivot tables for vehicle inventory related accounts and car deal related accounts. It made the accounting schedule review process so much more productive.

We can now move on to the second part of this series where we expand the query to handle “Balance Forward” schedules.

In order to accommodate “Balance Forward” schedules we have to add all the “Balance Forward” detail to the schedule detail table created using the above MDA query. This “Balance Forward” detail is maintained in these two views:

  • glschedulexrefbalfwd_v
  • glschedulebalfwddetail_v

which are analogous to these two views for “Detail Forward” schedules:

  • glschedulexref_v
  • gljedetail_v

To do this, we simply UNION it in. We copy the SELECT statement we made in the last post (above) and paste it again at the end of the first one and insert the word UNION between them. We then change the “Detail Forward” views in the SELECT statement after the UNION to the “Balance Forward” views and change the aliases so they don’t conflict (pardon the reversal of e and f). The result is:

SELECT a.cora_acct_code, a.hostitemid, a.accountnumber, g.accountdescription, a.companyid, c.journalid, c.refer, a.control, c.accountingdate, c.controltype, c.control2, c.controltype2, c.detaildescription, c.postingamount, c.postingsequence, c.postingtime, a.currentmonth, a.dateofoldestscheditem, a.schedulenumber, b.sequenceno
FROM ed.glschedule_v a
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.schedulenumber::text = b.schedulenumber::text AND a.accountnumber::text = b.accountnumber::text AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlcompanyid = c.companyid AND b.dtlaccountingdate = c.accountingdate AND b.accountnumber = c.accountnumber AND b.dtljournalid = c.journalid AND b.dtlpostingsequence = c.postingsequence AND b.dtlpostingtime = c.postingtime AND b.dtlrefer = c.refer AND b.control = c.control)
INNER JOIN ed.glcoa_v g ON (a.cora_acct_code = g.cora_acct_code AND a.accountnumber = g.accountnumber)
WHERE a.currentmonth={ts '2014-11-01 12:00:00 AM'} AND a.cora_acct_code  In ('XXX-A')
UNION
SELECT d.cora_acct_code, d.hostitemid, d.accountnumber, h.accountdescription, d.companyid, e.journalid, e.refer, d.control, e.accountingdate, e.controltype, e.control2, e.controltype2, e.detaildescription, e.postingamount, e.postingsequence, e.postingtime, d.currentmonth, d.dateofoldestscheditem, d.schedulenumber, f.sequenceno 
FROM ed.glschedule_v d
INNER JOIN ed.glschedulexrefbalfwd_v f ON (d.cora_acct_code = f.cora_acct_code AND d.schedulenumber::text = f.schedulenumber::text AND d.accountnumber::text = f.accountnumber::text AND d.control::text = f.control::text AND d.currentmonth = f.currentmonth)
INNER JOIN ed.glschedulebalfwddetail_v e ON (f.dtlcompanyid = e.companyid AND f.dtlaccountingdate = e.accountingdate AND f.accountnumber = e.accountnumber AND f.dtljournalid = e.journalid AND f.dtlpostingsequence = e.postingsequence AND f.dtlpostingtime = e.postingtime AND f.dtlrefer = e.refer AND f.schedulenumber::text = e.schedulenumber::text AND f.currentmonth = e.currentmonth AND f.control = e.control)
INNER JOIN ed.glcoa_v h ON (d.cora_acct_code = h.cora_acct_code AND d.accountnumber = h.accountnumber)
WHERE d.currentmonth={ts '2014-11-01 12:00:00 AM'} AND d.cora_acct_code In ('XXX-A')

It should be noted at this point that this query takes a considerable amount of time. I needed to review the accounting schedules on 7 dealerships this week, so in my WHERE clause I used a.cora_acct_code In ('XXX1-A','XXX2-A','XXX3-A','XXX4-A','XXX5-A','XXX6-A','XXX7-A'). To run the query on the seven dealerships took about an hour. That’s quick compared to the queries I tested earlier using LEFT OUTER JOIN. For the 7 stores it returned 195,000 rows.
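Since the only things that change from run to run are the month and the store list, I keep the query in a file with {MONTH} and {STORES} tokens (my own convention, not anything CDK provides) and stamp the values in from PowerShell rather than editing the SQL by hand:

# Stamp the month and store list into a saved copy of the UNION query above.
$month  = '2014-11-01'
$stores = @('XXX1-A','XXX2-A','XXX3-A','XXX4-A','XXX5-A','XXX6-A','XXX7-A')

$inList   = ($stores | ForEach-Object { "'$_'" }) -join ','        # -> 'XXX1-A','XXX2-A',...
$template = Get-Content -Raw 'C:\Queries\ScheduleDetail.sql'       # query text containing {MONTH} / {STORES} tokens
$sql      = $template -replace '\{MONTH\}', $month -replace '\{STORES\}', $inList
Set-Clipboard -Value $sql    # needs PowerShell 5; paste into the PowerPivot Table Import Wizard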

In the next post in the series I will add some other tables to the PowerPivot data model and we will build some pivot tables to present the schedules.

Stay tuned.

Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part I

Some background

Last year, when I was evaluating the new PostgreSQL data server (appropriately named “Managed Data Access” or MDA) for ADP, I asked them at about the 6-month mark if they had a way to query the appropriate General Ledger tables such that we could analyze accounting schedules in Excel. In particular, I was very intent on utilizing the new Business Intelligence add-in, PowerPivot. Sadly, there was not one accountant among the dev team, and they showed little enthusiasm. I challenged them to write a SQL query that could bring into a PowerPivot data model what I needed to review 28 dealerships’ accounting schedules (to make sure the books are clean, you see) in one day.

I am a CPA by education (UCLA Accounting), experience (practicing California Automotive CPA for years) and training (keeping up with technology). I knew early on that to be effective in financial management one would need to be able to analyze data: accounting data and other operational data. To this end, I have always been an early adopter of technology that would help me do so. I bought my first PC early on with a copy of Lotus 1-2-3 and never looked back.

Throughout those years, the PC and spreadsheet software were limited in their usefulness by the availability of data connections with which to get data into the machine to analyze. When I was starting out there were several EDP (Electronic Data Processing) systems in the marketplace to help auto dealers keep track of things. Even though I have worked with a number of different systems, the ones that always seemed to dominate were ADP and Reynolds and Reynolds (R&R). Luckily, both of these vendors had built into their systems a feature to extract data. It was slow and it wasn’t pretty. It wasn’t relational. R&R called it “Query Builder” and ADP called it “RPX” (a la RePort Generator EXtract maybe). They both used the Kermit file transfer protocol running in your Telnet session. You see, ADP and R&R both chose the PICK operating system to develop their EDP systems. I don’t know the history of these two companies, but I smell a partner fall-out and a competitive firm starting up with the renegade partners.

PICK was awesome, but it feels a bit like list processing and it did not have a relational database built in. Apparently the business logic and the data logic were all combined together. You could pull information out of this list or that list and work with them locally in Excel, but you couldn’t join lists. The query language (ENGLISH) had no functions except the standard non-logical operators. Why was PICK awesome? Because list items could be functions (pointing to other list items) or stored procedures, and that was powerful before SQL Server 1.0 came on the scene in 1989.

I had the pleasure of working with a large auto dealership group that started out with Saturn dealerships in the early 90s and, as a result, was using the EDS dealership system. The EDS system utilized the IBM iSeries AS400 with a DB2 relational database and I was very happy. I could write SQL statements all day long with extensive joins and functions and pull data using ODBC connections to Excel, Tableau, SharePoint and a host of other solutions. ADP bought EDS and quickly moved to convert its dealership base to Drive.

The sad part of this whole story is that it was only last year that a viable, relational database appeared in either Reynolds and Reynolds or ADP. [I don’t mention UCS here because even though they had a relational database (DB2), I believe connectivity was blocked].

So the winner is ADP, now CDK Global…

A condition of our conversion from EDS to ADP was that ADP provide the DAP (Data Access Program) to us, as many of you required as well. Although I didn’t, many of you spun up your own data servers and put together a relational database using ADP’s data objects, figuring out the relationships, etc. I applaud all of you who went that route because you were able to give your dealerships real Business Intelligence (BI) utilizing such tools as Microsoft’s SQL Server Analysis and Reporting Services while the rest of us waited. At the time I worked for a group that didn’t see the capabilities and/or benefits of BI, so it was never a priority. The SharePoint (Excel Services) reporting system I implemented and utilized to keep my managers informed ceased to be maintained after I moved on.

At this point, because of the recent addition of the PostgreSQL data server to the ADP (CDK Global) system, I will spend my time talking about CDK Global to any dealership that is interested in besting its competitors. My esteem for ADP arises not only from the new data server but also from the data model itself, which is incredible. Obviously it was developed over the years with care and completeness by professionals. It is a pleasure to work with such a database and, now that it is relational, I am ready to develop exciting data applications.

In all fairness to R&R, I don’t know if they have anything new in this area, but I know Bob Brockman’s attitude toward a dealer’s data and it doesn’t seem compatible with open connectivity. Until someone tells me different, I will cease to follow R&R, UCS or any other system while I pursue developing BI solutions for dealers with ADP. I think ADP is moving in the right direction like no other. Perhaps DealerTrack is also on the right path as they are IBM based but, in spite of their unique operating system, ADP is the front runner. It’s kind of like aligning with Microsoft if you want to build powerful business applications. In this blog and in others you will hear me talk about ADP, PostgreSQL and Microsoft products for the most part, with the occasional Android and iOS mobile device development thrown in because they are everywhere.

One more thing. I write these how-to type posts so the community can take the work I’ve done and build on it. If you read back through my old posts, I blogged about pulling data from DAP using PowerShell mostly and I didn’t do it right or perfect or even efficiently because I am not a PowerShell expert. But it was enough to get a lot of you started quickly and easily pulling data from DAP and working with it in Excel. Some of you are using Out-SQL now and using PowerShell to MERGE in the data differentials I would imagine.

In my blog posts I typically don’t work through the development completely unless it comes from a project for a client (and the client didn’t mind me sharing it with the community). I’ve included in this post enough information to produce accounting schedules in PowerPivot accurately and efficiently. It should also help you to extend the schedule pivot to include data that isn’t available in an ADP printed schedule unless the data is in one of the schedule fields. That’s not to say that the PivotTable isn’t cumbersome at times and with a little extra time couldn’t be improved. Rather than lay it all out for you I expect you all to run with it and make it better and blog about it on your own blog or put the improvements in the comments below.

Schedule-related views in the PostgreSQL database

I knew that creating schedules in PowerPivot would be possible, and that it would depend on my being able to decipher the following to get a pivot table that balances to the GL, namely:

  • What views would be needed
  • What the relationships between the views are and,
  • How can we pivot the results by date, month, account numbers and schedule numbers

The views involved in the process are:

  • glschedule_v
  • glschedulesetup_v
  • glschedulexref_v
  • glschedulexrefbalfwd_v
  • glschedulebalfwddetail_v
  • gljedetail_v

glschedule_v view

The glschedule_v view gives us all of the control numbers on all scheduled accounts by schedule number. If the GL account appears on more than one schedule, the control number will appear that many times. Here’s the query I ran to inspect this table and the first few lines of output:

glschedule
glschedule_v view (click to enlarge)

Most of the columns in the above query output are straightforward. However, the interesting column here is the currentmonth column. After perusing the file and running a few different queries, it seems to me that the currentmonth column gives us all the control numbers for the schedule that existed for the currentmonth, whether that month was in fact the previous month or the actual current month. For example, running the above query today (11-30-14) with glschedule_v.currentmonth = "11-01-14" gives me 909 rows of control numbers for the schedule numbers where the "23100" account appears. If I run the same query again with glschedule_v.currentmonth = "10-01-14", it gives me 828 rows of control numbers for the same schedule numbers. The difference, it seems to me, is that in moving from October to November the net number of control numbers on the schedule grew, because more control numbers were newly added to the account than went to a zero balance.
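If you want to reproduce that month-over-month comparison without eyeballing row counts, a quick count by currentmonth does it (the DSN and credentials are placeholders; the view and column names follow glschedule_v as described above):

# Count control numbers per currentmonth for the 23100 account.
$conn = New-Object System.Data.Odbc.OdbcConnection('DSN=adp_report;Uid=youruser;Pwd=******;')
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT currentmonth, COUNT(*) AS controlcount FROM ed.glschedule_v WHERE accountnumber::text = '23100' AND currentmonth IN ({ts '2014-10-01 00:00:00'}, {ts '2014-11-01 00:00:00'}) GROUP BY currentmonth ORDER BY currentmonth"
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { "{0:d}  {1}" -f $reader.GetDateTime(0), $reader.GetValue(1) }
$conn.Close()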

glschedulesetup_v view

The glschedulesetup_v view contains a row for each schedule number set up for the company. This table helps us restrict account number detail to appearing only once, even though the account may appear on several different schedules (side-by-side and alone). Here’s the query I ran to inspect this table and the first few lines of output:

glschedulesetup_v view (click to enlarge)

Off to the right we can see the schedulenumber column, which is unique. The interesting column is the account2 column, and we can see by the query up top that I selected for records where account2 was null. This means that I only want single account schedules, not side-by-side schedules (such as with inventory and flooring together on a schedule). It is possible to add up to 7 accounts on a schedule, so there are fields through account7. For our purposes, though, our query works because we reject any schedules with a value in account2, and that eliminates anything other than single account schedules. We are making a large assumption here though; that the dealership has single account schedules for all scheduled accounts. If not, we won’t want to JOIN the glschedulesetup_v view at all. In that case, we will deal with duplications with our slicers in the resulting pivot table.

glschedulexref_v view

The glschedulexref_v view is the one that tells us where in the gljedetail_v view to get the records to populate the schedule. This is the linking table that tells what General Ledger entries still need to be reported on the schedule (i.e. ones that have not zeroed out by control numbers). Here’s the query I ran to inspect this table and the first few lines of output:

glschedxref_v view (click to enlarge)

We can see from the output that the xref part of this view is the set of columns whose names start with dtl. These fields give us the information with which to JOIN this cross reference view (glschedulexref_v) with the gljedetail_v view. The columns whose names don’t begin with dtl are the ones used to JOIN this view with the glschedule_v view. The view is aptly named “xref” because it simply cross-references schedule metadata in the glschedule_v view with schedule detail found in the gljedetail_v view.

So, the goal is to JOIN the gljedetail_v view to the glschedulexref_v view via the dtl columns, and the glschedule_v view via the non-dtl columns. First I will present the SQL query, which is as follows:

SELECT a.cora_acct_code, a.hostitemid, a.accountnumber, g.accountdescription, a.companyid, c.journalid, c.refer, a.control, c.accountingdate, c.controltype, c.control2, c.controltype2, c.detaildescription, c.postingamount, c.postingsequence, c.postingtime, a.currentmonth, a.dateofoldestscheditem, a.schedulenumber, b.sequenceno
FROM ed.glschedule_v a
INNER JOIN ed.glschedulesetup_v s ON (a.cora_acct_code = s.cora_acct_code and a.schedulenumber::text = s.schedulenumber::text and a.accountnumber::text = s.account1::text)
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.schedulenumber::text = b.schedulenumber::text AND a.accountnumber::text = b.accountnumber::text AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlcompanyid = c.companyid AND b.dtlaccountingdate = c.accountingdate AND b.accountnumber = c.accountnumber AND b.dtljournalid = c.journalid AND b.dtlpostingsequence = c.postingsequence AND b.dtlpostingtime = c.postingtime AND b.dtlrefer = c.refer AND b.control = c.control)
INNER JOIN ed.glcoa_v g ON (a.cora_acct_code = g.cora_acct_code AND a.accountnumber = g.accountnumber)
WHERE a.currentmonth={ts '2014-11-01 12:00:00 AM'} AND a.cora_acct_code = 'XXX-A' AND s.account2 IS NULL

The SELECT statement needs to contain all the possible columns I might need in my eventual PowerPivot data model and, my primary view to be queried is glschedule_v which I aliased as “a”. In order to get the desired result set we need to setup the following JOINs with the glschedule_v view:

  • INNER JOIN with the glschedulesetup_v view (alias “s”) so we can select only single account schedules, and so we can also SELECT the s.schedulename column if we want it in our result set and in the PowerPivot data model.
  • INNER JOIN with the glschedulexref_v view (alias “b”) ON the glschedule_v view (alias “a”).
  • INNER JOIN with the gljedetail_v view (alias “c”) ON the glschedulexref_v view (alias “b”).
  • INNER JOIN with the glcoa_v view (alias “g”, the chart of accounts view) so we get an account description to include in our results and in our PowerPivot data model.

Finally, in my WHERE clause I place the following conditions:

  • I require that a.currentmonth equals November AND,
  • I require that a.cora_acct_code equals the dealership’s code AND,
  • I require that the schedule detail returned is only for single account schedules (i.e., no duplication of account number detail allowed)

Next time I will show the results in a pivot table and show you how to deal with “Balance Forward” schedules.

I hope you can see that this type of data extraction would not be possible with DAP and that MDA is really what dealerships need to gain incredible efficiencies going forward. Knowledge is power and efficiency.

Stay tuned.

Using MS Query in Excel to Access ADP’s new MDC PostgreSQL Data Server

Here at Automotive Systems Training we have been reviewing the new MDC (Managed Download Center) product that ADP is offering. It is a PostgreSQL database that mirrors most of the DAP (Data Access Program) Standard Business Data Objects. It’s nice to have a SQL data server; however, the PostgreSQL database is currently only updated once every 24 hours at night, whereas my VBA code and PowerShell scripts hitting the DAP data objects directly give me real-time data right now.

But in the interest of fairness I am using the very ancient MS Query application (circa 1997) to pull data into my Excel 2010 with PowerPivot CFO Dashboard to let me monitor all aspects of all 28 dealerships every day. And it works. Cumbersome at times, but it works.

A question for those out there who pull data into Excel: what are you using? ADO.NET in VBA code? MS Query? I have been using all of the above, plus calling PowerShell scripts from VBA. I would like to see what folks are finding to be most robust.

Anyway, yesterday I was building a PowerPivot PivotTable to view daily RO averages and I had a fit trying to pull Closed ROs that tied to the financial statement counts. Finally I got it to work (within ~1%) by joining the gljedetail table to the servicesalesclosed table. It wasn’t as straightforward as I had hoped. I am writing this to get it in my head, and down on paper, how to do these joins in MS Query. In Showcase it was easy, as the GUI allowed for multiple outer joins and made it intuitively easy to set them up. In MS Query, however, at first glance using the GUI, it tells you basically "ONLY ONE OUTER JOIN ALLOWED PER QUERY".

However, one can modify the SQL statement directly and have multiple outer joins in MS Query. If you do this though, you can’t subsequently use the GUI to edit the query. You can however continue to edit the query in MS Query using the SQL button which brings up a text editor.

The final query that worked for me (below) selects all rows in gljedetail that have accountnumber 46000, 46100 & 46003 (CP Labor Sale accounts) along with companyid 17 (Anytown Motors) and an accounting date in Sept 2013. These conditions can be seen in the WHERE clause.

The FROM clause then is saying that if the refer (document number – LEFT OUTER JOIN #1) and the cora_acct_code (accountingaccount – LEFT OUTER JOIN #2) in those rows match (in both ways) any row in the servicesalesclosed table, bring back the data from that row in the servicesalesclosed table and append it to the gljedetail row.

I have put superscripts in places in the query to explain as follows:

SELECT gljedetail_v.accountingdate, gljedetail_v.accountnumber, gljedetail_v.cora_acct_code, gljedetail_v.refer, servicesalesclosed_v.cora_acct_id, servicesalesclosed_v.accountingaccount, servicesalesclosed_v.actualhours, servicesalesclosed_v.actualhourscustomerpay, servicesalesclosed_v.actualhoursinternal, servicesalesclosed_v.actualhourswarranty, servicesalesclosed_v.apptdate, servicesalesclosed_v.apptflag, servicesalesclosed_v.appttime, servicesalesclosed_v.cashierno, servicesalesclosed_v.closedate,…etc…..,servicesalesclosed_v.year, servicesalesclosed_v.cora_acct_code FROM { oj1 ed.gljedetail_v gljedetail_v LEFT OUTER JOIN2 ed.servicesalesclosed_v servicesalesclosed_v ON3 (gljedetail_v.cora_acct_code = servicesalesclosed_v.accountingaccount and gljedetail_v.refer = servicesalesclosed_v.ronumber)4 }5 WHERE ((gljedetail_v.companyid='17') AND (gljedetail_v.accountnumber In ('46000','46100','46003')) AND (gljedetail_v.accountingdate>={ts '2013-09-01 00:00:00'} And gljedetail_v.accountingdate<={ts '2013-09-30 00:00:00'}))

  1. This little “oj” appears to be an indicator to MS Query to expect an outer join coming up.
  2. This LEFT OUTER JOIN statement tells MS Query to return all rows of gljedetail (the first table mentioned right after “oj”) that match the WHERE clause irrespective of the JOIN condition but to return joined rows from servicesalesclosed (the succeeding table mentioned) that satisfy the JOIN condition(s) that is, the condition(s) in the ON clause.
  3. The start of the ON clause mentioned above.
  4. These are the JOIN conditions. In this case there are two LEFT OUTER JOINS (which MS Query GUI doesn’t support). This query works though; you just can’t use the drag and drop etc. I would assume if you added another table to the FROM clause you could add another “And” in the JOIN condition statement of the ON clause.
  5. From curly brace to curly brace is going to be the entire FROM clause including all JOINs, INNER, OUTER, SELF and otherwise.

Any comments or suggestions, please feel free to share them. I hope this helps someone.