CDK Global is Eliminating Unencrypted Transmissions

CDK letter explaining the end of Telnet

I was glad to receive the following letter from CDK Global yesterday. Although CDK has taken other steps to protect sensitive data over the years, this is an essential move, as is the planned future move to HTTPS and SFTP.

Security Update: Eliminating Unencrypted Transmissions

Dear Valued Customer,

As we continue to improve security, we’re eliminating unencrypted transmission to and from the DMS. As a next step, we’ll be moving from the Telnet protocol to Secure Shell (SSH). We’ll soon be rolling out updates to all DMS and PC clients to use SSH. And, starting January 1, CDK will begin disabling Telnet for inbound transmissions to the DMS.

How this change will impact your dealership

This change will have no visible impact on dealers that only use CDK software and applications. CDK-certified partner applications will be compliant with the change from Telnet to SSH. However, non-CDK software that interfaces with the DMS may be affected. This includes scripting via terminal emulators such as Reflection, which CDK no longer supports. As a reminder, CDK only supports the use of BlueZone terminal emulator, which is included as a component in our software package.

Look for future communications about eliminating other unencrypted data transmissions, including changes from HTTP to HTTPS, and from FTP to SFTP. As always, we strongly recommend keeping all of your CDK software updated to the most recent versions to ensure you have the latest features and security enhancements.

If you have concerns regarding the January 1, 2017 date for transition to SSH, please contact CDK Support. Thank you for your consideration of this matter as we work to implement more secure transmission methods.

Sincerely,

CDK Global

With the new ETL tools added to Excel 2013 (and 2016), many CDK clients are enjoying the benefits of Business Intelligence (BI). These astute clients use their DMS data to improve the performance of their dealerships.

If your dealership has not made arrangements to move to CDK’s Dealer Data Access (DDA) (previously known as MDA or Managed Data Access), you really should. It is an awesome product for dealers who want to join the data revolution and manage their store using the many new self-service Business Intelligence tools.

If you are still using the unattended download features of the Reflection terminal emulator (TE), you will need to migrate to the now-supported BlueZone TE (ADP Terminal Emulator 4.1.3.1010.msi). For those of you who have Excel workbooks or Word documents that use Visual Basic for Applications to access the DMS, there is the te_vba.msi installer. Both are available in the CDK Software Library on Drive.

I don't believe that CDK has added the unattended download macro to BlueZone, but even if they haven't, you can still get at your data. If you need help with that, let me know.

Start Collecting Your Dealership’s Data So You Can Understand & Improve Your Store’s Results – Reposted

This is a re-post of my post on ADP (CDK Global) Expertise today.

What I am about to explain to you will change your life if you are at all frustrated with your store's performance.

It may sound strange but usually performance issues arise from a lack of feedback to the workgroups contained within your operations. I use the term workgroups because there are numerous groups of employees in your dealership that need feedback to perform optimally. These groups of employees (workgroups) include:

  • An Individual
  • A Team – usually front-end sales teams, but this can include techs and other employees
  • A Department
  • A Store
  • Same stores – a workgroup important for comparing Year over Year (YoY) results
  • A Division – a group of stores by franchise or geographical location
  • A Region – usually geographical
  • A Country – if you have stores in different countries

How would you like to improve the profitability of your store or stores right now?

We will need to use your data from your DMS (Dealership Management System), extracted properly for analysis and presented in a way that ranks your workgroups. The following SharePoint Online interactive report has a tab that ranks the performance of two workgroups: stores and individual F&I Managers:

The SharePoint Online interactive report ranking stores and F&I Managers (click to enlarge)

The premier business data presentation platform, SharePoint Online, is free with the Enterprise Office 365 subscriptions. With SharePoint Online you can control who is able to see each individual report based on their logon. Those who don’t have permission will not even see the report link on the site.

Feedback is extremely important if you want your employees to improve. If your employees don't get feedback, they will either:

  • Get the data on their own or ask for it, or
  • Just maintain the status quo (under-performing), and,
  • In either case, eventually leave your employ.

The sad part is, the good ones leave (because they are not recognized) along with the not-so-good (because they are not motivated).

We need to take advantage of a person’s or workgroup’s natural competitive instincts!

Critical to reaping the performance improvement that feedback can create is ranking the workgroups. Feedback against objectives can work too, but that doesn't get the competitive juices flowing. We have to rank them!

In my experience, when you rank workgroups they all rise, even the bottom individual or workgroup. That's because everyone wants (on some level) to do well.

There are three steps to making this happen:

  1. Automate the Collection of the Data.
  2. Train your CFO and Controllers on how to use the latest tools to analyze the data efficiently.
  3. Train your CFO and Controllers on how to create reports with the latest tools and publish them to SharePoint and Office 365.

I have a lot of experience with CDK, InfinitiNet and DealerTrack, so that is what I will stick with here. You can access your data on these platforms, but I am not so sure about Reynolds anymore.

Step 1:

To automate the collection of your data, all you really need is an ODBC connection, some knowledge of the DMS data model, and a subscription to Microsoft Azure. You would then ask someone like me to write a PowerShell script that runs on your server, extracts the data from the DMS at least daily, and moves it to a SQL Azure database (or an on-premise one).
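
To make that concrete, here is a minimal sketch of what such a nightly extract could look like. Everything named in it (the "DMS" DSN, the Azure server and credentials, the dbo.GLDetail staging table) is an illustrative placeholder, not an actual CDK or Azure configuration:

# Nightly extract: pull DMS rows over ODBC, then bulk-load them into SQL Azure.
# All connection details and object names below are placeholders for illustration only.
$odbc = New-Object System.Data.Odbc.OdbcConnection("DSN=DMS")
$odbc.Open()
$adapter = New-Object System.Data.Odbc.OdbcDataAdapter("SELECT * FROM ed.gljedetail_v", $odbc)
$table = New-Object System.Data.DataTable
[void]$adapter.Fill($table)
$odbc.Close()
# Land the rows in the Azure SQL (or on-premise) staging table.
$sql = New-Object System.Data.SqlClient.SqlConnection("Server=tcp:yourserver.database.windows.net;Database=DealerBI;User ID=loader;Password=xxxxx;Encrypt=True")
$sql.Open()
$bulk = New-Object System.Data.SqlClient.SqlBulkCopy($sql)
$bulk.DestinationTableName = "dbo.GLDetail"
$bulk.WriteToServer($table)
$sql.Close()

Scheduled with Windows Task Scheduler, a script along these lines satisfies the "at least daily" requirement.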

You can skip this step if you are on CDK and you have signed up for their wonderful MDA (Managed Data Access) or DDA (Dealer Data Access) product. Those products serve up daily refreshes of data on a PostgreSQL database on your CDK server already.

Step 2:

CFOs, Controllers, and even advanced Business Managers are the ones to analyze the data and create the reports. They need training, and I will be setting up some courses here in Las Vegas to provide it. I don't advocate having IT people do this because they just don't have the knowledge of how these numbers are generated, where they come from, and how they inter-relate. So the financial people need training in:

  • The DMS data model i.e., where the data resides and what transactions put it there.
  • How to use Excel 2013 and 2016 with PowerPivot and PowerQuery and other tools to create interactive reports so they refresh automatically when the underlying data is updated. You don’t want a set of gorgeous reports that require your people to manually update them. We will let SharePoint Online handle that.

Step 3:

Because this data is confidential, and because we need to make sure it is correct, this step also falls to the financial side of the business. I recommend it be handled by the CFO, Controller and/or Business Manager. In larger dealer groups the CFO or Controller can hire someone with this expertise, but that person needs to understand car dealership transactions and accounting. There are two kinds of reporting in the dealership (I teach all the competencies needed for both):

  1. Permanent periodic reports. These include rankings, Over-Age Inventory, Heat Sheets, A/R Aging, CIT, Incentives, Upcoming Lease Termination reports, etc. These need to be built once, published to their own SharePoint page with proper permissions, and refreshed automatically.
  2. Ad-hoc reports. These are reports you create one time, or on the fly, to do analytics prior to making a major business decision. In either case, properly trained financial people need to prepare them.

I will be sending out an email announcing a seminar in the spring to lay this out for Dealers, CFOs and Controllers in detail. If you want to be on the list, you can email me at jdonnelly at jdonnelly dot com or give me a call at 702.498.8777.

Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part III

The last post, entitled Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part II, wrapped up derivation of the SQL statement that we need to use to create the perfect workbook for presenting accounting schedules in Excel using PowerPivot. Continuing to fine-tune our project, I made some changes to Part I and Part II, so be sure to copy the new SQL statements if you have been following along.

Today we create the PivotTable in a workbook; next time, in Part IV, I will show you in a video how to customize and use it effectively.

The final version of the query (we created last time) we will use is as follows:

SELECT a.cora_acct_code, a.hostitemid, a.accountnumber, g.accountdescription, a.companyid, c.journalid, c.refer, a.control, c.accountingdate, c.controltype, c.control2, c.controltype2, c.detaildescription, c.postingamount, c.postingsequence, c.postingtime, a.currentmonth, a.dateofoldestscheditem, a.schedulenumber, b.sequenceno
FROM ed.glschedule_v a
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.schedulenumber::text = b.schedulenumber::text AND a.accountnumber::text = b.accountnumber::text AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlcompanyid = c.companyid AND b.dtlaccountingdate = c.accountingdate AND b.accountnumber = c.accountnumber AND b.dtljournalid = c.journalid AND b.dtlpostingsequence = c.postingsequence AND b.dtlpostingtime = c.postingtime AND b.dtlrefer = c.refer AND b.control = c.control)
INNER JOIN ed.glcoa_v g ON (a.cora_acct_code = g.cora_acct_code AND a.accountnumber = g.accountnumber)
WHERE a.currentmonth={ts '2014-12-01 12:00:00 AM'} AND a.cora_acct_code In ('XXXX-A','YYYY-A','ZZZZ-A')
UNION
SELECT d.cora_acct_code, d.hostitemid, d.accountnumber, h.accountdescription, d.companyid, e.journalid, e.refer, d.control, e.accountingdate, e.controltype, e.control2, e.controltype2, e.detaildescription, e.postingamount, e.postingsequence, e.postingtime, d.currentmonth, d.dateofoldestscheditem, d.schedulenumber, f.sequenceno 
FROM ed.glschedule_v d
INNER JOIN ed.glschedulexrefbalfwd_v f ON (d.cora_acct_code = f.cora_acct_code AND d.schedulenumber::text = f.schedulenumber::text AND d.accountnumber::text = f.accountnumber::text AND d.control::text = f.control::text AND d.currentmonth = f.currentmonth)
INNER JOIN ed.glschedulebalfwddetail_v e ON (f.dtlcompanyid = e.companyid AND f.dtlaccountingdate = e.accountingdate AND f.accountnumber = e.accountnumber AND f.dtljournalid = e.journalid AND f.dtlpostingsequence = e.postingsequence AND f.dtlpostingtime = e.postingtime AND f.dtlrefer = e.refer AND f.schedulenumber::text = e.schedulenumber::text AND f.currentmonth = e.currentmonth AND f.control = e.control)
INNER JOIN ed.glcoa_v h ON (d.cora_acct_code = h.cora_acct_code AND d.accountnumber = h.accountnumber)
WHERE d.currentmonth={ts '2014-12-01 12:00:00 AM'} AND d.cora_acct_code In ('XXXX-A','YYYY-A','ZZZZ-A')

Notice that I do not include a JOIN with the glschedulesetup_v view because I know that the dealerships whose data I am using in this example do not have single account schedules for all the accounts I may be interested in viewing.

In order to work through this exercise you must have PowerPivot enabled in Excel 2013. It may work with PowerPivot in Excel 2010, but I have not tried it. Once you have opened Excel and a blank workbook, click on the POWERPIVOT tab as shown below:

Excel 2013 Ribbon POWERPIVOT tab
Excel 2013 Ribbon with POWERPIVOT tab selected (click to enlarge)

In the Data Model section of the POWERPIVOT ribbon, click on Manage, which will open up POWERPIVOT as follows:

PowerPivot Data Model Manage Ribbon
PowerPivot Data Model Manage Ribbon (click to enlarge)

So in order to add a table to the data model, we will use an existing connection to the Managed Data Access server located at your dealership's CDK Global Drive IP address. If you need to create a connection, CDK will be happy to walk you through installing the npgsql OLEDB or pgsql ODBC drivers, etc. In this post I am assuming the adp_report database already has a connection set up on your PC.
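
If you want to prove that connection works before going into Excel, a quick smoke test from PowerShell is one way to do it. This sketch assumes an ODBC DSN named adp_report; use whatever DSN name you and CDK configured:

# Smoke-test the ODBC connection PowerPivot will use; the DSN name is an assumption.
$conn = New-Object System.Data.Odbc.OdbcConnection("DSN=adp_report")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT COUNT(*) FROM ed.glschedule_v"   # any small query proves the link works
$cmd.ExecuteScalar()
$conn.Close()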

To add a table to your data model containing the schedule detail we need, click on Existing Connections in the Get External Data section of the ribbon presented in the Home tab of POWERPIVOT. It will look like this:

"Select an Existing Connection" dialog
The “Select an Existing Connection” dialog box (click to enlarge)

Make sure the adp_report connection has been highlighted in blue and click on the Open button at the bottom which will take you to the first page of the Table Import Wizard like so:

Choose How to Import
Table Import Wizard (click to enlarge)

You could explore all the views available on adp_report by selecting the first item, but since we already know what we want and have the SQL statement built, we will select the option to “Write a query that will specify the table to import,” which takes you to the next dialog of the Table Import Wizard as follows:

Specify a SQL Query
Specify a SQL Query (click to enlarge)

Next, change the query name, paste our query into the large box, and click on Validate; you will see that the query validates like so:

SQL Query Validated
Specify a SQL Query – Validated (click to enlarge)

You then click on the Finish button and the query runs and the table is imported like so:

Importing Data
Importing Data (click to enlarge)

The query ran for approximately 20 minutes for two dealerships, one very large and one medium-sized. When it is done, the Table Import Wizard will report Success and tell you the total number of rows imported (in this case, 108,071):

Success
Success (click to enlarge)

So now you will see the schedule detail show up in a tab in PowerPivot. The tab will have the same name as the query, which in this case is Schedule_Query_40. I have renamed it ScheduleDetail (you can rename tables by right-clicking on the tab and selecting Rename). Next you will need to click on PivotTable (which PowerPivot will then create in Excel) on the Home ribbon in PowerPivot like so:

Create PivotTable in PowerPivot
Create PivotTable selection in PowerPivot (click to enlarge)

Excel will then exit the Manage window and open the workbook window and will ask you where you want to place the new PivotTable:

Create PivotTable
Create PivotTable (click to enlarge)

Here you should place the PivotTable on a new sheet as we will build an interactive environment around the PivotTable using slicers. After you click OK the sheet with the empty PivotTable on it should look like this:

Empty PivotTable
Empty PivotTable (click to enlarge)

Now that we have a PivotTable there are many useful ways to build your interactive accounting schedules. The best way to show you the possibilities is to continue the process in Part IV utilizing video.

Stay tuned.

Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part II

The last post on Generating ADP (CDK Global) Accounting Schedules in Excel PowerPivot – Part I had a lot of information in it and I spent some time this week working on the perfect workbook setup to present the schedules to the end user. As a result, I made some changes to Part I relative to the use of the glschedulesetup_v view.

To begin with, the first dealership whose data I worked with violated one of my assumptions: that all scheduled accounts would have at least one schedule set up with only that account on it. That is, the account would appear in a schedule by itself. Since this wasn't the case, I quickly made some changes to the post.

I left the JOIN of the glschedulesetup_v view in the SQL query with the caveat that:

We are making a large assumption here though; that the dealership has single account schedules for all scheduled accounts. If not, we won’t want to JOIN the glschedulesetup_v view at all. In that case, we will deal with duplications with our slicers in the resulting pivot table.

Well, for my work this week, I ended up eliminating the JOIN of the glschedulesetup_v view entirely because it didn't help and, as I will show you, restricting the pivot table to one set of schedule transactions per account is fairly straightforward with the use of slicers.

Another change I made to the original post was an attempt to optimize the query. Using the EXPLAIN command in PostgreSQL, I determined that the cost difference between running the query with a LEFT OUTER JOIN versus an INNER JOIN between the glschedulexref_v view and the gljedetail_v view was significant. I thought about it a bit and realized that, by definition, there had to be a one-to-one relationship between the cross-reference entry and the GL detail entry in order for the report to actually be a schedule. We will test this, however, to make sure.
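
If you want to reproduce the comparison yourself, here is a hedged sketch of running EXPLAIN over the same ODBC connection from PowerShell. The DSN name is a placeholder and the join conditions are abbreviated from the full query, so treat it as a pattern, not the exact test I ran:

# Ask PostgreSQL for the estimated cost of the INNER JOIN version of the query.
# Swap INNER JOIN for LEFT OUTER JOIN in the text to see the difference described above.
$conn = New-Object System.Data.Odbc.OdbcConnection("DSN=adp_report")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = @"
EXPLAIN SELECT a.accountnumber, c.postingamount
FROM ed.glschedule_v a
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlrefer = c.refer AND b.control = c.control)
"@
$reader = $cmd.ExecuteReader()
while ($reader.Read()) { $reader.GetString(0) }   # each row returned is one line of the plan
$conn.Close()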

So for these reasons, the query implementing the accounting schedule data connection on PowerPivot was reduced to the following:

SELECT a.cora_acct_code, a.hostitemid, a.accountnumber, g.accountdescription, a.companyid, c.journalid, c.refer, a.control, c.accountingdate, c.controltype, c.control2, c.controltype2, c.detaildescription, c.postingamount, c.postingsequence, c.postingtime, a.currentmonth, a.dateofoldestscheditem, a.schedulenumber, b.sequenceno
FROM ed.glschedule_v a
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.schedulenumber::text = b.schedulenumber::text AND a.accountnumber::text = b.accountnumber::text AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlcompanyid = c.companyid AND b.dtlaccountingdate = c.accountingdate AND b.accountnumber = c.accountnumber AND b.dtljournalid = c.journalid AND b.dtlpostingsequence = c.postingsequence AND b.dtlpostingtime = c.postingtime AND b.dtlrefer = c.refer AND b.control = c.control)
INNER JOIN ed.glcoa_v g ON (a.cora_acct_code = g.cora_acct_code AND a.accountnumber = g.accountnumber)
WHERE a.currentmonth={ts '2014-11-01 12:00:00 AM'} AND a.cora_acct_code In ('XXX-A')

In my experimentation this week I was pleasantly surprised at how wonderful this solution really is. With PowerPivot we can join all kinds of data tables to our schedules to augment the information presented with names, phone numbers on receivable schedules, employees, vehicle details, etc. In my final workbook I added queries to pull A/R customer data, employee data, vendor data, car deal data and vehicle inventory data. I made special sheets containing pivot tables for vehicle-inventory-related accounts and car-deal-related accounts. It made the accounting schedule review process so much more productive.

We can now move on to the second part of this series where we expand the query to handle “Balance Forward” schedules.

In order to accommodate “Balance Forward” schedules we have to add all the “Balance Forward” detail to the schedule detail table created using the above MDA query. This “Balance Forward” detail is maintained in these two views:

  • glschedulexrefbalfwd_v
  • glschedulebalfwddetail_v

which are analogous to these two views for “Detail Forward” schedules:

  • glschedulexref_v
  • gljedetail_v

To do this, we simply UNION it in. We copy the SELECT statement we made in the last post (above), paste it again at the end of the first one, and insert the word UNION between them. We then change the “Detail Forward” views in the SELECT statement after the UNION to the “Balance Forward” views and change the aliases so they don't conflict (pardon the reversal of e and f). The result is:

SELECT a.cora_acct_code, a.hostitemid, a.accountnumber, g.accountdescription, a.companyid, c.journalid, c.refer, a.control, c.accountingdate, c.controltype, c.control2, c.controltype2, c.detaildescription, c.postingamount, c.postingsequence, c.postingtime, a.currentmonth, a.dateofoldestscheditem, a.schedulenumber, b.sequenceno
FROM ed.glschedule_v a
INNER JOIN ed.glschedulexref_v b ON (a.cora_acct_code = b.cora_acct_code AND a.schedulenumber::text = b.schedulenumber::text AND a.accountnumber::text = b.accountnumber::text AND a.control::text = b.control::text AND a.currentmonth = b.currentmonth)
INNER JOIN ed.gljedetail_v c ON (b.dtlcompanyid = c.companyid AND b.dtlaccountingdate = c.accountingdate AND b.accountnumber = c.accountnumber AND b.dtljournalid = c.journalid AND b.dtlpostingsequence = c.postingsequence AND b.dtlpostingtime = c.postingtime AND b.dtlrefer = c.refer AND b.control = c.control)
INNER JOIN ed.glcoa_v g ON (a.cora_acct_code = g.cora_acct_code AND a.accountnumber = g.accountnumber)
WHERE a.currentmonth={ts '2014-11-01 12:00:00 AM'} AND a.cora_acct_code  In ('XXX-A')
UNION
SELECT d.cora_acct_code, d.hostitemid, d.accountnumber, h.accountdescription, d.companyid, e.journalid, e.refer, d.control, e.accountingdate, e.controltype, e.control2, e.controltype2, e.detaildescription, e.postingamount, e.postingsequence, e.postingtime, d.currentmonth, d.dateofoldestscheditem, d.schedulenumber, f.sequenceno 
FROM ed.glschedule_v d
INNER JOIN ed.glschedulexrefbalfwd_v f ON (d.cora_acct_code = f.cora_acct_code AND d.schedulenumber::text = f.schedulenumber::text AND d.accountnumber::text = f.accountnumber::text AND d.control::text = f.control::text AND d.currentmonth = f.currentmonth)
INNER JOIN ed.glschedulebalfwddetail_v e ON (f.dtlcompanyid = e.companyid AND f.dtlaccountingdate = e.accountingdate AND f.accountnumber = e.accountnumber AND f.dtljournalid = e.journalid AND f.dtlpostingsequence = e.postingsequence AND f.dtlpostingtime = e.postingtime AND f.dtlrefer = e.refer AND f.schedulenumber::text = e.schedulenumber::text AND f.currentmonth = e.currentmonth AND f.control = e.control)
INNER JOIN ed.glcoa_v h ON (d.cora_acct_code = h.cora_acct_code AND d.accountnumber = h.accountnumber)
WHERE d.currentmonth={ts '2014-11-01 12:00:00 AM'} AND d.cora_acct_code In ('XXX-A')

It should be noted at this point that this query takes a considerable amount of time. I needed to review the accounting schedules of 7 dealerships this week, so in my WHERE clause I used a.cora_acct_code In ('XXX1-A','XXX2-A','XXX3-A','XXX4-A','XXX5-A','XXX6-A','XXX7-A'). Running the query on the seven dealerships took about an hour; that's quick compared to the queries I tested earlier using LEFT OUTER JOIN. For the 7 stores it returned 195,000 rows.

In the next post in the series I will add some other tables to the PowerPivot data model and we will build some pivot tables to present the schedules.

Stay tuned.

Conversation with the RCI Desk at Reynolds and Reynolds

I recently had a request from a company to find out the current status of Reynolds’ approach to 3rd-party data access. Basically I found out that Reynolds decided to move from a telnet client to what this gentleman said was a “proprietary connection”.

In a nutshell, ERALink is out and ERAccess is in, and apparently has been for some time (I haven't had a Reynolds client in a while). ERALink, I believe, was really only intended for remote access (it is a telnet client), and the nice gentleman at Reynolds told me that everyone who accesses ERA XT and ERA XTD systems now uses ERAccess.

From the outset, the Reynolds person knew that I develop custom Intranet (SharePoint and other) solutions for automobile dealer groups. He knew I only needed to pull data and that there would never be any write-back to the DMS. Yet he steered me away from the RCI program and said that ERAccess was the way to go.

I’ve used ERALink and their Query Builder software extensively in the past and I really hoped that it wasn’t their best solution for a custom programmer like me, so I pushed on.

I asked what the RCI program provided to the 3rd-party vendor; he said that it was really for the guys that needed bi-directional read-write continuous access to the DMS. I said, “cool, but what do you offer? An ADO.Net data provider, XML Engine, what?” He said, “a set of XML APIs that will return the entire record associated with a data object (say a car or transaction) with no filtering ability.” He was really selling me on it NOT being for me.

So I said “ok, I send you a SOAP-like request and you send back some XML.” He said “yes”. I said “sounds like ADP’s DAP program” and I mentioned the Integration Points ADP has defined and I asked if they had a similar deal. He said “yes”.

So then I said, “well, I really would love to have an easier time of it than using ERAccess and Query Builder, so how much would that be?” Of course he wouldn't say, but he did say that I, as the developer, would be assessed a one-time up-front certification fee plus a per-dealer installation fee and monthly fee. I said, “what's the certification fee, about $10,000?” He said, “that would be the bottom end, it most likely would be more.”

I told him I was just a little guy developing important custom solutions for Auto Dealers. He said use ERAccess and Query Builder. I yawned.

Interestingly, though, he did say that there were 3rd-party vendors writing back to the DMS without Reynolds certification (i.e., hacking), and that surprised me. I also learned that Reynolds is still using the PICK DBMS, which is now called IBM UniVerse. Sheepishly he confided that UniVerse was state of the art in 1984, and I said, “that's alright, ADP uses the same thing!”

Until next time….

Accessing ADP’s XML Engine with PowerShell

Creating my first script to explore the various Data Objects in their entirety was, I felt, a success, and a great experience with PowerShell's XML capability. It also taught me quite a bit about .Net's XML capabilities. I reviewed a few objects, by no means all of them, because it gets real dry even for a data hound like me: ServiceSalesClosed, ServiceSalesOpen, ServiceSalesTechHoursOpen, ServiceSalesMLSOpen, ServiceSalesPartsOpen, ServiceSalesDetailsOpen, ServiceSalesDetailsClosed, GL Detail, Customer, Vehicle, and Employee.

During this process I had the need one day to pull some data from the General Ledger Detail. It should have been easy using my script, but I ran into problems because I needed to filter the results; in other words, I had to use the “Selection” tag in the XML Request. The DAP documentation states that the Selection tag accepts “ENGLISH” statements. ENGLISH seems to be the query language of the PICK DBMS, which has been improved or upgraded by IBM (I believe) and is called RetrieVe, which I think is part of UniVerse (which I think is what IBM calls PICK these days). Anyway, ADP uses the old ENGLISH, it seems, not RetrieVe. ENGLISH, as I mentioned in an earlier post, has limited operators, no function-building capability, and no way to join files. PICK has multivalued fields, which is a challenge for me. An ENGLISH statement in the selection clause allows for unlimited ANDs and ORs. Here is my first selection clause, used on the GL Detail object:


AND WITH ACCTG.DATE.EXT BETWEEN “08/01/10” AND “08/31/10”

The problem with ADP DAP selection clause development is that there is no good source for the Property (Field) Names (called DICT names in ADP speak). The help desk advises you to use an Object exploration tool to look into the “Desc” property of the file on which the object depends and, if the developer described things properly, the DICT name would be there. The problem is that it is not always clear what file the object gathers its information from. Needless to say I invested a significant amount of time searching for the correct DICT name for the property I wanted to filter on. I now have an Excel spreadsheet that houses all my successful ENGLISH statements and what they do for which object.

The above ENGLISH statement says to filter the XML Response to only include GL entries for Account 205 (which would be Contracts in Transit) with an Accounting Date in August 2010. I learned later, though, that the BETWEEN operator is not inclusive, so I received no 08/01/10 or 08/31/10 entries! To capture the month ends, the range has to be widened to something like BETWEEN "07/31/10" AND "09/01/10".

It was very frustrating trying to build successful ENGLISH statements. There are so many wonderful fields for which I could not find a working DICT name that I decided I would need to filter the response after it is returned from the server. This seems inefficient as hell, but until I figure out a better way to get at these DICT names, and in order to have the most robust data extraction system possible, I need to filter the response… with XPath… yikes!

As I mentioned before I have never delved this far into XML processing so I reviewed everything I could find on the web about XPath and PowerShell and tested and experimented.

First I developed a script without using Select-Xml, and then one using it. In the first script, which I will develop in this post, I used an XPathNavigator object and passed in XPath query strings. Here it is:

$Url = "http://xxx.yyy.zz.aa:pppp"
$webclient = new-object System.Net.WebClient 
$soapMessage = @" 
<?xml version="1.0" encoding="UTF-8"?> 
<eoapi:Request xmlns:eoapi="http://www.adp.com/xxxxxx" version="1.0" ShowUnknownElementErrors="Yes">
   <Session> 
   <Connection> 
      <Product>Product</Product> 
      <Server>Server</Server> 
      <Password>password</Password> 
      <Pooling>No</Pooling> 
   </Connection> 
   <GetSet TimeoutSeconds="1800"> 
    <Object>Employee</Object>
    <Group>DealershipCode</Group> 
    <Selection></Selection> 
    <MultiFlag>E</MultiFlag> 
   </GetSet> 
  </Session> 
</eoapi:Request> 
"@ 
# Upload the Soap payload ($SoapMessage) to the URL:PORT of the ADP XML Engine ($Url)
$result = ([XML]$webclient.UploadString($Url, $soapMessage))
# Create an XPathDocument by eliminating the duplicate Reply tags: start the XML string at the 2nd Reply tag (the one with the group attribute)
$xpath = [System.Xml.XPath.XPathDocument][System.IO.TextReader][System.IO.StringReader]($result.reply.session.get_InnerXML() | out-string | %{$_.substring($_.IndexOf('<Reply'))})
# Create a navigator object to run XPath queries against the document
$navigator = $xpath.CreateNavigator()
# Queries I tried, with all but one commented out; DateTime filters are handled later in the pipeline.
# These XPath strings are representative reconstructions; adjust the code value to match your data.
# $query = "/Reply/Employee[DateAdded > '2010-04-01']"   # does not work: XPath 1.0 has no date type
$query = "/Reply/Employee[CustOrCompanyCode='P']"        # employees coded as persons, not businesses
# Run the query and store the OuterXML property of the results in $qryResult
$qryResult = ($navigator.Select($query) | %{$_.OuterXml})
#Add Reply Tag to query result to create Root
$qryResultXML = [xml]('<Reply>'+$qryResult+'</Reply>')
#Date conditions are handled in the pipeline (until I figure out how to convert in the XPath expression) then send required fields to a CSV file.
$qryResultXML.reply.Employee | ?{[dateTime]$_.DateAdded -gt [dateTime]"2010-04-01"} | Select HostItemID, CustNo, NameCode, AccountingAccount, DateAdded, LastUpdate, CustOrCompanyCode, FirstName, MiddleName, LastName, Name1, SSN, CreditLimit, DealerField1, DealerField2, DealerField3, DealerField4, DealerField5, DealerField6, DealerField7, DealerField8, DealerField9, DealerField10 | Export-CSV "C:\Users\John Donnelly\Desktop\Employee.csv" -NoTypeInformation
#Open Excel
$excel = new-object -comobject Excel.Application
#Make Excel visible
$excel.visible = $true
#Open the CSV just created
$workbook = $excel.Workbooks.open("C:\Users\John Donnelly\Desktop\Employee.csv")

Step by step:

In this instance I am perusing the Employee object. The field names I use are fairly common. The first several lines are the same as in the previous post. I use a WebClient object to send a request to the XML Engine and store the results in the $result variable.

I then create an XPathDocument object and pass in the $result.reply.session InnerXML (slightly modified) using the object's get_InnerXML() method.

I then invoke the CreateNavigator() method on the XPathDocument object to create a navigator object, which I store as $navigator.

The fun part was writing a few queries, which I have listed with all but one commented out. I could not figure out how to get DateTime comparisons to work, so I put all DateTime filters in the pipeline later. In this script the uncommented query pulls all employees who are coded as persons, not businesses (interesting distinction, eh?). The commented-out date query indicates that I couldn't figure out how to write that one. Anyone care to comment?
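
To tentatively answer my own question: XPath 1.0 has no date type, so one possible trick, assuming DateAdded comes back as a zero-padded yyyy-mm-dd string (which I have not verified against the DAP output), is to strip the separators and compare numerically:

# Hypothetical XPath 1.0 date filter: '2010-04-01' becomes 20100401 and compares as a number.
# If the dates are not zero-padded ISO strings, keep the date filter in the pipeline instead.
$query = "/Reply/Employee[number(translate(DateAdded, '-', '')) > 20100401]"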

I then invoke the Select() method of the $navigator object and pass in the query, storing the OuterXML of the result in $qryResult. Then, in order to have an XML object to put in the pipeline, I need to give it a root tag, in this case a Reply tag.

Next I select the Employee nodes like so: $qryResultXML.reply.Employee. I pipe that to a Where-Object filter to drop the DateAdded values we don't want, then Select the fields we want and Export-Csv.

This took me about 5 hours to put together because of the dearth of PowerShell-related discussions about XML and my lack of coding experience. I really want to thank Dan at PluralSight for his write-ups on XML and PowerShell; they were wonderfully specific and helped me through this.

This script is something of a kluge, but it works and will serve as the basis for more refined queries in the future. Thanks for listening, and if you see anything strange, please comment and help out those who might rely on my work in the future.

ADP DAP, PICK, and PowerShell

ADP's DAP program allows the developer to access data objects on the UNIX server. There is a standard set of Business Data Objects, each comprising groups of related data fields. For example, in the Service Department there are objects called ServiceSalesClosed and ServiceSalesOpen, the former holding data for all the Service Transactions that are finished (i.e., closed), while the “Open” object holds service sales still in process. Inside the objects there are up to 400+ fields of data.

My plan was to jump in and understand the XML Engine front end that ADP created and see what it could do. I decided that PowerShell would be the best way to experiment and learn.

I am a beginner at PowerShell, especially as it relates to XML, so any reader who cares to offer helpful hints and/or critique, please do. Also feel free to correct my verbiage; I know that I am not always using the terms root, node, child, etc. correctly. Also please note the backtick to indicate a line continuation. I have not found a good code inserter for PowerShell scripts for Windows Live Writer yet.

My first script was designed to set up the request as specified by ADP and use the .Net WebClient object to submit it. The request has a tag for Selection filtering, but I really wanted to see all there was, so I ignored that parameter. Voila, I received a response!

Here’s the script:

$Url = "http://xxx.yyy.zz.aa:pppp"
$webclient = new-object System.Net.WebClient
$soapMessage = @"
<?xml version="1.0" encoding="UTF-8"?>
<eoapi:Request xmlns:eoapi="http://www.adp.com/xxxxxx" version="1.0" ShowUnknownElementErrors="Yes">
   <Session>
   <Connection>
      <Product>Product</Product>
      <Server>Server</Server>
      <Password>password</Password>
      <Pooling>No</Pooling>
   </Connection>
   <GetSet TimeoutSeconds="1800">
    <Object>TheDataObjectName</Object>
    <Group>DealershipCode</Group>
    <Selection></Selection>
    <MultiFlag>E</MultiFlag>
   </GetSet>
  </Session>
</eoapi:Request>
"@
# Upload the Soap payload ($SoapMessage) to the URL:PORT of the ADP XML Engine ($Url)
$result = ([xml]$webclient.UploadString($Url, $soapMessage))
# Eliminate the duplicate Reply tags by starting the XML string at the 2nd Reply tag (the one with the group attribute), then convert that string to Xml ([xml]) and store in $xmldata
$xmldata = [xml]($result.reply.session.get_InnerXML() | out-string | %{$_.substring($_.IndexOf('<Reply'))})
$xmldata.reply.ServiceSalesClosed  |  Export-Csv "C:\Users\John Donnelly\Desktop\Output.csv" -NoTypeInformation
$excel = new-object -comobject Excel.Application
$excel.visible = $true
$workbook = $excel.Workbooks.open("C:\Users\John Donnelly\Desktop\Output.csv")

Explanation:

I set the $Url variable for the address of the XML Engine front end with IP and Port as follows:

$Url = "http://xxx.yyy.zz.aa:pppp"

and I then instantiate a WebClient object to handle the request response as follows:

$webclient = new-object System.Net.WebClient

I then setup the request payload as a variable called $soapmessage per the ADP DAP specifications (encoding, namespace etc):

$soapMessage = @"
<?xml version="1.0" encoding="UTF-8"?>
<eoapi:Request xmlns:eoapi="http://www.adp.com/xxxxxx" version="1.0" ShowUnknownElementErrors="Yes">
   <Session>
   <Connection>
      <Product>Product</Product>
      <Server>Server</Server>
      <Password>password</Password>
      <Pooling>No</Pooling>
   </Connection>
   <GetSet TimeoutSeconds="1800">
    <Object>TheDataObjectName</Object>
    <Group>DealershipCode</Group>
    <Selection></Selection>
    <MultiFlag>E</MultiFlag>
   </GetSet>
  </Session>
</eoapi:Request>
"@

We invoke the WebClient UploadString method and supply the two parameters, URL and payload. The result is returned and stored in the variable $result as follows:

$result = ([xml]$webclient.UploadString($Url, $soapMessage))

The reply comes back with a header, then a root Reply node with child nodes for the connection error code and message. If there is a connection error, you will receive error information here. If the connection is successful, the response will include a Session node. If the Selection parameter is successful (in our case there isn't one, so it was successful), you will receive a response as follows:

<Session>
  <ErrorCode>0</ErrorCode>
  <ErrorMessage />
  <Reply group="TheDealershipCode" object="TheDataObjectName" type="GetSet">
    <TheDataObjectName>blah blah blah</TheDataObjectName>
    <TheDataObjectName>blah2 blah2 blah2</TheDataObjectName>
    <ErrorCode>0</ErrorCode>
    <ErrorMessage />
  </Reply>
</Session>

If the Selection parameter generates an error you will see error information in the ErrorCode and ErrorMessage nodes in the first Reply node.

In my initial evaluation I didn't need any data other than the TheDataObjectName nodes. The $result variable is typed as XmlDocument, so we can use dot notation to get at the inner Xml string. I created a new variable ($xmldata) by using the get_InnerXml() method to pull the Xml as a string from the Session node (exclusive of the Session tags) as follows:

$xmldata = [xml]$result.reply.session.get_InnerXML()

Another way to say this (probably in my mind only) is that we drilled down to the session level (node) and grabbed the XML as a string. We then pipe it out:

| out-string | %{$_.substring($_.IndexOf('<Reply'))}

and we then create a substring of the Session InnerXML starting at the Reply tag, which happens to have a group attribute that does not exist in the Reply tag we don't want.

At this point $xmldata is a string which we strongly typed as [xml].

The string is checked for well-formedness when typed so we know we have good XmlDocument object going forward.
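
A trivial demonstration of that check:

# Casting a string to [xml] parses it immediately; malformed markup throws right here.
$good = [xml]'<Reply><Row>ok</Row></Reply>'   # succeeds and yields an XmlDocument
$bad  = [xml]'<Reply><Row>oops</Reply>'       # fails: the cast throws because the tags don't match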

If we take a look at the InnerXML for this XmlDocument object it will look like this:


<Reply group="TheDealershipCode" object="TheDataObjectName" type="GetSet">
  <TheDataObjectName>blah blah blah</TheDataObjectName>
  <TheDataObjectName>blah2 blah2 blah2</TheDataObjectName>
</Reply>

I then use dot notation to select the TheDataObjectName nodes and pipe them to the Export-Csv cmdlet, saving to a file on my desktop with no type information:

$xmldata.reply.ServiceSalesClosed  |  Export-Csv "C:\Users\John Donnelly\Desktop\Output.csv" -NoTypeInformation

I then open the file in Excel to take a look.

If anyone has better suggestions on the script, or comments, please share.