We have already covered loading time & deduction records from an S3 environment to the cloud, so let's dig into the flow we built to automate this. Essentially, the flow is parameter-driven: using each parameter, it extracts the corresponding data and produces a file. The key piece is the SQL, which is what RPI built to automate the whole extract and populate those three record types in the system. For each parameter, we extract the file, move it to the SFTP server for CloudSuite, and then use it again later to complete a DB import.
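As a rough illustration only, here is a minimal Python sketch of what a parameter-driven extract-and-transfer step like this might look like. It is not the actual RPI flow: the parameter values, SQL, DSN, SFTP host, and credentials are all hypothetical placeholders, assuming an ODBC connection to the S3 database and SFTP access to the CloudSuite server.

```python
import csv
import pyodbc    # assumes an ODBC DSN pointing at the Lawson S3 database
import paramiko  # assumes SSH/SFTP access to the CloudSuite SFTP server

# Hypothetical parameters: one per record type / extract batch.
PARAMETERS = ["TIME_2023_01", "DEDUCTION_2023_01"]

# Placeholder query standing in for the extract SQL RPI built.
EXTRACT_SQL = "SELECT * FROM PAYROLL_HISTORY WHERE BATCH_KEY = ?"

def extract_to_csv(conn, parameter: str, path: str) -> int:
    """Run the extract SQL for one parameter and write the rows to a CSV file."""
    cursor = conn.cursor()
    cursor.execute(EXTRACT_SQL, parameter)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cursor.description])  # header row
        rows = cursor.fetchall()
        writer.writerows(rows)
    return len(rows)

def upload_to_sftp(local_path: str, remote_path: str) -> None:
    """Push the extract file to the CloudSuite SFTP server for the later DB import."""
    transport = paramiko.Transport(("sftp.example.com", 22))  # hypothetical host
    transport.connect(username="ghr_load", password="...")    # use key auth in practice
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        sftp.put(local_path, remote_path)
    finally:
        sftp.close()
        transport.close()

conn = pyodbc.connect("DSN=LawsonS3")  # hypothetical DSN name
for param in PARAMETERS:
    local_file = f"{param}.csv"
    count = extract_to_csv(conn, param, local_file)
    print(f"{param}: extracted {count} records")
    upload_to_sftp(local_file, f"/inbound/{local_file}")
```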
This interface also completes a record count as it runs. A record count is built for each month of data extracted, which gives you audit control: when you extract records and load them into GHR, you can verify that you haven't lost any records. That, essentially, is a high-level look at what our flow does.
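For the audit side, a small reconciliation script along these lines could compare the per-month counts in an extract file against what actually landed in GHR. This is a sketch under stated assumptions: the date column name and the GHR export file are hypothetical, and in practice the loaded counts might come from a GHR list export or query instead.

```python
import csv
from collections import Counter

def monthly_counts(csv_path: str, date_column: str = "CHECK_DATE") -> Counter:
    """Count records per YYYY-MM so extract and load totals can be reconciled."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row[date_column][:7]] += 1  # assumes ISO dates like 2023-01-15
    return counts

extracted = monthly_counts("TIME_2023_01.csv")        # the extract file
loaded = monthly_counts("ghr_loaded_export.csv")      # hypothetical export from GHR

for month in sorted(extracted):
    status = "OK" if extracted[month] == loaded.get(month, 0) else "MISMATCH"
    print(f"{month}: extracted={extracted[month]} "
          f"loaded={loaded.get(month, 0)} {status}")
```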
After running your interface, you'll have a whole set of CSV files ready to import, along with the DB import we spoke about earlier. The ratio is one DB import per file that you bring into GHR. When a DB import completes with no errors on a file, the data lands in the history import area (essentially the Payroll History Import business class). This is where you can actually purge and process records, because all the tabs (errors, not processed, and everything else) live there. If you click the not-processed tab, all the data you loaded will be there; from there you can click Process Records to submit an async job. It takes a while to evaluate all your data before spawning a couple of queues to process it. Keep in mind that if you are processing something like 800,000 records, it will take several hours.
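Because the ratio is one DB import per file and very large loads take hours to process, one practical workflow aid (an option we're suggesting here, not part of the flow described above) is to split an oversized extract into smaller CSVs so each piece gets its own DB import. A minimal sketch, with the chunk size as an assumption to tune:

```python
import csv

def split_csv(path: str, rows_per_file: int = 100_000) -> list[str]:
    """Split a large extract into smaller CSVs, one DB import per output file."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        part, out_paths, writer, out = 0, [], None, None
        for i, row in enumerate(reader):
            if i % rows_per_file == 0:       # start a new chunk, repeating the header
                if out:
                    out.close()
                part += 1
                out_path = path.replace(".csv", f"_part{part}.csv")
                out_paths.append(out_path)
                out = open(out_path, "w", newline="")
                writer = csv.writer(out)
                writer.writerow(header)
            writer.writerow(row)
        if out:
            out.close()
    return out_paths

print(split_csv("TIME_2023_01.csv"))  # hypothetical extract file name
```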
