Now I will do the incremental backup of M3 users and roles.
Refer to part 1 and part 2 for the background.
Design decisions: Pull or Push?
We can either make IPA pull the data from M3, or make M3 push the data to IPA.
We can build a process flow with SQL queries that pulls data from M3 and creates or updates the respective records in IPA, scheduled to run at a certain frequency, e.g. once an hour.
It is the simplest strategy to implement, but it has several drawbacks:
- It is inefficient: every run re-processes thousands of records that have already been processed.
- Changes in M3 are not reflected until the next scheduled run.
- Each iteration adds unnecessary noise to the IPA logs and database, so the fewer iterations the better.
- IPA is already slow and inefficient, so the less work we give it, the faster it runs.
- After an undetermined amount of work, IPA becomes unstable and stops responding, after which we have to restart it, so the less work we give it, the more stable it stays.
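To make the inefficiency concrete, here is a minimal sketch of the pull strategy. Everything here is hypothetical: `fetch_m3_users` stands in for the SQL query against MNS150/CMNUSR, and the dictionary stands in for the IPA identities; the point is that every run re-reads and upserts all records, changed or not.

```python
# Minimal sketch of the scheduled pull strategy (hypothetical helpers).
# Each run re-reads ALL M3 users and upserts them into IPA, which is why
# the strategy wastes work on records that have not changed.

def fetch_m3_users():
    # Stand-in for the SQL query against MNS150/CMNUSR.
    return [
        {"USID": "THIBAUD", "NAME": "Thibaud"},
        {"USID": "ALAIN", "NAME": "Alain"},
    ]

def pull_sync(ipa_identities):
    processed = 0
    for user in fetch_m3_users():
        # Upsert: create the identity if missing, otherwise update it.
        ipa_identities[user["USID"]] = user["NAME"]
        processed += 1
    return processed

ipa = {}
pull_sync(ipa)  # the first run processes every record
pull_sync(ipa)  # so does every later run, even when nothing changed
```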
Alternatively, we can create a series of Event Hub subscriptions and a process flow with Landmark activity nodes. This reflects the changes in IPA virtually immediately, and it operates only on the affected record rather than on all records, which is efficient.
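The push strategy can be sketched as one handler call per event, dispatching on the operation. The event shape below is simplified and made up for illustration; real Event Hub documents carry more fields.

```python
# Sketch of event-driven push: one handler call per Event Hub event,
# dispatching on the operation (C=create, U=update, D=delete).
# The event shape here is simplified and hypothetical.

def handle_user_event(event, ipa_identities):
    usid = event["USID"]
    op = event["operation"]
    if op in ("C", "U"):
        # Create or update only the record named in the event.
        ipa_identities[usid] = event.get("NAME", ipa_identities.get(usid))
    elif op == "D":
        ipa_identities.pop(usid, None)
    return ipa_identities

ipa = {}
handle_user_event({"operation": "C", "USID": "THIBAUD", "NAME": "Thibaud"}, ipa)
handle_user_event({"operation": "D", "USID": "THIBAUD"}, ipa)
```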
I use a mix of both: I used the pull strategy for the initial mass load (see part 2), and below I use the push strategy for the incremental backup.
Here is my process flow that does the incremental backup:
- The top section handles the M3 user (MNS150, CMNUSR) and the respective IPA identity, actor, actor-identity, actor-role, in the gen data area, and the user in the environment data area (e.g. DEV, TST).
- The second section handles the M3 email address (CRS111, CEMAIL, EMTP=4) and the respective IPA actor email address.
- The third section handles the M3 role (MNS405, CMNROL) and the respective IPA task.
- The bottom section handles the M3 user-roles (MNS410, CMNRUS) and the respective IPA user-tasks.
- Each section handles the M3 operations: create, update, and delete.
- I merged everything into one big flow, but we could split it into individual flows too.
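The four sections above boil down to a table-to-entity routing map. A hypothetical sketch of that routing, which may be handy if you split the flow into individual flows:

```python
# Routing map from M3 table to the M3 program and the IPA records each
# section of the flow maintains. The names mirror the sections above;
# the structure itself is illustrative, not an IPA artifact.
SECTION_MAP = {
    "CMNUSR": ("MNS150", ["Identity", "Actor", "ActorIdentity", "ActorRole", "User"]),
    "CEMAIL": ("CRS111", ["ActorEmail"]),  # email addresses, EMTP=4 only
    "CMNROL": ("MNS405", ["Task"]),
    "CMNRUS": ("MNS410", ["UserTask"]),
}

def route(table):
    # Returns the M3 program and IPA records to maintain for an event on
    # the given table, or None if the table is not synchronized.
    return SECTION_MAP.get(table)
```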
- I upload the process with logs disabled to avoid polluting the logs and database.
You can download the process flow source code on my GitHub here.
Event Hub subscriptions
I created the corresponding Event Hub subscriptions in the IPA Channels Administrator:
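For reference, M3 Event Hub subscriptions follow a Publisher:Document:Operations pattern. The set below is my reconstruction of what the screenshot shows, so verify the exact names in your own Channels Administrator:

```
M3:CMNUSR:CUD   users (MNS150)
M3:CEMAIL:CUD   email addresses (CRS111, type 4)
M3:CMNROL:CUD   roles (MNS405)
M3:CMNRUS:CUD   user-roles (MNS410)
```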
Here are the resulting WorkUnits:
Repeat per environment
Deploy the process flow, and set up the Event Hub subscriptions on each environment data area (e.g. DEV, TST).
Refer to the challenges section in part 1 for the limitations, notably the data model dissonance which will cause collisions, the out-of-order execution which will cause inconsistencies, and the constraint mismatch which will cause failures.
Future work
- Prevent deleting the administrative users M3SRVADM and lawson.
- Recursively delete dependencies (e.g. in IPA we cannot delete a user that has pending WorkUnits).
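The two items above can be sketched as a guarded delete. Everything here is hypothetical (the helper names and data shapes are not IPA APIs); it only illustrates the intended checks: refuse protected accounts, and remove dependent records before the user itself.

```python
# Sketch of a guarded user deletion: refuse protected administrative
# accounts, and delete dependent records (e.g. pending WorkUnits) first,
# since IPA will not delete a user that still has pending WorkUnits.
PROTECTED_USERS = {"M3SRVADM", "lawson"}

def delete_user(usid, users, workunits):
    if usid in PROTECTED_USERS:
        raise PermissionError(f"refusing to delete administrative user {usid}")
    # Remove dependencies before the user itself.
    workunits[:] = [wu for wu in workunits if wu["owner"] != usid]
    users.pop(usid, None)
```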
This three-part blog post presented a complete solution for unidirectional synchronization of users between M3 and IPA, by means of an initial mass load via the command line and an incremental backup via a process flow and Event Hub subscriptions. Unfortunately, Infor does not have complete documentation about this, there are serious shortcomings, and IPA is defunct anyway. Also, check out the valuable comments of Alain Tallieu, who takes user synchronization to the next level.
- Part 1, overview of user synchronization between M3 and IPA
- Part 2, initial mass load
- Part 3, incremental backup
- Comments by Alain Tallieu where he shares his experience and valuable tips
- Event Hub for Infor Process Automation (IPA)
Please like, comment, follow, share, and come author with us.
15 thoughts on “User synchronization between M3 and IPA – Part 3”
It's a great idea to merge them all; wish I had an IPDesigner to read it. Remember I had to add some exception scenarios; right now I can only recall deleting pfimetrics before deleting a user in case of UA history, or checking if the user is in several data areas.
Yes, there are plenty of problems like that, and I don’t yet know how to identify them all. I will do a global scale test at my customer soon, and I will be posting my findings here.
Also, you don’t need PFDesigner to read the process flow, you can pause the animated GIF, or you can open the lpd file in a text editor such as Notepad, it’s just XML.
I know, but it's just not the same 😦
Then pause the animated GIF, I took all 28 screenshots.
still not the same, I just miss it 🙂
You miss IPA?? Why? It’s flawed and buggy in so many ways.
Well, I love designing workflows and I have only experienced PFI/IPA so far. For me it was the perfect process / M3 functional / M3 technical balance. There were indeed some bugs, but we always managed to find another way.
By the way, I receive mixed signals about the product life expectancy. I did hear it was dead, but other people would say it's not. There was a new version in May, but I cannot find anything else on the internet. Did you get confirmation from Infor?
If you like designing workflows, check out Infor ION, MuleSoft ESB, Pentaho Kettle, etc.
I don’t remember Infor’s official position on IPA; it’s pretty much condemned since Infor (ION) is the acquirer of Lawson (IPA). They will probably keep maintaining IPA for existing customers, but I heard that for new customers everyone at Infor is already proposing just ION.
There will be a transition period because ION for workflows was very limited last time I checked several years ago; I don’t know now, probably still not as fully featured as IPA. It’s choosing between two bad products so it doesn’t really help.
Can you help me? I got this message while trying to explore your IPA file:
error connecting to landmark:
com.lawson.rdtech.type.viewException:Remote call faild
com.lawson.rdtech.type.CompoundField; Local Class incompatible stream class desc
Hi Ahmed. I don’t know. Try removing the activity nodes one by one from the XML contents and opening the result in PF Designer to see which one is the problem. Or re-create it from scratch from the animated GIF; it’s a lot of copy/paste. –Thibaud
UPDATE: added the future work section.