OptiMap_V4

The big deal

One of the well-known Swiss specialties is milk chocolate. With Christmas just around the corner, I only have a few days left to organize the round trip to deliver my boxes of chocolates. Chocolate does not wait. And I know my friends are eager to taste it… My deliveries are packed, but in which order should I load them in my car? So many destination points, are you kidding me?!

Luckily, OptiMap will help me find the best trip to drop off my boxes on time. A few settings in the TOI module plus the API MYS450MI, and I am ready to go!

Not reinventing the wheel

Starting from Thibaud’s last post, I would like to present an example of integration with M3.

As soon as you have exported your deliveries to OptiMap, it calculates the fastest round trip:

  • each delivery point is tied to a label (the DLIX value), and you can manually modify the order on the OptiMap site with the “edit route” functionality: simply drag and drop the label up or down in the list


  • the START label represents the starting/ending point of the round trip and corresponds to the departure warehouse
  • once you are done, go to the sub-menu “export / raw path with labels” and drag and drop the result into the textbox at the top of the window


API MYS450MI/AddDelivery is used to update the loading and unloading sequences of each DLIX. This way, you can print the loading list from DRS100, for example, in the proper order you have chosen.

 

Functional tips:

MYS450MI uses the TOI files MYOPIH, MYOPID, and MYOPIU.


To make it work, we need a partner defined in MMS865. This partner should be the default value in MWS410/P. The loop is as follows:

  1. starting point: deliveries are not downloaded (message number OQMSGN = blank, download status OQIRST = 10, “ready to be downloaded”)
  2. download the deliveries to MYS450 (use MYS410, or MWS410 + related option 54); OQMSGN gets a new value, and OQIRST = 20
  3. update the status of the message in MYS450 (from 10 to 20) to tell M3 that the external system has downloaded those deliveries (mandatory to avoid an error message later during the API call)
  4. call MYS450MI/AddDelivery to upload new values for the OQSULS and OQMULS fields of each delivery, then execute the message (MSGN in MYS450 should reach status 90 = finished, OQIRST is reset to 10, and OQSULS/OQMULS are updated); a sketch of this step follows below
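
As an illustration of step 4, here is a minimal sketch in Python against the M3 API REST endpoint (m3api-rest). The host, port, credentials, and the exact AddDelivery input parameter names (MSGN, DLIX, SULS, MULS) are assumptions to verify in the API repository (MRS001); the actual tool does this from Smart Office.

import requests

BASE = "https://m3server:21108/m3api-rest/execute"  # hypothetical host and port
AUTH = ("m3user", "password")                       # hypothetical credentials

def add_delivery(msgn, dlix, suls, muls):
    # one call per delivery point; the parameter names are assumed, check MRS001
    params = {"MSGN": msgn, "DLIX": str(dlix), "SULS": str(suls), "MULS": str(muls)}
    r = requests.get(BASE + "/MYS450MI/AddDelivery", params=params, auth=AUTH)
    r.raise_for_status()

# DLIX labels in the order of the OptiMap round trip: OQSULS follows that
# order, and OQMULS is forced to 1 (see the business rules below)
route = [123456, 123457, 123458]  # illustrative DLIX values
for suls, dlix in enumerate(route, start=1):
    add_delivery("MSG0001", dlix, suls, 1)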


Business rules:

  • all deliveries sent to OptiMap have the same departure warehouse
  • the partner E0PA is set in MWS410/P; if you change this value in MWS410/P, you must close the panel and re-open it (because CSYSTR is updated on closing the panel)
  • in our example, all deliveries have the same unloading place, so we force OQMULS = 1; MULS normally depends on the DRS021 settings
  • the first DLIX to ship is the last one loaded on the truck; OQSULS has the same sorting as the OptiMap round trip

 

Technical choices:

Since you can manually drag to re-order stop points on the website, I chose the WPF drop event to catch the desired route. The update in M3 is done after a confirmation dialog box and uses a background worker with a progress bar. The browser is closed afterwards to minimize memory consumption.

As there is no API to retrieve the partner E0PA, an SQL statement retrieves the proper value from CSYSTR.
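
For instance, here is a rough sketch of that lookup over ODBC. The CSYSTR column names below are invented placeholders, so check the actual table layout in your M3 data dictionary before using anything like this.

import pyodbc

conn = pyodbc.connect("DSN=M3DB;UID=user;PWD=secret")  # hypothetical DSN
cur = conn.cursor()
# PROGRAM_COL and PARTNER_COL are placeholder column names for illustration only
cur.execute("SELECT PARTNER_COL FROM MVXJDTA.CSYSTR WHERE PROGRAM_COL = ?", "MWS410")
row = cur.fetchone()
e0pa = row[0] if row else None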

If one of the deliveries in the selected list has no message number OQMSGN or a bad download status (OQIRST < 20), the progress bar displays an error message and stays stuck at 99%. You can enhance the business rule checks at will.

 

Another possibility: you could also explore XMLHttpRequest. Some examples are available in the LSO developer’s guide (e.g. the PFI integration example). A good idea for v5!

Source code

Available on Thibaud’s GitHub.

 

Demo

Example here.

 

Conclusion

Phew! That was an example of M3 integration with the OptiMap round trip solver, the TOI module, and the WPF drag and drop event.

I can’t imagine the complexity Santa Claus will face in a few days…

 

That’s it!

Maxime.

 

 

 

Route optimization for MWS410 with OptiMap (continued)

Today I unbury old scripts from my archives, and I post them here and on my GitHub.

This script illustrates how to integrate the M3 Delivery Toolbox – MWS410/B with OptiMap – Fastest Roundtrip Solver to calculate and show on Google Maps the fastest round trip from the Warehouse to the selected Delivery addresses; it’s an application of the Traveling Salesman Problem (TSP) to M3. This is interesting for a company to reduce overall driving time and cost, and for a driver to optimize their truck load according to the order of delivery.

OptiMap_V3

The Warehouse (WHLO) is now detected from the selected rows, and its address is dynamically retrieved using API MMS005MI.GetWarehouse; no need to specify the Warehouse address as a parameter anymore.

Create a view in M3 Delivery Toolbox MWS410/B that shows the address fields in the columns (e.g. ADR1, ADR2, ADR3), and set the field names in the parameter of the script.
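
As an idea of the flow, here is a hedged standalone sketch in Python (the actual script runs inside Smart Office): it fetches the Warehouse address with MMS005MI.GetWarehouse, appends the selected Delivery addresses, and opens OptiMap. The m3api-rest host, the JSON response shape, the address output field names, and OptiMap’s loc0..locN query parameters are all assumptions to verify.

import urllib.parse, webbrowser
import requests

BASE = "https://m3server:21108/m3api-rest/execute"  # hypothetical host and port

def warehouse_address(whlo):
    # the output field names (ADR1..ADR3) and the JSON shape are assumptions
    r = requests.get(BASE + "/MMS005MI/GetWarehouse", params={"WHLO": whlo},
                     auth=("m3user", "password"), headers={"Accept": "application/json"})
    r.raise_for_status()
    record = r.json()["MIRecord"][0]["NameValue"]
    fields = {nv["Name"]: nv["Value"].strip() for nv in record}
    return ", ".join(fields.get(f, "") for f in ("ADR1", "ADR2", "ADR3"))

# delivery addresses as read from the ADR1/ADR2/ADR3 columns of the MWS410/B view
deliveries = ["Rue du Rhone 10, 1204 Geneva", "Bahnhofstrasse 5, 8001 Zurich"]
stops = [warehouse_address("001")] + deliveries
params = {"loc%d" % i: addr for i, addr in enumerate(stops)}
webbrowser.open("http://gebweb.net/optimap/?" + urllib.parse.urlencode(params))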


OptiMap_V4  [PENDING]

Add the ability to export OptiMap’s route to the M3 Loading (MULS) and Unloading (SULS) sequences using API MYS450MI.AddDelivery, closing the loop of integrating M3 with OptiMap.

This is an idea for version 4. Script to be developed…

Source code

I posted the source code on my GitHub.


Mashup quality control #7

I enhanced my quality control tool for Infor Smart Office Mashups with the following new verifications.

NEW

  1. It verifies the XAML of Document Archive (now Infor Document Management, IDM), i.e. parameters with TargetKey=SearchXQuery.
  2. It verifies the XAML of M3 Web Services (MWS):
    • It ensures the parameters WS.Wsdl and WS.Address use the Smart Office profile value {mashup:ProfileValue Path=M3/WebService/url} (i.e. not a hard-coded URL), so that the Mashup can be migrated across environments (e.g. DEV, TST)
    • It ensures there are no hard-coded WS.User or WS.Password values, but that it uses the current user’s credentials with WS.CredentialSource=Current (a sketch of this check follows the list).
  3. It verifies if there are other hard-coded values, mostly in mashup:Parameter in Mashup events.
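
To give an idea of check #2, here is a minimal standalone sketch in Python (not the actual Smart Office script). It assumes the parameters appear in the XAML as <mashup:Parameter TargetKey="WS.Wsdl" Value="..."/>, so adjust the pattern to your Mashups.

import re, sys

PROFILE = "mashup:ProfileValue"

def check(xaml_path):
    text = open(xaml_path, encoding="utf-8").read()
    # flag WS.Wsdl/WS.Address values that do not use the profile binding
    for m in re.finditer(r'TargetKey="(WS\.(?:Wsdl|Address))"\s+Value="([^"]*)"', text):
        key, value = m.groups()
        if PROFILE not in value:
            print("%s: %s is hard-coded: %s" % (xaml_path, key, value))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        check(path)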

GitHub

I moved the source code to a new GitHub repository at https://github.com/M3OpenSource/MashupQualityControl

Result

As usual, compile and run the script in the Smart Office Script Tool (mforms://jscript).

It will verify all the Mashups of your Smart Office, and it will show the elements in the XAML that have hard-coded values or other problems.

That’s it!

Please visit the source code, try it, click like, leave a comment, subscribe, share, and come write the next post with us.


 

HTTP channels in MEC (part 5 bis)

I will re-visit the HTTPSOut communication channel of Infor M3 Enterprise Collaborator (MEC) as a follow-up of my previous post part 5.

TL;DR: I stopped using the HTTPSOut channel because of its various problems. Now, I use the Send Web Service process.

Previously

In part 5, I had used the sample HTTPSOut channel from the MEC documentation, for MEC to make HTTPS requests, but it turned out to have a variety of problems.

Then, in a previous post I had explored the Send Web Service process for MEC to make SOAP requests, and I had realized it can be used for any content, not just for SOAP. I will explore that here.

Nowadays

Instead of HTTPSOut, use the Send Web Service process. It is easy to use: set the Service Endpoint to the desired URL, put whatever value in the SOAP Action (it will be ignored), set the Content type (e.g. text/plain, application/xml), select the checkbox if you want the response, and set the user/password if you need Basic authentication.
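
In effect, the process then behaves like a plain HTTP(S) POST of your content with Basic authentication, roughly equivalent to this Python sketch (URL, file name, and credentials are illustrative):

import requests

with open("message.xml", "rb") as f:
    body = f.read()
r = requests.post("https://example.com/endpoint", data=body,
                  headers={"Content-Type": "application/xml", "SOAPAction": "ignored"},
                  auth=("user", "password"))
print(r.status_code, r.text)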

The process will use Java’s HttpsURLConnection and its default hostname verifier and certificate validation. If the URL endpoint uses a certificate that is not part of the MEC JRE truststore, you will have to add it with keytool import (see part 5).
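
For reference, the keytool import looks like this (the alias, certificate file, and keystore path are illustrative; changeit is the default JRE truststore password):

keytool -importcert -alias myendpoint -file endpoint.crt -keystore "D:\path\to\MEC\JRE\lib\security\cacerts" -storepass changeit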

Orchestration

In MEC, the processes are piped in order: the output of one process is the input of the next process. In my example, I send a message to my agreement; it goes through an XML Transform with my mapping, the output of that is sent to example.com, and the HTTP response of that is archived.

Maintenance

One problem with the Send Web Service process is that the URL, user, and password are hard-coded in the agreement, and I do not know how to un-hard-code them; I would like to use an external property instead.

We have many agreements, and several environments (e.g. DEV, EDU, TST, PRD), each with its own URL, user, and password. When we migrate the agreements to the other environments, we have to manually update each value in each agreement; it is a maintenance nightmare.

As a workaround, I mass SQL UPDATE the values in MEC’s table PR_Process_Property:

SELECT REPLACE(PropValue, 'https://DEV:8082', 'https://TST:8083') FROM PR_Process_Property WHERE (PropKey='Service Endpoint')
SELECT REPLACE(PropValue, 'john1', 'john2') FROM PR_Process_Property WHERE (PropKey='User')
SELECT REPLACE(PropValue, 'password123', 'secret123') FROM PR_Process_Property WHERE (PropKey='Password')

Note: Use UPDATE instead of SELECT.
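
For example, their UPDATE equivalents would be as follows; preview with the SELECTs above first, and back up the table:

UPDATE PR_Process_Property SET PropValue=REPLACE(PropValue, 'https://DEV:8082', 'https://TST:8083') WHERE (PropKey='Service Endpoint')
UPDATE PR_Process_Property SET PropValue=REPLACE(PropValue, 'john1', 'john2') WHERE (PropKey='User')
UPDATE PR_Process_Property SET PropValue=REPLACE(PropValue, 'password123', 'secret123') WHERE (PropKey='Password')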


Conclusion

That is my new setup for MEC to do HTTPS outbound.


That’s it.

HTTP channels in MEC (parts 3,6 bis)

I will re-visit the HTTPSyncIn and HTTPSyncOut communication channels of Infor M3 Enterprise Collaborator (MEC) as a follow-up of my previous posts part 3 and part 6.

TL;DR: It turns out there is no obligation to use a channel detection; we can use any of the available detections (e.g. XML detection). Also, it turns out we can re-use the HTTPSyncIn in multiple agreements (i.e. a shared port).

Previously

In part 3, I showed how to use the HTTPSyncIn and HTTPSyncOut channels for MEC to receive documents over HTTP (instead of the traditional FTP). And in part 6, I showed how to add nginx as a reverse proxy with TLS termination for MEC to receive the documents securely over HTTPS (encrypted).

As for MEC, an HTTPSyncIn and its port number can only be used in one agreement; otherwise, MEC will throw “Target values already defined”.

As a workaround, I had to set up one HTTPSyncIn per agreement, i.e. one port number per agreement. It is feasible (a computer has plenty of ports available), but in the nginx configuration I had to set up one server block per agreement, which is a maintenance nightmare.

It turns out…

It turns out the HTTPSyncIn persists the message to disk and queues it for detection and processing.

That means I do not have to use a channel detection in the agreement, I can use any of the other detections, such as XML or flat file detection.

Channel

Like previously, set up one HTTPSyncIn (e.g. port 8085); it will apply to the entire environment.

And like previously, set up one HTTPSyncOut; it can be used by any agreement.

Detection

Set up as many XML detections as needed.

Agreement

Here is the key difference compared to previously: select the XML detection, not the channel detection.

In the processes, like previously, use the Send process with the HTTPSyncOut.

Set up as many agreements like that as needed.

nginx

If you use nginx for HTTPS, set up only one server block for all agreements (instead of one per agreement); listen on any port (e.g. 8086) and forward to the HTTPSyncIn port (e.g. 8085):

server {
    listen 8086 ssl;
    ssl_certificate .crt;
    ssl_certificate_key .key;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file .htpasswd;
        proxy_pass http://localhost:8085/;
    }
}

Conclusion

Those were my latest findings on the easiest way to set up HTTPSyncIn in MEC to receive HTTP(S) when having multiple agreements. My previous recommendation to use one channel (one port) per agreement does not apply anymore. Everything goes through the same port, is persisted and queued, and MEC iterates through the detections to sort the messages out and process them. I do not know if there are any drawbacks to this setup; that is for future discovery.

That’s it!

Let me know what you think in the comments below, click Like, click Follow to subscribe, share with your colleagues, and come write the next blog post with us.

User synchronization between M3 and IPA – Part 3

Now I will do the incremental backup of M3 users and roles.

Refer to part 1 and part 2 for the background.

Design decisions: Pull or Push?

We can either make IPA pull the data from M3, or make M3 push the data to IPA.

Pull

We can make a process flow with SQL queries that pulls data from M3 and creates or updates the respective records in IPA, scheduled to run at a certain frequency, e.g. once an hour.

It is the simplest strategy to implement. But it is not efficient because it frequently re-processes thousands of records that have already been processed. And it does not reflect the changes of M3 sooner than the scheduled frequency. And each iteration causes unnecessary noise in the IPA logs and database, so the fewer iterations the better. And IPA is already slow and inefficient for some reason, so the less work in IPA the faster it is. And after an undetermined amount of work, IPA becomes unstable and stops responding, after which we have to restart it, so the less work in IPA the more stable it is.

Push

We can also make a series of Event Hub subscriptions and a process flow with Landmark activity nodes. It will reflect the changes in IPA virtually immediately. And it will operate only on the record being affected, not on all of them, which is efficient.

Mix

I will use a mix: I used the pull strategy for the initial mass load (see part 2), and here below I will use the push strategy for the incremental backup.

Process flow

Here is my process flow that does the incremental backup:

  • The top section handles the M3 user (MNS150, CMNUSR) and the respective IPA identity, actor, actor-identity, actor-role, in the gen data area, and the user in the environment data area (e.g. DEV, TST).
  • The second section handles the M3 email address (CRS111, CEMAIL, EMTP=4) and the respective IPA actor email address.
  • The third section handles the M3 role (MNS405, CMNROL) and the respective IPA task.
  • The bottom section handles the M3 user-roles (MNS410, CMNRUS) and the respective IPA user-tasks.
  • Each section handles the M3 operations: create, update, and delete.
  • I merged everything into one big flow, but we could split it into individual flows too.
  • I upload the process with logs disabled to avoid polluting the logs and database.

Source code

You can download the process flow source code on my GitHub here.

Event Hub subscriptions

I created the corresponding Event Hub subscriptions in the IPA Channels Administrator:

  • M3:CMNUSR:CUD
  • M3:CEMAIL:CUD
  • M3:CMNROL:CUD
  • M3:CMNRUS:CUD


Result

Here are the resulting WorkUnits.

Repeat per environment

Deploy the process flow, and set up the Event Hub subscriptions, on each environment data area (e.g. DEV, TST).

IMPORTANT

Refer to the challenges section in part 1 for the limitations, notably the data model dissonance which will cause collisions, the out-of-order execution which will cause inconsistencies, and the constraint mismatch which will cause failures.

Future work

  • Prevent deleting the administrative users M3SRVADM and lawson.
  • Recursively delete dependencies (e.g. in IPA we cannot delete a user that has pending WorkUnits)

Conclusion

This three-part blog post was a complete solution to do unidirectional synchronization of users between M3 and IPA, by means of an initial mass load via the command line, and an incremental backup via a process flow and Event Hub subscriptions. Unfortunately, Infor does not have complete documentation about this, there are serious shortcomings, and IPA is defunct anyway. Also, check out the valuable comments of Alain Tallieu, who takes user synchronization to the next level.


That’s it!

Please like, comment, follow, share, and come author with us.

User synchronization between M3 and IPA – Part 2

Now I will do the initial mass load of users.

As a reminder, we create the identities and actors in the gen data area, and the users and tasks in the environment data area (e.g. DEV, TST). For more information, refer to part 1.

Design decisions: Command line or process flow?

We could use either the command line or the Landmark activity node in a process flow. I will explore the former now, and the latter next time. Note the command line is not available in Infor CloudSuite.

1. Identities and actors

I will generate a file of identities and actors, reading from M3, and I will use the secadm command to import the file to the gen data area in IPA.

1.1. Documentation

There is some documentation for the secadm command at Infor Landmark Technology Administration Guides > Infor Landmark Technology User Setup and Security > Landmark for Administrators > Using the Administrative Tools > The Security Administration Utility (secadm).

1.2. Extract the data

Extract all the users (MNS150) and email addresses (CRS111) from M3, and save them to a file somewhere (e.g. semi-colon separated users.csv):

SELECT DISTINCT JUUSID, JUTX40, CBEMAL
FROM MVXJDTA.CMNUSR U
LEFT OUTER JOIN MVXJDTA.CEMAIL E
ON U.JUUSID=E.CBEMKY AND E.CBEMTP='04'


Note: If you already know the subset, you can filter the list to only the users that will participate in approval flows, and discard the rest.

PROBLEM: M3 is environment specific (e.g. DEV, TST), but the gen data area is not. And M3 is company (CONO) specific, whereas IPA is not. So we will have collisions and omissions.

1.3. Transform

Transform the list of users to a list of secadm commands, where for each user, we have the commands to create the identity, actor, actor-identity, and actor-role, e.g.:

identity add SSOPV2 USER123 --password null
actor add USER123 --firstname Thibaud --lastname "Lopez Schneider" --ContactInfo.EmailAddress thibaud.lopez.schneider@example.com
actor assign USER123 SSOPV2 USER123
role assign USER123 InbasketUser_ST

Not all attribute keys are documented, but you can find them all here.

For the transformation you can use the following DOS batch file (e.g. users.bat):

@echo off
for /f "tokens=1-3 delims=;" %%a in (users.csv) do (
    echo identity add SSOPV2 %%a --password null
    for /f "usebackq tokens=1,* delims= " %%j in ('%%b') do (
        echo actor add %%a --firstname %%j --lastname "%%k" --ContactInfo.EmailAddress %%c
    )
    echo actor assign %%a SSOPV2 %%a
    echo role assign %%a InbasketUser_ST
)

Note 1: Replace delims with the delimiter of your file (e.g. semi-colon in my case).

Note 2: The command will naively split the name TX40 in two, where the first word is the first name and the rest is the last name; this will be an incorrect split in many cultures.

Save the result to a text file (e.g. users.txt):

users.bat > users.txt

We now have a list of commands ready to be executed.

1.4. secadm

Execute the secadm command to import the file to the gen data area:

cd D:\Infor\LMTST\
enter.cmd
secadm -f users.txt -d gen


1.5. Result

Here is the resulting list of identities, actors, actor-identities, and actor-roles in the gen data area.

1.6. Repeat per environment

Repeat from step 1.2 for the next environment (e.g. DEV, TST). Due to the data model dissonance between M3 and IPA, there will be collisions; see the challenges section in part 1.

1.7. Delete

To delete the records, proceed in reverse with the remove and delete sub-commands and the --complete argument. Be careful not to delete the administrative users M3SRVADM and lawson.

@echo off
for /f "tokens=1-3 delims=;" %%a in (users.csv) do (
    echo role remove %%a InbasketUser_ST
    echo actor remove %%a SSOPV2 %%a
    echo actor delete %%a --complete
    echo identity delete SSOPV2 %%a
)

1.8. Update

I could not find a command to update the actor; that is for future work. Meanwhile, delete and re-add, as in the example below.
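
For example, to change an actor’s attributes, chain the commands shown above (illustrative values):

role remove USER123 InbasketUser_ST
actor remove USER123 SSOPV2 USER123
actor delete USER123 --complete
identity delete SSOPV2 USER123
identity add SSOPV2 USER123 --password null
actor add USER123 --firstname Thibaud --lastname "Lopez Schneider" --ContactInfo.EmailAddress new.address@example.com
actor assign USER123 SSOPV2 USER123
role assign USER123 InbasketUser_ST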

1.9. More

Here is some more help for the secadm command:

D:\Infor\LMTST>enter.cmd

D:\Infor\LMTST>secadm
Usage: Utility for security administration.
Syntax: secadm [secadm-options] command [command-options]
where secadm-options are global secadm options
(specify --secadm-options for a list of secadm options)
where command is a secadm command
(specify --help-commands for a list of commands
where command-options depend on the specific command
(specify -H followed by a command name for command-specific help)
Specify --help to receive this message
FAILED.

D:\Infor\LMTST>secadm --secadm-options
-c Continue on error
-d dataarea
-? Print help meesage
-i Enter interactive shell mode
-H <command> Command-specific help
-f <filename> File to use as for commands
-r Recover Secadm Password
-q Run quietly
--secadm-options For a list of secadm options
-s Run silently
--help-commands For a list of commands
-m Enter interactive menu mode
-p Password for secadm
--help Print this message
-v Print version information
[-p >password>] -u Upgrade AuthenDat
FAILED.

D:\Infor\LMTST>secadm --help-commands

Valid sub-commands are:
accountlockoutpolicy Maintain system account lockout policies.
actor Maintain system actors
httpendpoint Maintain system HTTP endpoints and HTTP endpoint assignments.
identity Maintain system identities.
load Load data from a file.
provision Provision Lawson users
loginscheme Maintain system login schemes.
migrate Migrate supplier identities from default primary SSO service to domain primary SSO service
passwordresetpolicy Maintain system password reset policies.
role Maintain system roles
secanswer Maintain system security answers.
secquestion Maintain system security questions.
service Maintain system services.
ssoconfig Maintain Single Sign On Configuration
ssodomain Maintain system domain.
security Assign security classes to roles and control Security activation
admin Lawson Security Admin Configuration
passwordpolicy Maintain system password policies.
generate Secadm script generation from data
agent Migrate system agents and actors
principalresolver Maintain custom Principal Resolver code.
report Security Data Reports
mitrustsetup Set up trusted connections for an MI socket service.
keys Key Management
SSOCertificate Manage Federated Server Certificates
wsfederation Manage WS Federation Settings
proxy Proxy
class SecurityClass
FAILED.

D:\Infor\LMTST>secadm -H identity
identity Maintain system identities.

Valid sub-commands are:
privileged Maintain privileged identities.
add Add identity to the system.
update Update identity in the system.
delete Delete identity from the system.
display Display identity in the system.
pwdResetByIdentity Password reset by identity in the system.
pwdResetByService Password reset by service in the system.
listIdentities List all identities in the system.
listBadPasswords List identities with bad passwords by service in the system
overrideBadPasswords Override password for identities with bad password by service in the system
DONE.

D:\Infor\LMTST>secadm -H actor
actor Maintain system actors

Valid sub-commands are:
add Add actor to the system.
delete Delete actor from the system. !This option is temporarily unavailable
assign Assign Identity to an actor.
remove Remove Identity from an actor.
accountenable Enable actor account in the system.
accountdisable Disable actor account in the system.
enablerunas Enable Run As for Actor in the system.
disablerunas Disable Run As for Actor in the system.
actorenable Enable actor in the system.
actordisable Disable actor in the system.
context Actor context maintenance
ctxproperty Context property maintenance
list List all actors in the system.
link Actor to Agent link maintenance
DONE.

D:\Infor\LMTST>

2. Users and tasks

Now, for each M3 environment (e.g. DEV, TST), I will generate a file of users and tasks, and I will call the importPFIdata command to import the file to the respective data area.

2.1. Documentation

There is some documentation for the importPFIdata command at Infor Landmark Technology Installation Guides > Infor Lawson System Foundation Using Infor Process Automation Configuration Guide > Post-Installation Procedures > Run migration scripts.

2.2. Extract the data

For each environment (e.g. DEV, TST), extract all the roles (MNS405) and user-roles (MNS410) from M3, and save them to files somewhere (e.g. roles.csv and user-roles.csv):

SELECT KRROLL, KRTX40 FROM MVXJDTA.CMNROL
SELECT KUUSID, KUROLL FROM MVXJDTA.CMNRUS

Note: If you already know the subset, you can filter the list to only the users and roles that will participate in approval flows, and discard the rest.

2.3. Transform

Transform the list of roles and user-roles to the XML syntax of importPFIdata, e.g.:

<?xml version="1.0" encoding="UTF-8"?>
<ImpExpData Version="1">
    <Tables>
        <Table Name="WFTASK">
            <Rows>
                <Row>
                    <Column Name="TASK"><Value>FLEET_MGR</Value></Column>
                    <Column Name="WF-DESCRIPTION"><Value>Fleet manager</Value></Column>
                </Row>
            </Rows>
        </Table>
    </Tables>
</ImpExpData>

Not all table names and columns are documented, but you can find them all here.

For the transformation you can use the following DOS batch file (e.g. user-roles.bat):

@echo off
echo ^<?xml version="1.0" encoding="UTF-8"?^>
echo ^<ImpExpData Version="1"^>
echo ^<Tables^>
echo ^<Table Name="WFUSRPROFL"^>
echo ^<Rows^>
for /f "tokens=1,* delims=;" %%a in (users.csv) do (
echo ^<Row^>^<Column Name="WF-RM-ID"^>^<Value^>%%a^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^<Table Name="WFTASK"^>
echo ^<Rows^>
for /f "tokens=1-2 delims=;" %%a in (roles.csv) do (
echo ^<Row^>^<Column Name="TASK"^>^<Value^>%%a^</Value^>^</Column^>^<Column Name="WF-DESCRIPTION"^>^<Value^>%%b^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^<Table Name="WFUSERTASK"^>
echo ^<Rows^>
for /f "tokens=1-2 delims=;" %%a in (user-roles.csv) do (
echo ^<Row^>^<Column Name="WF-RM-ID"^>^<Value^>%%a^</Value^>^</Column^>^<Column Name="TASK"^>^<Value^>%%b^</Value^>^</Column^>^<Column Name="START-DATE"^>^<Value^>00000000^</Value^>^</Column^>^<Column Name="STOP-DATE"^>^<Value^>00000000^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^</Tables^>
echo ^</ImpExpData^>

Save the result to an XML file (e.g. user-roles.xml):

user-roles.bat > user-roles.xml

We now have an XML file ready to be imported.

 

2.4. importPFIdata

Execute the importPFIdata command to import the file to the specified data area (e.g. lmdevipa):

cd D:\Infor\LMTST\
enter.cmd
env\bin\importPFIdata.bat lmdevipa -f user-roles.xml


2.5. Result

Here is the resulting list of users, tasks, and user-tasks in the specified data area.

2.6. Repeat per environment

Repeat from step 2.2 for the next environment (e.g. DEV, TST).

2.7. Delete

I do not yet know how to delete via the command line; for future work.

2.8. Update

The importPFIdata command will automatically update the record if it already exists.

Source code

I made a unified PowerShell script m3users.ps1 that I put on my GitHub.

Conclusion

That was the initial mass load of users from M3 to IPA using the command-line tools: secadm for identities and actors in the gen data area, and importPFIdata for users and tasks in each environment data area (e.g. DEV, TST).

See also

See part 1 for the overview of user synchronization between M3 and IPA.

And read the comments by Alain Tallieu where he shares his experience and valuable tips.

Future work

Here is some future work:

  • What to do about the lack of environment and CONO in IPA
  • How to update actors
  • How to delete users and tasks
  • Prevent deleting the administrative users M3SRVADM and lawson.
  • Finish the PowerShell script

To be continued…

I will continue in part 3 for the incremental backup.