SFTP in MEC

Infor M3 Enterprise Collaborator (MEC) now includes support for the SSH File Transfer Protocol (SFTP) for secure file transfer.

FTP vs. PGP vs. SFTP vs. FTPS

Plain FTP does not provide security properties such as confidentiality (against eavesdropping) or integrity (against tampering). FTP does provide authentication, but the credentials are sent in plain text. As such, plain FTP is insecure and strongly discouraged.

Even when coupled with PGP file encryption and signature verification to protect the contents of the files, the protocol, the credentials, the files, and the folders remain vulnerable.

On the other hand, SFTP provides secure file transfer over an insecure network. SFTP is part of the SSH specification. This is what I will explore in this post.

There is also FTPS (also known as FTP-SSL and FTP Secure). Maybe I will explore that in another post.

MEC 9.x, 10.x, 11.4.2.0

If you have MEC 11.4.2.0 or earlier, your MEC does not have full built-in support for SFTP or FTPS. I found some traces in MEC 10.4.2.0, 11.4.1, and 11.4.2, and I was told that SFTP support was added as a sort of plugin in some MEC 9.x version; maybe they meant FTPS. Anyway, if you are handy, you can make it work, or you can manually install the SFTP/FTPS software of your choice and connect it to MEC. Do not wait to secure your file transfers.

MEC 11.4.3.0

MEC 11.4.3.0 comes with built-in support for SFTP, and I found traces of FTPS. Unfortunately, it does not yet ship with documentation; I was told a writer is documenting it now. Anyway, we can figure it out ourselves. Let’s try.

Here are the release notes:

In Partner Admin > Manage > Advanced, there are two new SFTP channels, SFTPPollIn and SFTPOut, which are SFTP clients:

By looking deeper at the Java classes, we find JCraft JSch, a pure implementation of SSH2 in Java, and Apache Commons VFS2:

SFTPOut channel

In this post, I will explore the SFTPOut channel, an SFTP client for MEC to exchange files with an existing SFTP server.

Unit tests

Prior to setting up MEC SFTPOut, we have to ensure our MEC host can connect to the SFTP server. In my case, I am connecting to example.com [11.22.33.44], on default port 22, with userid mecuser, and path /outbound. Contact the SFTP server administrator to get the values, and if needed contact the networking team to adjust firewall rules, name servers, etc.

Do the basic networking tests (ICMP ping, DNS resolution, TCP port, etc.):
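For example, a minimal sketch with standard commands, using the example host, IP address, and port from above (adapt these to your environment; Test-NetConnection requires a recent Windows PowerShell):

ping example.com
nslookup example.com
powershell -Command "Test-NetConnection example.com -Port 22"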

Then, do an SSH test (in my example I use the OpenSSH client of Cygwin). As usual with SSH trust-on-first-use (TOFU), verify the host key fingerprint over a side channel (e.g. via secure email, or via a phone call to the administrator of the SFTP server, assuming we already know their voice):
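For example, with OpenSSH and the userid and host from above; on the first connection the client displays the server’s host key fingerprint and asks whether to continue, which is the moment to verify it:

ssh mecuser@example.com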

Then, do an SFTP test:
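Here is a minimal sketch of such a session (test.txt is just an example file; /outbound is the path from above):

sftp mecuser@example.com
sftp> cd /outbound
sftp> put test.txt
sftp> ls -l
sftp> get test.txt
sftp> bye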

Optionally, compile and execute the Sftp.java example of JSch. For that, download Ant, set JAVA_HOME and ANT_HOME in build.bat, set the user@host in Sftp.java, and execute this:
build.bat
javac -cp build examples\Sftp.java
java -cp build;examples Sftp


Those tests confirm our MEC host can successfully connect to the SFTP server, authenticate, and exchange files (in my case I have permission to put and retrieve files, but not to remove files).

Now, we are ready to do the same SFTP in MEC.

Partner Admin

In Partner Admin > Manage > Communication, create a new Send channel with protocol SFTPOut, hostname, port, userid, password, path, filename, extension, and other settings:

I have not yet played with all the file name options.

The option for a private key file is for key-based client authentication (instead of password-based client authentication). For that, generate a public/private RSA key pair, for example with ssh-keygen, send the public key to the SFTP server administrator, and keep the private key for MEC.
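For example, a sketch with ssh-keygen (the file name and comment are examples of my own; -m PEM forces the classic PEM private key format, which older JSch versions tend to require; confirm the expected key type and format with the SFTP server administrator):

ssh-keygen -t rsa -b 4096 -m PEM -C "mec@example.com" -f mec_sftp_rsa

The public key mec_sftp_rsa.pub goes to the SFTP server administrator; the private key file mec_sftp_rsa stays with MEC and is the file to reference in the channel’s private key option.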

The button Send test message will send a file of our choice to that host/path:

The proxy settings are useful for troubleshooting.

A network trace in Wireshark confirms it is SSH:
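For example, a capture with tshark (Wireshark’s command-line tool) on the MEC host, filtering on the SFTP server’s IP address and port from above (the interface number is an assumption; list the interfaces with tshark -D):

tshark -i 1 -f "tcp port 22 and host 11.22.33.44"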

Now, we are ready to use the channel as we would use any other channel in MEC.

Problems

  • The SFTPOut channel does not allow us to verify the key fingerprint it receives from the SFTP server. Depending on your threat model, this is a security vulnerability.
  • There is a lack of documentation (they are working on it)
  • At first, I could not get the Send test message to work because of the unique file name (I am not familiar with the options) and the jzlib JAR file (see below).
  • MEC is missing JAR file jzlib, and I got this Java stacktrace:
    com.jcraft.jsch.JSchException: java.lang.NoClassDefFoundError: com/jcraft/jzlib/ZStream
    at com.jcraft.jsch.Session.initDeflater(Session.java:2219)
    at com.jcraft.jsch.Session.updateKeys(Session.java:1188)
    at com.jcraft.jsch.Session.receive_newkeys(Session.java:1080)
    at com.jcraft.jsch.Session.connect(Session.java:360)
    at com.jcraft.jsch.Session.connect(Session.java:183)
    at com.intentia.ec.communication.SFTPHandler.jsch(SFTPHandler.java:238)
    at com.intentia.ec.communication.SFTPOut.send(SFTPOut.java:66)
    at com.intentia.ec.partneradmin.swt.manage.SFTPOutPanel.widgetSelected(SFTPOutPanel.java:414)

    I was told it should be resolved in the latest Infor CCSS fix. Meanwhile, download the JAR file from JCraft JZlib, copy it to the following folders, and restart the MEC Grid application and Partner Admin:
    MecMapGen\lib\
    MecServer\lib\
    Partner Admin\classes\
  • Passwords are stored in clear text in the database; that is a security vulnerability, yikes! You can verify with SELECT PropValue FROM PR_Basic_Property WHERE PropKey='Password'. I was told it should be fixed in the Infor Cloud branch, and is scheduled to be merged back.
  • With the proxy (Fiddler in my case), I was only able to intercept a CONNECT request, nothing else; I do not know if that is the intention.
  • In one of our customer environments, the SFTPOut panel threw:
    java.lang.ClassNotFoundException: com.intentia.ec.partneradmin.swt.manage.SFTPOutPanel

Future work

When I have time, I would like to:

  • Try the SFTPPollIn channel; it is an SFTP client that polls an existing SFTP server at a certain time interval
  • Try the private key-based authentication
  • Try SFTP through the proxy
  • Try FTPS
  • Keep an eye out for the three fixes (documentation, jzlib JAR file, and password protection)

Conclusion

This was an introduction to MEC’s support for SFTP, and how to set up the SFTPOut channel for MEC to act as an SFTP client and securely exchange files with an existing SFTP server. There is more to explore in future posts.

Please like, comment, subscribe, share, author. Thank you.

UPDATES 2017-01-13

  • Corrected definition of SFTPPollIn (it is not an SFTP server as I had incorrectly said)
  • Added security vulnerability about lack of key fingerprint verification in MEC SFTPOut channel
  • Emphasized the security vulnerability of the passwords in clear text

User synchronization between M3 and IPA – Part 4

I re-discovered a technique to synchronize users between Infor M3 and Infor Process Automation (IPA) using IPA’s web based administration tools.

It is now my preferred technique for the initial mass load of users, whereas previously in part 2, I had been using command line tools.

IPA web admin tools

The IPA web admin tools are located at http://host/UserManagement/home?csk.gen=true for the gen data area, to create the Identities, Actors, Actor-Identity, and Actor-Roles; and at http://host/lmprdlpa/LpaAdmin for the environment data area – PRD in my case – to create the Users, Tasks, and User-Tasks. It is the same as the IPA Rich Client Admin but web-based instead of Java GUI based.

To find the tools, use the shortcuts in ValidationURLs.htm in your IPA installation folder.

Replaying the actions

I manually created users in the IPA web admin tools, intercepted in Fiddler the HTTP requests of each action, replayed the POST requests in Fiddler Composer, and repeatedly pruned the unnecessary data away until I identified the minimum set of values.

I then translated the result into the equivalent JavaScript code.

Result

Below is the result. It assumes you already have an array of users, which you can generate with SQL or the M3 API (from MNS150 and CRS111).

The JavaScript code creates, in this order: the Identities, the Actors, the Actor-Identities, the Actor-Roles, and the Users.

Source code

I posted the source code WebAdmin.js on my GitHub repository. I will update the source code there.

Future work

  • Stitch the JavaScript code together
  • Create the Tasks (from MNS405)
  • Create the User-Tasks (from MNS410)

Conclusion

This is yet another technique to synchronize users between M3 and IPA. I find it easier than the command line tools I was using previously. Follow the progress on the GitHub repo.

Related articles

  • Part 1, overview of user synchronization between M3 and IPA

That’s it! Please like, comment, subscribe, share, and author with us.

Troubleshooting M3 Web Services

I forgot how to troubleshoot Infor M3 Web Services (MWS) and wasted time researching again. This time I share what I rediscovered so I remember next time.

Scenario

I created a simple web service of type M3 Display Program (MDP) that creates a Facility in CRS008. The steps work correctly in M3:

MWS Designer error

I created the equivalent web service in MWS Designer (MWSD), but it throws ‘Unable to execute MPD!’:

MWS Designer log

The MWSD log does not provide more information:

We can increase the MWSD log level from WARN to DEBUG/TRACE; it then shows some HTTP/SOAP information (remember to revert when done as it fills the disk space):

Fiddler proxy

We can use an HTTP proxy such as Fiddler to see the HTTP/SOAP request/response:

SoapUI

We can use another SOAP client such as SoapUI to test the web service:

Generated Java code

MWS Server generates Java code for each web service. I do not see any error with the generated code. The code uses Apache Axis2/CXF for SOAP, and the old Intentia Movex MPD for the interactive session:

MWS Server log

The MWS Server log provides some more information; it mentions m3.program, which leads me to believe my web service is correct and the problem is in M3:

MWS Server DEBUG/TRACE

I increased the Grid’s log levels of Subsystem and MWS to DEBUG and TRACE (remember to revert when done as it fills the disk space):

I ran the test case again, and opened the merged log:

This time the log shows more information. It shows the <MPD> XML, and it shows the error Abnormal termination of mvx.app.pgm […] Dump log created:

M3 DUMPLOG

The M3 dumplog shows even more information; it shows there was a NullPointerException on READ_DSP ICFCMD:

I do not know how to analyze an M3 dumplog, so I will forward it to an M3 developer and ask for help. The problem points not to MWS but to the M3 program. That is as far as I can go in my troubleshooting.

Future work

Furthermore, depending on the case, it may also be useful to:

  • Intercept network traffic to the BCI port, and troubleshoot the old Intentia Movex BCI protocol; MPD uses M3Session which uses BCIConnection:
  • Use the Advanced Grid tools:
  • Analyze the Thread dumps:
  • Analyze the MWS Profiler:
  • Analyze the Grid Status Report:
  • Decrypt the encrypted Grid network traffic
  • Debug the MWS Server Java code line by line at runtime

Conclusion

That was how to troubleshoot M3 Web Services and dig up more information about an error. I dug as far as I could, all the way to M3, and asked an M3 developer for help. Furthermore, there are plenty of other ways to troubleshoot and dig beyond.

That’s it!

Please like, comment, subscribe, share, and come author the next post.


Route optimization for MWS410 with OptiMap (continued)

Today I unbury old scripts from my archives, and I post them here and on my GitHub.

This script illustrates how to integrate the M3 Delivery Toolbox – MWS410/B with OptiMap – Fastest Roundtrip Solver to calculate and show on Google Maps the fastest roundtrip from the Warehouse to the selected Delivery addresses; it’s an application of the Traveling Salesman Problem (TSP) to M3. This is interesting for a company to reduce overall driving time and cost, and for a driver to optimize their truck load according to the order of delivery.

OptiMap_V3

The Warehouse (WHLO) is now detected from the selected rows, and its address is dynamically retrieved using API MMS005MI.GetWarehouse; no need to specify the Warehouse address as a parameter anymore.

Create a view in M3 Delivery Toolbox MWS410/B that shows the address fields in the columns (e.g. ADR1, ADR2, ADR3), and set the field names in the parameter of the script.


OptiMap_V4  [PENDING]

Add ability to Export OptiMap’s route to M3 Loading (MULS) and Unloading sequence (SULS) using API MYS450MI.AddDelivery, closing the loop of integrating M3 to OptiMap.

This is an idea for version 4. Script to be developed…

Source code

I posted the source code on my GitHub.


Mashup quality control #7

I enhanced my quality control tool for Infor Smart Office Mashups with the following new verifications.

NEW

  1. It verifies the XAML of Document Archive (now Infor Document Management, IDM), identified by TargetKey=SearchXQuery.
  2. It verifies the XAML of M3 Web Services (MWS):
    • It ensures the parameters WS.Wsdl and WS.Address use the Smart Office profile value {mashup:ProfileValue Path=M3/WebService/url} (i.e. not a hard-coded URL), so that the Mashup can be migrated across environments (e.g. DEV, TST)
    • It ensures there are no hard-coded WS.User or WS.Password, but that it uses the current user’s credentials with WS.CredentialSource=Current.
  3. It verifies if there are other hard-coded values, mostly in mashup:Parameter in Mashup events.

GitHub

I moved the source code to a new GitHub repository at https://github.com/M3OpenSource/MashupQualityControl

Result

As usual, compile and run the script in the Smart Office Script Tool (mforms://jscript):

It will verify all the Mashups of your Smart Office, and it will show the elements in the XAML that have hard-coded values or other problems.

That’s it!

Please visit the source code, try it, click like, leave a comment, subscribe, share, and come write the next post with us.


HTTP channels in MEC (part 5 bis)

I will re-visit the HTTPSOut communication channel of Infor M3 Enterprise Collaborator (MEC) as a follow-up of my previous post part 5.

TL;DR: I stopped using the HTTPSOut channel because of its various problems. Now, I use the Send Web Service process.

Previously

In part 5, I had used the sample HTTPSOut channel from the MEC documentation, for MEC to make HTTPS requests, but it turned out to have a variety of problems.

Then, in a previous post I had explored the Send Web Service process for MEC to make SOAP requests, and I had realized it can be used for any content, not just for SOAP. I will explore that here.

Nowadays

Instead of HTTPSOut, use the Send Web Service process. It is easy to use: set the Service Endpoint to the desired URL, put whatever value in the SOAP Action (it will be ignored), set the Content type (e.g. text/plain, application/xml), select the checkbox if you want the response, and set the user/password if you need Basic authentication:

The process will use Java’s HttpsURLConnection and its default hostname verifier and certificate validation. If the URL endpoint uses a certificate that is not part of the MEC JRE truststore, you will have to add it with keytool import (see part 5).
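As a sketch of that import (see part 5 for the details), something like the following; the alias, the certificate file name, and the truststore path are examples of my own, and changeit is only the default truststore password:

keytool -importcert -trustcacerts -alias example.com -file example.com.crt -keystore "<path to the MEC JRE>\lib\security\cacerts" -storepass changeit

You may need to restart the MEC server afterwards for the JRE to pick up the new certificate.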

Orchestration

In MEC, the processes are piped in order: the output of one process is the input of the next process. In my example, I send a message to my agreement; it goes through an XML Transform with my mapping, the output of that is sent to example.com, and the HTTP response of that is archived.

Maintenance

One problem with the Send Web Service is that the URL, user, and password are hard-coded in the agreement, and I do not know how to un-hard-code them; I would like to use an external property instead.

We have many agreements, and several environments (e.g. DEV, EDU, TST, PRD), each with its URL, user, and password. When we migrate the agreements to the other environments, we have to manually update each value in each agreement, which is a maintenance nightmare.

As a workaround, I mass SQL UPDATE the values in MEC’s table PR_Process_Property:

SELECT REPLACE(PropValue, 'https://DEV:8082', 'https://TST:8083') FROM PR_Process_Property WHERE (PropKey='Service Endpoint')
SELECT REPLACE(PropValue, 'john1', 'john2') FROM PR_Process_Property WHERE (PropKey='User')
SELECT REPLACE(PropValue, 'password123', 'secret123') FROM PR_Process_Property WHERE (PropKey='Password')

Note: the SELECT statements above only preview the result of the REPLACE; to actually apply the changes, run the corresponding UPDATE statements (UPDATE … SET PropValue = REPLACE(…) WHERE …).


Conclusion

That is my new setup for MEC to do HTTPS outbound.


That’s it.

HTTP channels in MEC (parts 3,6 bis)

I will re-visit the HTTPSyncIn and HTTPSyncOut communication channels of Infor M3 Enterprise Collaborator (MEC) as a follow-up of my previous posts part 3 and part 6.

TL;DR: It turns out there is no obligation to use a channel detection; we can use any of the available detections (e.g. XML detection). Also, it turns out we can re-use the HTTPSyncIn in multiple agreements (i.e. a shared port).

Previously

In part 3, I showed how to use the HTTPSyncIn and HTTPSyncOut channels for MEC to receive documents over HTTP (instead of the traditional FTP). And in part 6, I showed how to add nginx as a reverse proxy with TLS termination for MEC to receive the documents securely over HTTPS (encrypted).

As for MEC, with channel detection an HTTPSyncIn and its port number can only be used in one agreement; otherwise MEC will throw “Target values already defined”:

As a workaround, I had to set up one HTTPSyncIn per agreement, i.e. one port number per agreement. It is feasible (a computer has plenty of ports available), but in the configuration of nginx I had to set up one server block per agreement, which is a maintenance nightmare.

It turns out…

It turns out the HTTPSyncIn persists the message to disk and queues it for detection and processing:

That means I do not have to use a channel detection in the agreement; I can use any of the other detections, such as XML or flat file detection.

Channel

Like previously, set up one HTTPSyncIn (e.g. port 8085); it will apply to the entire environment:

And like previously, set up one HTTPSyncOut; it can be used by any agreement:

Detection

Set up as many XML detections as needed, e.g.:

Agreement

Here is the key difference compared to previously: select XML Detection, not channel detection:

In the processes, like previously, use the Send process with the HTTPSyncOut:

Set up as many agreements like that as needed.

nginx

If you use nginx for HTTPS, set up only one server block for all agreements (instead of one per agreement); listen on any port (e.g. 8086) and forward to the HTTPSyncIn port (e.g. 8085):

server {
    listen 8086 ssl;
    ssl_certificate .crt;
    ssl_certificate_key .key;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file .htpasswd;
        proxy_pass http://localhost:8085/;
    }
}
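To verify the end-to-end path (nginx TLS termination, Basic authentication, and the HTTPSyncIn behind it), you could post a test document with curl; the host, credentials, and file name are examples of my own, and -k skips certificate validation in case of a self-signed certificate:

curl -k -u partner:secret -H "Content-Type: application/xml" --data-binary @test.xml https://mec-host:8086/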

Conclusion

Those were my latest findings on the easiest way to set up HTTPSyncIn in MEC to receive HTTP(S) when having multiple agreements. My previous recommendation to use one channel (one port) per agreement does not apply anymore. Everything will go through the same port, will be persisted and queued, and MEC will iterate through the detections to sort it out and process it. I do not know if there are any drawbacks to this setup; for future discovery.

That’s it!

Let me know what you think in the comments below, click Like, click Follow to subscribe, share with your colleagues, and come write the next blog post with us.

User synchronization between M3 and IPA – Part 3

Now I will do the incremental backup of M3 users and roles.

Refer to part 1 and part 2 for the background.

Design decisions: Pull or Push?

We can either make IPA pull the data from M3, or make M3 push the data to IPA.

Pull

We can make a process flow with SQL queries that pull data from M3 and create or update the respective records in IPA, scheduled to run at a certain frequency, e.g. once an hour.

It is the simplest strategy to implement. But it is not efficient, because it frequently re-processes thousands of records that have already been processed. And it does not reflect the changes of M3 sooner than the scheduled frequency. And each iteration causes unnecessary noise in the IPA logs and database, so the fewer iterations the better. And IPA is already slow and inefficient for some reason, so the less work in IPA the faster it is. And after an undetermined amount of work, IPA becomes unstable and stops responding, after which we have to restart it, so the less work in IPA the more stable it is.

Push

We can also make a series of Event Hub subscriptions and a process flow with Landmark activity nodes. It will reflect the changes in IPA virtually immediately (faster than the pull strategy), and it will operate only on the record being affected, not all of them, which is efficient.

Mix

I will use a mix: I used the pull strategy for the initial mass load (see part 2), and here below I will use the push strategy for the incremental backup.

Process flow

Here is my process flow that does the incremental backup:

  • The top section handles the M3 user (MNS150, CMNUSR) and the respective IPA identity, actor, actor-identity, actor-role, in the gen data area, and the user in the environment data area (e.g. DEV, TST).
  • The second section handles the M3 email address (CRS111, CEMAIL, EMTP=4) and the respective IPA actor email address.
  • The third section handles the M3 role (MNS405, CMNROL) and the respective IPA task.
  • The bottom section handles the M3 user-roles (MNS410, CMNRUS) and the respective IPA user-tasks.
  • Each section handles the M3 operations: create, update, and delete.
  • I merged everything in one big flow, but we could split it in individual flows too.
  • I upload the process with logs disabled to avoid polluting the logs and database.

Source code

You can download the process flow source code on my GitHub here.

Event Hub subscriptions

I created the corresponding Event Hub subscriptions in the IPA Channels Administrator:

  • M3:CMNUSR:CUD
  • M3:CEMAIL:CUD
  • M3:CMNROL:CUD
  • M3:CMNRUS:CUD


Result

Here are the resulting WorkUnits:

Repeat per environment

Deploy the process flow, and set up the Event Hub subscriptions on each environment data area (e.g. DEV, TST).

IMPORTANT

Refer to the challenges section in part 1 for the limitations, notably the data model dissonance which will cause collisions, the out-of-order execution which will cause inconsistencies, and the constraint mismatch which will cause failures.

Future work

  • Prevent deleting the administrative users M3SRVADM and lawson.
  • Recursively delete dependencies (e.g. in IPA we cannot delete a user that has pending WorkUnits)

Conclusion

This three-part blog post presented a complete solution for unidirectional synchronization of users between M3 and IPA, by means of an initial mass load via the command line, and an incremental backup via a process flow and Event Hub subscriptions. Unfortunately, Infor does not have complete documentation about this, there are serious shortcomings, and IPA is defunct anyway. Also, check out the valuable comments of Alain Tallieu, who takes user synchronization to the next level.


That’s it!

Please like, comment, follow, share, and come author with us.

User synchronization between M3 and IPA – Part 2

Now I will do the initial mass load of users.

As a reminder, we create the identities and actors in the gen data area, and the users and tasks in the environment data area (e.g. DEV, TST). For more information, refer to part 1.

Design decisions: Command line or process flow?

We could use either the command line or the Landmark activity node in a process flow. I will explore the former now, and the latter next time. Note the command line is not available in Infor CloudSuite.

1. Identities and actors

I will generate a file of identities and actors, reading from M3, and I will use the secadm command to import the file to the gen data area in IPA.

1.1. Documentation

There is some documentation for the secadm command at Infor Landmark Technology Administration Guides > Infor Landmark Technology User Setup and Security > Landmark for Administrators > Using the Administrative Tools > The Security Administration Utility (secadm):

1.2. Extract the data

Extract all the users (MNS150) and email addresses (CRS111) from M3, and save them to a file somewhere (e.g. semi-colon separated users.csv):

SELECT DISTINCT JUUSID, JUTX40, CBEMAL
FROM MVXJDTA.CMNUSR U
LEFT OUTER JOIN MVXJDTA.CEMAIL E
ON U.JUUSID=E.CBEMKY AND E.CBEMTP='04'


Note: If you already know the subset, you can filter the list to only the users that will participate in approval flows, and discard the rest.

PROBLEM: M3 is environment specific (e.g. DEV, TST), but the gen data area is not. And M3 is company (CONO) specific, whereas IPA is not. So we will have collisions and omissions.

1.3. Transform

Transform the list of users to a list of secadm commands, where for each user, we have the commands to create the identity, actor, actor-identity, and actor-role, e.g.:

identity add SSOPV2 USER123 --password null
actor add USER123 --firstname Thibaud --lastname "Lopez Schneider" --ContactInfo.EmailAddress thibaud.lopez.schneider@example.com
actor assign USER123 SSOPV2 USER123
role assign USER123 InbasketUser_ST

Not all attribute keys are documented, but you can find them all here:

For the transformation you can use the following DOS batch file (e.g. users.bat):

@echo off
for /f "tokens=1-3 delims=;" %%a in (users.csv) do (
    echo identity add SSOPV2 %%a --password null
    for /f "usebackq tokens=1,* delims= " %%j in ('%%b') do (
        echo actor add %%a --firstname %%j --lastname "%%k" --ContactInfo.EmailAddress %%c
    )
    echo actor assign %%a SSOPV2 %%a
    echo role assign %%a InbasketUser_ST
)

Note 1: Replace delims with the delimiter of your file (e.g. semi-colon in my case).

Note 2: The command will naively split the name TX40 in two, where the first word is the first name and the rest is the last name; this will be an incorrect split in many cultures.

Save the result to a text file (e.g. users.txt):

users.bat > users.txt

We now have a list of commands ready to be executed:

1.4. secadm

Execute the secadm command to import the file to the gen data area:

cd D:\Infor\LMTST\
enter.cmd
secadm -f users.txt -d gen


1.5. Result

Here is the resulting list of identities, actors, actor-identity, and actor-role, in the gen data area:

1.6. Repeat per environment

Repeat from step 1.2 for the next environment (e.g. DEV, TST). Due to the data model dissonance between M3 and IPA, there will be collisions; see the challenges section in part 1.

1.7. Delete

To delete the records, proceed in reverse with the remove and delete sub-commands and the --complete argument. Be careful to not delete the administrative users M3SRVADM and lawson.

@echo off
for /f "tokens=1-3 delims=;" %%a in (users.csv) do (
    echo role remove %%a InbasketUser_ST
    echo actor remove %%a SSOPV2 %%a
    echo actor delete %%a --complete
    echo identity delete SSOPV2 %%a
)

1.8. Update

I could not find a command to update the Actor; for future work; meanwhile, delete and re-add.

1.9. More

Here is some more help for the secadm command:

D:\Infor\LMTST>enter.cmd

D:\Infor\LMTST>secadm
Usage: Utility for security administration.
Syntax: secadm [secadm-options] command [command-options]
where secadm-options are global secadm options
(specify --secadm-options for a list of secadm options)
where command is a secadm command
(specify --help-commands for a list of commands
where command-options depend on the specific command
(specify -H followed by a command name for command-specific help)
Specify --help to receive this message
FAILED.

D:\Infor\LMTST>secadm --secadm-options
-c Continue on error
-d dataarea
-? Print help meesage
-i Enter interactive shell mode
-H <command> Command-specific help
-f <filename> File to use as for commands
-r Recover Secadm Password
-q Run quietly
--secadm-options For a list of secadm options
-s Run silently
--help-commands For a list of commands
-m Enter interactive menu mode
-p Password for secadm
--help Print this message
-v Print version information
[-p >password>] -u Upgrade AuthenDat
FAILED.

D:\Infor\LMTST>secadm --help-commands

Valid sub-commands are:
accountlockoutpolicy Maintain system account lockout policies.
actor Maintain system actors
httpendpoint Maintain system HTTP endpoints and HTTP endpoint assignments.
identity Maintain system identities.
load Load data from a file.
provision Provision Lawson users
loginscheme Maintain system login schemes.
migrate Migrate supplier identities from default primary SSO service to domain primary SSO service
passwordresetpolicy Maintain system password reset policies.
role Maintain system roles
secanswer Maintain system security answers.
secquestion Maintain system security questions.
service Maintain system services.
ssoconfig Maintain Single Sign On Configuration
ssodomain Maintain system domain.
security Assign security classes to roles and control Security activation
admin Lawson Security Admin Configuration
passwordpolicy Maintain system password policies.
generate Secadm script generation from data
agent Migrate system agents and actors
principalresolver Maintain custom Principal Resolver code.
report Security Data Reports
mitrustsetup Set up trusted connections for an MI socket service.
keys Key Management
SSOCertificate Manage Federated Server Certificates
wsfederation Manage WS Federation Settings
proxy Proxy
class SecurityClass
FAILED.

D:\Infor\LMTST>secadm -H identity
identity Maintain system identities.

Valid sub-commands are:
privileged Maintain privileged identities.
add Add identity to the system.
update Update identity in the system.
delete Delete identity from the system.
display Display identity in the system.
pwdResetByIdentity Password reset by identity in the system.
pwdResetByService Password reset by service in the system.
listIdentities List all identities in the system.
listBadPasswords List identities with bad passwords by service in the system
overrideBadPasswords Override password for identities with bad password by service in the system
DONE.

D:\Infor\LMTST>secadm -H actor
actor Maintain system actors

Valid sub-commands are:
add Add actor to the system.
delete Delete actor from the system. !This option is temporarily unavailable
assign Assign Identity to an actor.
remove Remove Identity from an actor.
accountenable Enable actor account in the system.
accountdisable Disable actor account in the system.
enablerunas Enable Run As for Actor in the system.
disablerunas Disable Run As for Actor in the system.
actorenable Enable actor in the system.
actordisable Disable actor in the system.
context Actor context maintenance
ctxproperty Context property maintenance
list List all actors in the system.
link Actor to Agent link maintenance
DONE.

D:\Infor\LMTST>

2. Users and tasks

Now, for each M3 environment (e.g. DEV, TST), I will generate a file of users and tasks, and I will call the importPFIdata command to import the file to the respective data area.

2.1. Documentation

There is some documentation for the importPFIdata command at Infor Landmark Technology Installation Guides > Infor Lawson System Foundation Using Infor Process Automation Configuration Guide > Post-Installation Procedures > Run migration scripts:

2.2. Extract the data

For each environment (e.g. DEV, TST), extract all the roles (MNS405) and user-roles (MNS410) from M3, and save them to files somewhere (e.g. roles.csv and user-roles.csv):

SELECT KRROLL, KRTX40 FROM MVXJDTA.CMNROL
SELECT KUUSID, KUROLL FROM MVXJDTA.CMNRUS

Note: If you already know the subset, you can filter the list to only the users and roles that will participate in approval flows, and discard the rest.

2.3. Transform

Transform the list of roles and user-roles to the XML syntax of importPFIdata, e.g.:

<?xml version="1.0" encoding="UTF-8"?>
<ImpExpData Version="1">
    <Tables>
        <Table Name="WFTASK">
            <Rows>
                <Row>
                    <Column Name="TASK"><Value>FLEET_MGR</Value></Column>
                    <Column Name="WF-DESCRIPTION"><Value>Fleet manager</Value></Column>
                </Row>
            </Rows>
        </Table>
    </Tables>
</ImpExpData>

Not all table names and columns are documented, but you can find them all here:

For the transformation you can use the following DOS batch file (e.g. user-roles.bat):

@echo off
echo ^<?xml version="1.0" encoding="UTF-8"?^>
echo ^<ImpExpData Version="1"^>
echo ^<Tables^>
echo ^<Table Name="WFUSRPROFL"^>
echo ^<Rows^>
for /f "tokens=1,* delims=;" %%a in (users.csv) do (
echo ^<Row^>^<Column Name="WF-RM-ID"^>^<Value^>%%a^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^<Table Name="WFTASK"^>
echo ^<Rows^>
for /f "tokens=1-2 delims=;" %%a in (roles.csv) do (
echo ^<Row^>^<Column Name="TASK"^>^<Value^>%%a^</Value^>^</Column^>^<Column Name="WF-DESCRIPTION"^>^<Value^>%%b^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^<Table Name="WFUSERTASK"^>
echo ^<Rows^>
for /f "tokens=1-2 delims=;" %%a in (user-roles.csv) do (
echo ^<Row^>^<Column Name="WF-RM-ID"^>^<Value^>%%a^</Value^>^</Column^>^<Column Name="TASK"^>^<Value^>%%b^</Value^>^</Column^>^<Column Name="START-DATE"^>^<Value^>00000000^</Value^>^</Column^>^<Column Name="STOP-DATE"^>^<Value^>00000000^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^</Tables^>
echo ^</ImpExpData^>

Save the result to an XML file (e.g. user-roles.xml):

user-roles.bat > user-roles.xml

We now have an XML file ready to be imported:

2.4. importPFIdata

Execute the importPFIdata command to import the file to the specified data area (e.g. lmdevipa):

cd D:\Infor\LMTST\
enter.cmd
env\bin\importPFIdata.bat lmdevipa -f user-roles.xml


2.5. Result

Here is the resulting list of users, tasks, and user-tasks, in the specified data area:

2.6. Repeat per environment

Repeat from step 2.2 for the next environment (e.g. DEV, TST).

2.7. Delete

I do not yet know how to delete via the command line; for future work.

2.8. Update

The importPFIdata command will automatically update the record if it already exists.

Source code

I made a unified PowerShell script m3users.ps1 that I put on my GitHub.

Conclusion

That was the initial mass load of users from M3 to IPA using the command-line tools: secadm for the identities and actors in the gen data area, and importPFIdata for the users and tasks in each environment data area (e.g. DEV, TST).

See also

See part 1 for the overview of user synchronization between M3 and IPA.

And read the comments by Alain Tallieu where he shares his experience and valuable tips.

Future work

Here is some future work:

  • What to do about the lack of environment and CONO in IPA
  • How to update actors
  • How to delete users and tasks
  • Prevent deleting the administrative users M3SRVADM and lawson.
  • Finish the PowerShell script

To be continued…

I will continue in part 3 for the incremental backup.

User synchronization between M3 and IPA – Part 1

I return to Infor Process Automation (IPA) 😱 to develop approval flows for purchase orders in Infor M3 and to setup the users and roles that will take action in the Infor Smart Office (ISO) Inbasket.

Note: The IPA product is dead and Infor is replacing it with ION, so my endeavor is obsolete; furthermore, the integration of IPA and ION with M3 is flawed by design at many levels, ergo working on it is flawed too; but my customer still uses IPA for M3, so I return.

UserAction activity node

The UserAction activity node defines who can take what actions:

Inbasket

The Inbasket displays the work to the respective users so they can take the action (e.g. approve/reject this purchase order, for branch managers):

Email actions

Email addresses are needed to allow actions via email (e.g. Accept/Reject in this sales approval):

PROBLEM 1

If the user is not setup in IPA, they will get the error “Inbasket Error, Unable to populate Inbox for the LPA server”:

To avoid the error, we have to set up the user in IPA, even if the user does not need the Inbasket.

There are some workarounds: 1) set up two Smart Office Installation Points, with or without the Infor.PF10 feature, 2) set up two Smart Office profiles, with or without the PF configuration enabled, 3) set up the Mango.Core application settings to enable roaming profiles with or without the Process Server inbasket, 4) maybe use the Category File Administration for profile.xml with or without the Inbasket and Connect Roles and Users, etc. Each workaround has advantages and disadvantages. Nonetheless, we still have to set up at least those users that need the Inbasket.

PROBLEM 2

No users are set up in IPA upon installation, IPA is blank, and M3 and IPA do not have a built-in synchronization mechanism. Hence, for every M3 user that needs the Inbasket, we have to set up the user in IPA. If we have thousands of users, it is too much work to do manually. I need an automatic solution. Note: as a consolation, at least we do not need to manage passwords in IPA, as IPA has LDAP binding.

Design

I will implement a solution to automatically synchronize users between M3 and IPA.

My design decisions are:

  • Simple mirror, one-way synchronization, from M3 (primary) to IPA (replica), and ignore changes in IPA
  • One-time initial mass load + incremental backup
  • Per environment (e.g. DEV, TST)
  • Synchronize all users to avoid problem #1 above; otherwise, filter to synchronize only the users that will use the Inbasket, not the others, but then we must implement one of the workarounds above

Challenges

  • IPA does not have the concept of CONO, whereas M3 does; thus we will have to choose one of the CONOs and discard the rest, i.e. possible data loss
  • IPA stores the email addresses in gen for all data areas, whereas M3 stores the email addresses per environment (e.g. DEV, TST); thus we will have to choose one of the email addresses and discard the others, i.e. possible data loss
  • IPA stores the user’s name in two fields firstname and lastname, whereas M3 stores it in a single field TX40; thus we will have to split the field in two, and make an assumption of where to split for names with more than two words, i.e. possible incorrect split
  • IPA is slower than M3, and there are no concurrency guarantees; thus there could be race conditions where two consecutive changes in M3 are executed in the wrong order by IPA, i.e. possible data inconsistency
  • There is a constraint mismatch between M3 and IPA; e.g. let’s take the case of M3 users and roles (that’s users and tasks in IPA), and let’s suppose a role has one or more users connected to it (one-to-many relationship): in M3 we can delete the role, and M3 will automatically and recursively delete the user-roles associations; whereas if we try to delete the corresponding task in IPA, it will throw an error that associations exist, so we would have to recursively delete the associations ourselves. I just tested that; I have not tested the rest: identities, actors, etc.
  • M3 is encoded in UTF-8, whereas IPA is encoded in ISO-8859-1, i.e. possible data loss

Documentation

There is some documentation about the setup, but it is not everything we need:

Setting up users for Infor Process Automation and Infor Smart Office:

Mass-loading actors for Infor Process Automation: Overview:

User management in M3

Here are the programs, tables, and fields for M3:

Users (MNS150, CMNUSR), CONO, DIVI, USID, TX40:

Roles (MNS405, CMNROL), ROLL, TX40:

User-Roles (MNS410, CMNRUS), USID, ROLL:

Email address (CRS111, CEMAIL), CONO, EMTP, EMKY, EMAL:

Everything is stored in the M3 database in the respective tables:

MetaData Publisher (MDP) has information about the tables, columns, and keys:

User management in IPA

Here are the programs involved for IPA:

In the general data area:


Identity (with Service SSOPV2):

Actor (FirstName, LastName, and Email address):

Actor-Identity (one-to-one):

Actor-Roles (at least InbasketUser_ST):

In each data area (e.g. DEV, TST):

Users (one-to-one with Actor):

Tasks (equivalent of M3 Roles):

User-Tasks:

The data is stored in the IPA database in the respective tables (gen: IDENTITY, ACTOR, IDENTITYACTOR, ACTORROLE; DEV/TST: PFIUSERPROFILE, PFITASK, PFIUSERTASK):

To be continued…

I will continue in part 2.