User synchronization between M3 and IPA – Part 3

Now I will do the incremental backup of M3 users and roles.

Refer to part 1 and part 2 for the background.

Design decisions: Pull or Push?

We can either make IPA pull the data from M3, or make M3 push the data to IPA.

Pull

We can make a process flow with SQL queries that pulls data from M3 and creates or updates the respective records in IPA, scheduled to run at a certain frequency, e.g. once an hour.

It is the simplest strategy to implement, but it is not efficient: every run re-processes thousands of records that have already been processed; it does not reflect the changes of M3 sooner than the scheduled frequency; each iteration causes unnecessary noise in the IPA logs and database, so the fewer iterations the better; IPA is already slow and inefficient for some reason, so the less work in IPA the faster it is; and after an undetermined amount of work, IPA becomes unstable and stops responding, after which we have to restart it, so the less work in IPA the more stable it is.

Push

We can instead make a series of Event Hub subscriptions and a process flow with Landmark activity nodes. It will reflect the changes in IPA virtually immediately, and it will operate only on the record being affected, not on all of them, which is efficient.

Mix

I will use a mix: I used the pull strategy for the initial mass load (see part 2), and here below I will use the push strategy for the incremental backup.

Process flow

Here is my process flow that does the incremental backup:
gif

  • The top section handles the M3 user (MNS150, CMNUSR) and the respective IPA identity, actor, actor-identity, actor-role, in the gen data area, and the user in the environment data area (e.g. DEV, TST).
  • The second section handles the M3 email address (CRS111, CEMAIL, EMTP=4) and the respective IPA actor email address.
  • The third section handles the M3 role (MNS405, CMNROL) and the respective IPA task.
  • The bottom section handles the M3 user-roles (MNS410, CMNRUS) and the respective IPA user-tasks.
  • Each section handles the M3 operations: create, update, and delete.
  • I merged everything in one big flow, but we could split it in individual flows too.
  • I upload the process with logs disabled to avoid polluting the logs and database.

Source code

You can download the process flow source code on my GitHub here.

Event Hub subscriptions

I created the corresponding Event Hub subscriptions in the IPA Channels Administrator (the C, U, and D suffixes subscribe to the create, update, and delete operations of each table):

  • M3:CMNUSR:CUD
  • M3:CEMAIL:CUD
  • M3:CMNROL:CUD
  • M3:CMNRUS:CUD

channel

Result

Here are the resulting WorkUnits:
result

Repeat per environment

Deploy the process flow and set up the Event Hub subscriptions on each environment data area (e.g. DEV, TST).

IMPORTANT

Refer to the challenges section in part 1 for the limitations, notably the data model dissonance which will cause collisions, the out-of-order execution which will cause inconsistencies, and the constraint mismatch which will cause failures.

Future work

  • Prevent deleting the administrative users M3SRVADM and lawson.
  • Recursively delete dependencies (e.g. in IPA we cannot delete a user that has pending WorkUnits)

Conclusion

This three-part blog post presented a complete solution for unidirectional synchronization of users between M3 and IPA, by means of an initial mass load via the command line, and an incremental backup via a process flow and Event Hub subscriptions. Unfortunately, Infor does not have complete documentation about this, there are serious shortcomings, and IPA is defunct anyway. Also, check out the valuable comments of Alain Tallieu, who takes user synchronization to the next level.

That’s it!

Please like, comment, follow, share, and come author with us.

User synchronization between M3 and IPA – Part 2

Now I will do the initial mass load of users.

As a reminder, we create the identities and actors in the gen data area, and the users and tasks in the environment data area (e.g. DEV, TST). For more information, refer to part 1.

Design decisions: Command line or process flow?

We could use either the command line or the Landmark activity node in a process flow. I will explore the former now, and the latter next time. Note the command line is not available in Infor CloudSuite.

1. Identities and actors

I will generate a file of identities and actors, reading from M3, and I will use the secadm command to import the file to the gen data area in IPA.

1.1. Documentation

There is some documentation for the secadm command at Infor Landmark Technology Administration Guides > Infor Landmark Technology User Setup and Security > Landmark for Administrators > Using the Administrative Tools > The Security Administration Utility (secadm):
11

1.2. Extract the data

Extract all the users (MNS150) and email addresses (CRS111) from M3, and save them to a file somewhere (e.g. semi-colon separated users.csv):

SELECT DISTINCT JUUSID, JUTX40, CBEMAL
FROM MVXJDTA.CMNUSR U
LEFT OUTER JOIN MVXJDTA.CEMAIL E
ON U.JUUSID=E.CBEMKY AND E.CBEMTP='04'

3

Note: If you already know the subset, you can filter the list to only the users that will participate in approval flows, and discard the rest.
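If you prefer to script the extraction instead of exporting manually, here is a minimal JDBC sketch that writes the semi-colon separated users.csv. It assumes a SQL Server database and a hypothetical connection string; adjust the JDBC URL and driver to your installation (the class name ExportUsers is only illustrative):

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExportUsers {
    public static void main(String[] args) throws Exception {
        // Hypothetical JDBC URL: point it to your M3 database server, database, and credentials
        String url = "jdbc:sqlserver://m3dbhost;databaseName=M3FDBDEV;user=readonly;password=secret";
        String sql = "SELECT DISTINCT JUUSID, JUTX40, CBEMAL"
                + " FROM MVXJDTA.CMNUSR U"
                + " LEFT OUTER JOIN MVXJDTA.CEMAIL E"
                + " ON U.JUUSID=E.CBEMKY AND E.CBEMTP='04'";
        try (Connection con = DriverManager.getConnection(url);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(sql);
             PrintWriter out = new PrintWriter("users.csv", "UTF-8")) {
            while (rs.next()) {
                // One user per line: USID;TX40;EMAL (the email is empty when there is no CEMAIL record)
                String email = rs.getString(3);
                out.println(rs.getString(1) + ";" + rs.getString(2) + ";" + (email == null ? "" : email));
            }
        }
    }
}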

PROBLEM: M3 is environment specific (e.g. DEV, TST), but the gen data area is not. And M3 is company (CONO) specific, whereas IPA is not. So we will have collisions and omissions.

1.3. Transform

Transform the list of users to a list of secadm commands, where for each user, we have the commands to create the identity, actor, actor-identity, and actor-role, e.g.:

identity add SSOPV2 USER123 --password null
actor add USER123 --firstname Thibaud --lastname "Lopez Schneider" --ContactInfo.EmailAddress thibaud.lopez.schneider@example.com
actor assign USER123 SSOPV2 USER123
role assign USER123 InbasketUser_ST

Not all attribute keys are documented, but you can find them all here:
1 2

For the transformation you can use the following DOS batch file (e.g. users.bat):

@echo off
for /f "tokens=1-3 delims=;" %%a in (users.csv) do (
    echo identity add SSOPV2 %%a --password null
    for /f "usebackq tokens=1,* delims= " %%j in ('%%b') do (
        echo actor add %%a --firstname %%j --lastname "%%k" --ContactInfo.EmailAddress %%c
    )
    echo actor assign %%a SSOPV2 %%a
    echo role assign %%a InbasketUser_ST
)

Note 1: Replace delims with the delimiter of your file (e.g. semi-colon in my case).

Note 2: The command will naively split the name TX40 in two, where the first word is the first name and the rest is the last name; this will be an incorrect split in many cultures.
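For illustration, here is the same naive split as a small Java sketch (it makes the same first-word/rest assumption as the batch file above):

// Naive split of the M3 name field TX40: first word = first name, remainder = last name
String tx40 = "Thibaud Lopez Schneider";
String[] parts = tx40.trim().split("\\s+", 2);
String firstname = parts[0];                        // "Thibaud"
String lastname = parts.length > 1 ? parts[1] : ""; // "Lopez Schneider"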

Save the result to a text file (e.g. users.txt):

users.bat > users.txt

We now have a list of commands ready to be executed:
4

1.4. secadm

Execute the secadm command to import the file to the gen data area:

cd D:\Infor\LMTST\
enter.cmd
secadm -f users.txt -d gen

5

1.5. Result

Here is the resulting list of identities, actors, actor-identity, and actor-role, in the gen data area:
6 7 8 9 10

1.6. Repeat per environment

Repeat from step 1.2 for the next environment (e.g. DEV, TST). Due to the data model dissonance between M3 and IPA, there will be collisions; see the challenges section in part 1.

1.7. Delete

To delete the records, proceed in reverse with the remove and delete sub-commands and the --complete argument. Be careful not to delete the administrative users M3SRVADM and lawson.

@echo off
for /f "tokens=1-3 delims=;" %%a in (users.csv) do (
    echo role remove %%a InbasketUser_ST
    echo actor remove %%a SSOPV2 %%a
    echo actor delete %%a --complete
    echo identity delete SSOPV2 %%a
)

1.8. Update

I could not find a command to update the Actor; that is for future work; meanwhile, delete and re-add it, as sketched below.
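For example, here is an untested sketch of that workaround, combining the commands from sections 1.3 and 1.7 (note that it also drops and re-creates the identity assignment and the role assignment):

role remove USER123 InbasketUser_ST
actor remove USER123 SSOPV2 USER123
actor delete USER123 --complete
actor add USER123 --firstname Thibaud --lastname "Lopez Schneider" --ContactInfo.EmailAddress thibaud.lopez.schneider@example.com
actor assign USER123 SSOPV2 USER123
role assign USER123 InbasketUser_ST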

1.9. More

Here is some more help for the secadm command:

D:\Infor\LMTST>enter.cmd

D:\Infor\LMTST>secadm
Usage: Utility for security administration.
Syntax: secadm [secadm-options] command [command-options]
where secadm-options are global secadm options
(specify --secadm-options for a list of secadm options)
where command is a secadm command
(specify --help-commands for a list of commands
where command-options depend on the specific command
(specify -H followed by a command name for command-specific help)
Specify --help to receive this message
FAILED.

D:\Infor\LMTST>secadm --secadm-options
-c Continue on error
-d dataarea
-? Print help meesage
-i Enter interactive shell mode
-H <command> Command-specific help
-f <filename> File to use as for commands
-r Recover Secadm Password
-q Run quietly
--secadm-options For a list of secadm options
-s Run silently
--help-commands For a list of commands
-m Enter interactive menu mode
-p Password for secadm
--help Print this message
-v Print version information
[-p >password>] -u Upgrade AuthenDat
FAILED.

D:\Infor\LMTST>secadm --help-commands

Valid sub-commands are:
accountlockoutpolicy Maintain system account lockout policies.
actor Maintain system actors
httpendpoint Maintain system HTTP endpoints and HTTP endpoint assignments.
identity Maintain system identities.
load Load data from a file.
provision Provision Lawson users
loginscheme Maintain system login schemes.
migrate Migrate supplier identities from default primary SSO service to domain primary SSO service
passwordresetpolicy Maintain system password reset policies.
role Maintain system roles
secanswer Maintain system security answers.
secquestion Maintain system security questions.
service Maintain system services.
ssoconfig Maintain Single Sign On Configuration
ssodomain Maintain system domain.
security Assign security classes to roles and control Security activation
admin Lawson Security Admin Configuration
passwordpolicy Maintain system password policies.
generate Secadm script generation from data
agent Migrate system agents and actors
principalresolver Maintain custom Principal Resolver code.
report Security Data Reports
mitrustsetup Set up trusted connections for an MI socket service.
keys Key Management
SSOCertificate Manage Federated Server Certificates
wsfederation Manage WS Federation Settings
proxy Proxy
class SecurityClass
FAILED.

D:\Infor\LMTST>secadm -H identity
identity Maintain system identities.

Valid sub-commands are:
privileged Maintain privileged identities.
add Add identity to the system.
update Update identity in the system.
delete Delete identity from the system.
display Display identity in the system.
pwdResetByIdentity Password reset by identity in the system.
pwdResetByService Password reset by service in the system.
listIdentities List all identities in the system.
listBadPasswords List identities with bad passwords by service in the system
overrideBadPasswords Override password for identities with bad password by service in the system
DONE.

D:\Infor\LMTST>secadm -H actor
actor Maintain system actors

Valid sub-commands are:
add Add actor to the system.
delete Delete actor from the system. !This option is temporarily unavailable
assign Assign Identity to an actor.
remove Remove Identity from an actor.
accountenable Enable actor account in the system.
accountdisable Disable actor account in the system.
enablerunas Enable Run As for Actor in the system.
disablerunas Disable Run As for Actor in the system.
actorenable Enable actor in the system.
actordisable Disable actor in the system.
context Actor context maintenance
ctxproperty Context property maintenance
list List all actors in the system.
link Actor to Agent link maintenance
DONE.

D:\Infor\LMTST>

2. Users and tasks

Now, for each M3 environment (e.g. DEV, TST), I will generate a file of users and tasks, and I will call the importPFIdata command to import the file to the respective data area.

2.1. Documentation

There is some documentation for the importPFIdata command at Infor Landmark Technology Installation Guides > Infor Lawson System Foundation Using Infor Process Automation Configuration Guide > Post-Installation Procedures > Run migration scripts:
12

2.2. Extract the data

For each environment (e.g. DEV, TST), extract all the roles (MNS405) and user-roles (MNS410) from M3, and save them to files somewhere (e.g. roles.csv and user-roles.csv):

SELECT KRROLL, KRTX40 FROM MVXJDTA.CMNROL
SELECT KUUSID, KUROLL FROM MVXJDTA.CMNRUS

Note: If you already know the subset, you can filter the list to only the users and roles that will participate in approval flows, and discard the rest.

2.3. Transform

Transform the list of roles and user-roles to the XML syntax of importPFIdata, e.g.:

<?xml version="1.0" encoding="UTF-8"?>
<ImpExpData Version="1">
    <Tables>
        <Table Name="WFTASK">
            <Rows>
                <Row>
                    <Column Name="TASK"><Value>FLEET_MGR</Value></Column>
                    <Column Name="WF-DESCRIPTION"><Value>Fleet manager</Value></Column>
                </Row>
            </Rows>
        </Table>
    </Tables>
</ImpExpData>
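The user profiles (WFUSRPROFL) and user-tasks (WFUSERTASK) follow the same pattern; for example, here is a user-task row as generated by the batch file further below:

<Table Name="WFUSERTASK">
    <Rows>
        <Row>
            <Column Name="WF-RM-ID"><Value>USER123</Value></Column>
            <Column Name="TASK"><Value>FLEET_MGR</Value></Column>
            <Column Name="START-DATE"><Value>00000000</Value></Column>
            <Column Name="STOP-DATE"><Value>00000000</Value></Column>
        </Row>
    </Rows>
</Table>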

Not all table names and columns are documented, but you can find them all here:
13 14

For the transformation you can use the following DOS batch file (e.g. user-roles.bat):

@echo off
echo ^<?xml version="1.0" encoding="UTF-8"?^>
echo ^<ImpExpData Version="1"^>
echo ^<Tables^>
echo ^<Table Name="WFUSRPROFL"^>
echo ^<Rows^>
for /f "tokens=1,* delims=;" %%a in (users.csv) do (
echo ^<Row^>^<Column Name="WF-RM-ID"^>^<Value^>%%a^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^<Table Name="WFTASK"^>
echo ^<Rows^>
for /f "tokens=1-2 delims=;" %%a in (roles.csv) do (
echo ^<Row^>^<Column Name="TASK"^>^<Value^>%%a^</Value^>^</Column^>^<Column Name="WF-DESCRIPTION"^>^<Value^>%%b^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^<Table Name="WFUSERTASK"^>
echo ^<Rows^>
for /f "tokens=1-2 delims=;" %%a in (user-roles.csv) do (
echo ^<Row^>^<Column Name="WF-RM-ID"^>^<Value^>%%a^</Value^>^</Column^>^<Column Name="TASK"^>^<Value^>%%b^</Value^>^</Column^>^<Column Name="START-DATE"^>^<Value^>00000000^</Value^>^</Column^>^<Column Name="STOP-DATE"^>^<Value^>00000000^</Value^>^</Column^>^</Row^>
)
echo ^</Rows^>
echo ^</Table^>
echo ^</Tables^>
echo ^</ImpExpData^>

Save the result to an XML file (e.g. user-roles.xml):

user-roles.bat > user-roles.xml

We now have an XML file ready to be imported:
15
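Note: the echo-based batch file does not escape XML special characters, so if a role description (TX40) ever contains &, <, or >, the generated file will not be well-formed XML. Here is a minimal Java sketch of the same transformation with escaping, covering only the roles file (the class name RolesToXml is illustrative; extend it for the users and user-roles files in the same way):

import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class RolesToXml {

    // Minimal escaping of the characters that break XML element content
    static String esc(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) throws Exception {
        List<String> roles = Files.readAllLines(Paths.get("roles.csv"), StandardCharsets.UTF_8);
        try (PrintWriter out = new PrintWriter("roles.xml", "UTF-8")) {
            out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
            out.println("<ImpExpData Version=\"1\"><Tables><Table Name=\"WFTASK\"><Rows>");
            for (String line : roles) {
                String[] f = line.split(";", 2); // ROLL;TX40
                out.println("<Row><Column Name=\"TASK\"><Value>" + esc(f[0])
                        + "</Value></Column><Column Name=\"WF-DESCRIPTION\"><Value>"
                        + esc(f.length > 1 ? f[1] : "") + "</Value></Column></Row>");
            }
            out.println("</Rows></Table></Tables></ImpExpData>");
        }
    }
}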

 

2.4. importPFIdata

Execute the importPFIdata command to import the file to the specified data area (e.g. lmdevipa):

cd D:\Infor\LMTST\
enter.cmd
env\bin\importPFIdata.bat lmdevipa -f user-roles.xml

16

2.5. Result

Here is the resulting list of users, tasks, and user-tasks, in the specified data area:
21 18 20 17 19

2.6. Repeat per environment

Repeat from step 2.2 for the next environment (e.g. DEV, TST).

2.7. Delete

I do not yet know how to delete via the command line; for future work.

2.8. Update

The importPFIdata command will automatically update the record if it already exists.

Source code

I made a unified PowerShell script m3users.ps1 that I put on my GitHub.

Conclusion

That was the initial mass load of users from M3 to IPA using the command-line tools secadm, for identities and actors in the gen data area, and importPFIdata, for users and tasks in each environment data area (e.g. DEV, TST).

See also

See part 1 for the overview of user synchronization between M3 and IPA.

And read the comments by Alain Tallieu where he shares his experience and valuable tips.

Future work

Here is some future work:

  • What to do about the lack of environment and CONO in IPA
  • How to update actors
  • How to delete users and tasks
  • Prevent deleting the administrative users M3SRVADM and lawson.
  • Finish the PowerShell script

To be continued…

I will continue in part 3 for the incremental backup.

User synchronization between M3 and IPA – Part 1

I return to Infor Process Automation (IPA) 😱 to develop approval flows for purchase orders in Infor M3 and to setup the users and roles that will take action in the Infor Smart Office (ISO) Inbasket.

Note: The IPA product is dead and Infor is replacing it with ION, so my endeavor is obsolete; furthermore, the integration of IPA and ION with M3 is flawed by design at many levels, ergo working on it is flawed too; but my customer still uses IPA for M3, so I return.

UserAction activity node

The UserAction activity node defines who can take what actions:
1_

Inbasket

The Inbasket displays the work to the respective users so they can take the action (e.g. approve/reject this purchase order, here shown to branch managers):
2_

Email actions

Email addresses are needed to allow actions via email (e.g. Accept/Reject in this sales approval):
email

PROBLEM 1

If the user is not setup in IPA, they will get the error “Inbasket Error, Unable to populate Inbox for the LPA server”:
3

To avoid the error, we have to set up the user in IPA, even if the user does not need the Inbasket.

There are some workarounds:

  1. Set up two Smart Office Installation Points, with or without the Infor.PF10 feature.
  2. Set up two Smart Office profiles, with or without the PF configuration enabled.
  3. Set up the Mango.Core application settings to enable roaming profiles, with or without the Process Server inbasket.
  4. Maybe use the Category File Administration for profile.xml, with or without the Inbasket and Connect Roles and Users, etc.

Each workaround has advantages and disadvantages. Nonetheless, we still have to set up at least those users that need the Inbasket.

PROBLEM 2

No users are set up in IPA upon installation, IPA starts blank, and M3 and IPA do not have a built-in synchronization mechanism. Hence, for every M3 user that needs the Inbasket, we have to set up the user in IPA. If we have thousands of users, it is too much work to do manually; I need an automatic solution. Note: as a consolation, at least we do not need to manage passwords in IPA, as IPA has LDAP binding.

Design

I will implement a solution to automatically synchronize users between M3 and IPA.

My design decisions are:

  • Simple mirror, one-way synchronization, from M3 (primary) to IPA (replica), and ignore changes in IPA
  • One-time initial mass load + incremental backup
  • Per environment (e.g. DEV, TST)
  • Synchronize all users to avoid problem #1 above; otherwise filter to synchronize only the users that will use the Inbasket, not the others; but then must implement one of the workarounds above

Challenges

  • IPA does not have the concept of CONO, whereas M3 does; thus we will have to choose one of the CONOs and discard the rest, i.e. possible data loss
  • IPA stores the email addresses in gen for all data areas, whereas M3 stores the email addresses per environment (e.g. DEV, TST); thus we will have to choose one of the email addresses and discard the others, i.e. possible data loss
  • IPA stores the user’s name in two fields firstname and lastname, whereas M3 stores it in a single field TX40; thus we will have to split the field in two, and make an assumption of where to split for names with more than two words, i.e. possible incorrect split
  • IPA is slower than M3, and there are no concurrency guarantees; thus there could be race conditions where two consecutive changes in M3 are executed in the wrong order by IPA, i.e. possible data inconsistency
  • There is a constraint mismatch between M3 and IPA; e.g. let’s take the case of M3 users and roles (that’s users and tasks in IPA), and let’s suppose a role has one or more users connected to it (one-to-many relationship): in M3 we can delete the role, and M3 will automatically and recursively delete the user-roles associations; whereas if we try to delete the corresponding task in IPA, it will throw an error that associations exist, so we would have to recursively delete the associations ourselves. I just tested that; I have not tested the rest: identities, actors, etc.
  • M3 is encoded in UTF-8, whereas IPA is encoded in ISO-8859-1, i.e. possible data loss

Documentation

There is some documentation about the setup, but it is not everything we need:

Setting up users for Infor Process Automation and Infor Smart Office:
doc1

Mass-loading actors for Infor Process Automation: Overview:
doc2

User management in M3

Here are the programs, tables, and fields for M3:

Users (MNS150, CMNUSR), CONO, DIVI, USID, TX40:
5

Roles (MNS405, CMNROL), ROLL, TX40:
7

User-Roles (MNS410, CMNRUS), USID, ROLL:
8

Email address (CRS111, CEMAIL), CONO, EMTP, EMKY, EMAL:
6

Everything is stored in the M3 database in the respective tables:
sql1

MetaData Publisher (MDP) has information about the tables, columns, and keys:
mdp

User management in IPA

Here are the programs involved for IPA:

In the general data area:

gen

Identity (with Service SSOPV2):
9

Actor (FirstName, LastName, and Email address):
1011

Actor-Identity (one-to-one):
12

Actor-Roles (at least InbasketUser_ST):
13

In each data area (e.g. DEV, TST):
dataarea

Users (one-to-one with Actor):
14

Tasks (equivalent of M3 Roles):
15

User-Tasks:
16

The data is stored in the IPA database in the respective tables (gen: IDENTITY, ACTOR, IDENTITYACTOR, ACTORROLE; DEV/TST: PFIUSERPROFILE, PFITASK, PFIUSERTASK):
17

To be continued…

I will continue in part 2.

X.509 Token policy for M3 Web Services

I finally tested the X.509 Token policy in Infor M3 Web Services (MWS), and I share my results here.

X.509 Token Policy in MWS has been available for at least 7 years. For the setup, we create a public-private key and digital certificate for the SOAP client, and the SOAP client and SOAP server exchange certificates to authenticate each other.

Documentation

For an overview of WS-Security (WSS) in the context of M3, see my previous post.

The MWS Designer (MWSD) User Guide has two modest chapters dedicated to WS-Security and X.509 Token policy, and snippets of source code for a Java client:
doc55 doc77

For more information about the implementation of WS-Security in MWS, read the documentation of Apache CXF and Apache WSS4J (Merlin), and explore the MWS server source code in the lws-server and lws-common JARs:
jar

Enable X.509 Token policy

First, create a web service in MWSD, of any type (API, MDP, SQL), deploy it, and test it to ensure it works correctly, e.g. CRS610MI.GetBasicData:
1

Then, go to Infor ION Grid Management Pages > MWSSecurity > Policy Settings > Service Context (e.g. services), select the web service (e.g. CRS610MI), and click the lock icon to Enable X.509 policy token:
2

Server certificate

Then, go to Certificate Management, and download the server certificate, MWSServerCert.cer:
3

Note: Download it over HTTPS (secure) and not HTTP (clear text); otherwise verify in a side channel the key fingerprint received.

Now, import the server certificate into the client keystore so that the client can authenticate the server:

$ keytool -importcert -file MWSServerCert.cer -alias MwsServer -keystore keystoreClient.jks
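Optionally, verify the import by listing the keystore entry:

$ keytool -list -v -keystore keystoreClient.jks -alias MwsServer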

PROBLEM: Weak crypto

MWS uses weak cryptography [1], MD5 hashing algorithm and RSA key size 1024 bit:

Nowadays, it should use SHA256 and RSA key size 2048 bit. Maybe it is possible to upgrade the keys on the server; for future work:
8

Client keys and certificate

Then, generate a public-private key and digital certificate for the SOAP client; use any tool such as JRE’s keytool or OpenSSL:

$ keytool -genkeypair -keystore keystoreClient.jks -alias trustedclient_host -keyalg RSA

$ keytool -exportcert -keystore keystoreClient.jks -alias trustedclient_host -file trustedclient_host.cer

PROBLEM 1: Do not use the weak crypto recommended by the MWSD User Guide in the parameters keyalg and sigalg. Instead, let keytool use the default values, which today for RSA are a 2048-bit key size and SHA256withRSA signing [2].
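Alternatively, you can state the key size and signature algorithm explicitly rather than rely on the defaults (example values only):

$ keytool -genkeypair -keystore keystoreClient.jks -alias trustedclient_host -keyalg RSA -keysize 2048 -sigalg SHA256withRSA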

PROBLEM 2: Use RSA because with DSA the MWS server throws “java.security.InvalidKeyException: Unsupported key type: Sun DSA Public Key”

PROBLEM 3: The MWSD User Guide uses -alias myalias. But when we upload the certificate to the server, the server changes the alias to “trustedClient_” + hostName regardless. So I use that alias too.

Note: Keep the private key private! If you need to send it somewhere, do so only over a secure channel (regular email is not a secure channel). On the other hand, the public key and certificate are public, so you can shout them in the street no problem.

Here is the client certificate with stronger crypto:
7_

Now, upload the client certificate to the MWS server, so that the server can authenticate the client (peer authentication):

Note: Upload over HTTPS (secure) and not HTTP (clear text); otherwise verify in a side channel the key fingerprint the server received.

Test with SoapUI

Now, test using any SOAP client that supports WS-Security, such as SoapUI.

Create a new SOAP project as usual:
10_

Go to Project View > WS-Security Configurations > Keystores, and add keystoreClient.jks and Password:

Go to Outgoing WS-Security Configurations, and add a configuration, e.g. outgoing.

Add an action Timestamp:

PROBLEM 1: The Timestamp must be non-zero, e.g. 10; if I set it to zero the server throws “Security processing failed (actions mismatch) […] An error was discovered processing the <wsse:Security> header” and I do not know why.

Add a Signature action (it is for the client to sign the message with its private key, and for the server to use the client's public key and verify the integrity of the message received). Select the client Keystore, select the Alias of the client, set the keystore Password, in Parts add Name Body with the Namespace of <soapenv:Body> which is http://schemas.xmlsoap.org/soap/envelope/ and Encode Content, and leave the rest default:
12

Add an Encryption action (it is for the client to encrypt the message with the server’s public key, and for the server to decrypt it with its private key). Select the client Keystore, select the Alias of the server, set the keystore Password, in Parts add the same as above, and leave the rest default:
13

PROBLEM 2: SoapUI is buggy and does not always seem to immediately pick my changes in the configurations, so I had to close and re-open it to pick my changes.

Now for decryption, go to Incoming WS-Security Configurations, add a configuration, e.g. incoming (the server will encrypt the SOAP response with the client’s public key, so this is for the client to decrypt that using its private key; and the server will sign the SOAP response with its private key, and the client will verify the signature of the message received using the server’s public key). Select the Decrypt Keystore, the Signature Keystore, and set the keystore Password:
14

PROBLEM 3: There is a bug with decryption in SoapUI 5.2.1, and I solved it by replacing the version of wss4j.jar as explained in this post:
bug2

Now, create the sample SOAP request (SoapUI already created a sample request), remove the <soapenv:Header> which we do not need, set your input parameters (e.g. CustomerNumber), add Basic authentication with the M3 Username and Password, select to Authenticate pre-emptively (optional), select the Outgoing WSS and Incoming WSS, and click Submit:
15

The client will encrypt and sign the Body, the server will decrypt it and verify the signature, the server will execute the web service (e.g. CRS610MI.GetBasicData), it will encrypt and sign the response, and return it to the client.

Result

We now have the decrypted and verified SOAP response (e.g. CustomerName, CustomerAddress):
16

PROBLEM: Plain HTTP and authentication

My understanding of WS-Security is that, by design, it allows transporting the message over plain HTTP. That scenario occurs when the message passes the TLS termination point and travels through proxies and gateways over plain HTTP. For that, we could securely set the M3 user/password in the SOAP header at <cred:lws> and add them to the Encryption and Signature actions. However, I tried it, and I removed the user/password from the HTTP Basic authentication, but MWS throws “401 Unauthorized […] WWW-Authenticate: Basic […] fault […] missing_credentials”:
bug

I found some old documentation from 2009 that sheds more light; maybe I have to use the Username Token instead; for future work:
doc126 doc127

Grid best practice

As a general best practice for the Grid, ensure the Configuration Manager > Routers > Default Router has WWW Authentication Methods disabled for plain HTTP and enabled for HTTPS only, to prevent sending the user/password over plain HTTP:

Troubleshooting

Here are some tips for troubleshooting.

Use SoapUI’s six tabs of logs:
logs5

Set the MWS logs to DEBUG level:
logs
logs3

Set the MWS Debug Settings to create dump files of all the encrypted and signed SOAP requests (_IN.xml) and responses (_OUT.xml) in the MWS\dumps folder:
logs4
logs2

Set your SOAP client to use a proxy like Fiddler:
Fiddler_ Fiddler

Conclusion

That was my result of testing X.509 Token policy for M3 Web Services with SoapUI. It requires quite a good understanding of the public-key cryptography concepts (public-private keys, certificates, keystores, the dance between client and server, encryption, digital signatures), and it opened more questions than it answered.

Future work

I may work on the following in the future:

  • Implement a similar test client in Java
  • Upgrade the MWS server to stronger crypto
  • Call the web service over plain HTTP (instead of HTTPS)
  • Authenticate over plain HTTP (maybe Username Token, instead of Basic authentication or <cred:lws>)
  • Test MWS against WS-Attacker

That’s it.

Please subscribe, comment, like, share with your colleagues, write the next idea with us, or start your own blog.

SOAP WS-Security in the context of M3

Here is my high-level understanding of SOAP Web Services Security (WS-Security, or WSS), at least the WSS X.509 Certificate Token Profile, and how to apply it in the context of Infor M3.

WS-Security

WS-Security is a standard by the OASIS consortium to provide message encryption and digital signature, the usual security properties to prevent eavesdropping on and tampering with a message. It uses asymmetric cryptography with public-private keys and digital certificates. There is an additional property which is central to WSS: the security happens at the message level, not at the transport level, i.e. the security follows the message even across proxies and deep packet inspection gateways, for end-to-end security. WSS is common for example in financial institutions that need to inspect and route a message through several nodes that can read the non-secure part of the SOAP envelope yet not reveal the secret in the message, until it reaches the appropriate destination. If a node on the path gets compromised, the security of the message is not compromised. Despite its continued use, WSS has had only a few major updates in 10 years, it is not considered secure [1] [2], the Internet agrees it is complicated and design-by-committee, and there is no industry momentum behind it.
versus1_

SSL/TLS, on the other hand, provides similar security properties with encryption and digital signature, using public key cryptography as well, but the security happens at the transport level, i.e. before the message level, for point-to-point security only. Thus, intermediate proxies and deep packet inspection gateways are unable to reveal the message to inspect it and route it, unless they have a copy of the destination’s private key. The workaround is to setup a chain of TLS segments, but the compromise of a node on the path, would compromise the message. TLS has additional security properties such as cipher suite negotiation, forward secrecy, and certificate revocation. TLS is constantly being updated with the latest security properties, and is widely supported and documented.

I have seen WSS interfaces used at banks and credit card companies that still have ancient mainframes and old middleware, and WSS is always used in combination with TLS, with peer authentication, thus four sets of public/private keys and digital certificates.
versus3

Infor Grid

Several applications of the Infor Grid expose SOAP web services, but I could not find how to set up WS-Security at the Grid level, so I assume it is not supported there, only at the application level; that's OK, as SOAP over HTTPS is sufficient for the Grid's needs.

grid

M3 Web Services (MWS)

The MWS application does have settings to configure WS-Security (X.509 Token policy); that would be useful for an external partner to call M3 with message-level security (otherwise there is always SOAP over HTTPS); I will explore this in a future post:
MWS

M3 Enterprise Collaborator (MEC)

The MEC application on the Grid does not have built-in SOAP web services. But in MEC Partner Admin the MEC developer can set up SOAP. The SOAP client Send Web Service process does not support WS-Security; this is most unfortunate, as this is precisely where I want to set up the secure interfaces with the banks and credit card companies; bummer, I will have to develop my own WS-Security client in Java. On the other hand, the SOAP server WebServiceSyncIn does support WS-Security; I will explore this in a future post:
1

Future work

In future posts, I will:

  • Explore WS-Security in MWS
  • Explore WS-Security in MEC SOAP client
  • Explore WS-Security in MEC SOAP server

Conclusion

That was my high level understanding of WS-Security, at least the WSS X.509 Certificate Token Profile, and how to apply it in the context of M3. I am no expert in WS-Security, but this understanding is sufficient for my needs with M3.

That’s it!

Please subscribe, comment, like, share, and come author with us.

–Thibaud

 

Removing BEGIN statements from Streamfiles with Java

In this example, we have a standard M3 Streamfile (PPS307PF – M3 Put-Away document) that contains separate BEGIN statements (the keyword that signifies a new event to Streamserve) for each item received against a purchase order.  My client would like to see one document with all received items shown as separate lines.  This will save a tremendous amount of paper and allow for easier processing.  With the help of Java and Streamserve filter chains, we can remove unwanted BEGIN statements and quickly achieve a single event document without modification in M3.

In addition to Java knowledge, you’ll also need to have:

  1. Experience with the M3 Output Solution including Streamserve Control Center and Design Center (version 5.6 is used in this example).  Specifically:
    • Creating new Streamserve Applications
    • Modifying, Exporting, and Deploying Streamserve Projects
  2. Administrator access to the Streamserve server.

Now that we’re armed with some basic knowledge, let’s get started.

Step 1:  Examine the Streamfile

Here is my example Streamfile:

*SERVER	server.company.com
*PORT	 22110
*NAME	PPS307PF
*USER	USER
*JOBID	135979716657926744PPS307PF
*MAIL	YES
*TOMAIL1	mnjones@ciber.com
*FRMAIL1	m3apps@hertz.com
*ARCHIVE1	0
*COUNTRY1	US
*ReportFiletype1	PDF
*LANGUAGE	GB
*ZDCONO	111
*ZDDIVI	AAA
*ZDFACI	FAC
*ZDWHLO	DS5
BEGINPPS3070H
*TIME	20160317153921
*LANGUAGE	GB
*ZZCONO	0
*ZZDIVI
*ZZFACI	FAC
*ZZWHLO	WHS
*LAYOUT	LETTER
0HINDCTR	000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000010000000
0HDSUSS	USER
0HMWLNCD	GB
0HPADAT	03-17-16
0HPATIM	15:39:21
0HWDROW3	COMPANY US (111/AAA)
0HZDROW2	M3 13.2 DEV

1MINDCTR	000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000010000000
1MIABUYE	USER
1MIBECVE
1MIBIRCV
1MIBITNO	100000099
1MIBPITD
1MIBPLPN	0000000
1MIBPNLI	010
1MIBPNLS	000
1MIBPRCS
1MIBPROD
1MIBPUNO	8888888
1MIBPURC	USER
1MIBPUUN	EA
1MIBRORC	0
1MIBRORL
1MIBRORN
1MIBSITE
1MIBSUFI
1MIBSUNO	99999999
1MIBWHLO	WHS
1MICREPN	   2111030001
1MMLBANO	7777777
1MMLBREF
1MMLBRE2
1MMMHAC1
1MMMHAC2
1MMMHAC3
1MMMITDS	Excavator
1MMMPPUN
1MMMSLDY
1MMMUNMS
1MMUCOFA
1MMWWHNM	Main Warehouse
1MWTBUYE	Patty Purchaser
1MWTHAC1
1MWTHAC2
1MWTHAC3
1MWTPROD
1MWTPURC	Patty Purchaser
1MWWCAWE
1MWWFUDS	Excavator
1MWWPQTY
1MWWRPQA	         1
1MWWRPQT
1MWWSUNM	Excavator Supply
1MWWWHSL	SERVICE

0NINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000000000000
0NIBGRMT	DP0
0NP7QCLV	4
0NWTGRMT	Direct Put-Away (No Doc)
0NWWNEAC	No more activity
0NWWQCQT	

CFINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000000000000
CFWWTX60	MAKE BOBCAT

CFINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000000000000
CFWWTX60	MODEL D12A
BEGINPPS3070H
*TIME	20160317153921
*LANGUAGE	GB
*ZZCONO	0
*ZZDIVI
*ZZFACI	FAC
*ZZWHLO	WHS
*LAYOUT	LETTER
0HINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000010000000
0HDSUSS	USER
0HMWLNCD	GB
0HPADAT	03-17-16
0HPATIM	15:39:21
0HWDROW3	COMPANY US (111/AAA)
0HZDROW2	M3 13.2 DEV

1MINDCTR	000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000010000000
1MIABUYE	USER
1MIBECVE
1MIBIRCV
1MIBITNO	100000099
1MIBPITD
1MIBPLPN	0000000
1MIBPNLI	010
1MIBPNLS	000
1MIBPRCS
1MIBPROD
1MIBPUNO	8888888
1MIBPURC	USER
1MIBPUUN	EA
1MIBRORC	0
1MIBRORL
1MIBRORN
1MIBSITE
1MIBSUFI
1MIBSUNO	99999999
1MIBWHLO	WHS
1MICREPN	   2111030002
1MMLBANO	8191811
1MMLBREF
1MMLBRE2
1MMMHAC1
1MMMHAC2
1MMMHAC3
1MMMITDS	Excavator
1MMMPPUN
1MMMSLDY
1MMMUNMS
1MMUCOFA
1MMWWHNM	Main Warehouse
1MWTBUYE	Patty Purchaser
1MWTHAC1
1MWTHAC2
1MWTHAC3
1MWTPROD
1MWTPURC	Patty Purchaser
1MWWCAWE
1MWWFUDS	Excavator
1MWWPQTY
1MWWRPQA	         1
1MWWRPQT
1MWWSUNM	Excavator Supply
1MWWWHSL	SERVICE

0NINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000000000000
0NIBGRMT	DP0
0NP7QCLV	4
0NWTGRMT	Direct Put-Away (No Doc)
0NWWNEAC	No more activity
0NWWQCQT	

CFINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000000000000
CFWWTX60	MAKE BOBCAT

CFINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000000000000
CFWWTX60	MODEL D12B

CFINDCTR	000000000000000000000000000000000000000000000000001000000000000000001000000000000000000000000000000
CFWWTX60	Air Condition:Yes

We want to process PPS307PF as one document with multiple lines as opposed to the standard document which has multiple documents (indicated by the multiple BEGINPPS3070H statements in the Streamfile) each with a single line.

There is only one type of BEGIN statement in this Streamfile, but some Streamfiles have more than one, so we’ll take that into account in the code.

Step 2:  Create Java filter to remove the unwanted BEGIN statements

In the Streamfile above we found two instances of BEGINPPS3070H.  In order to remove the second (and any subsequent appearances) while keeping the first, we’ll create a class that will remove these lines as the Streamfile gets passed into Streamserve.  We’ll also take into account that we may need to filter Streamfiles other than PPS307, so we’ll use a regular expression to find the instances of BEGIN in the file that end in ‘H’ (This way we’re excluding 0A BEGIN statements while including 0H, 1H, AH, etc.).

import java.io.*;
public class SingleEvent {
 public static void main(String[]args) {
  BufferedReader br = null;
  int counter = 0;
  String regex = "BEGIN[a-zA-Z0-9]+H$";
  try {
   String sCurrentLine;
   br = new BufferedReader(new InputStreamReader(System.in));
   while ((sCurrentLine = br.readLine()) != null) {
    if (sCurrentLine.matches(regex)) {
     counter++;
     if (counter > 1) {
      sCurrentLine = "***BEGIN STATEMENT SKIPPED***";
     }
    }
    System.out.println(sCurrentLine);
   }
  } catch (IOException e) {
   e.printStackTrace();
  }
  finally {
   try {
    if (br != null)
     br.close();
   } catch (IOException ex) {
    ex.printStackTrace();
   }
  }
 }
}

Our code creates a class called SingleEvent which reads the incoming Streamfile one line at a time and checks the current line to see if it matches our regular expression.  If so, a counter is increased and if the counter is larger than one, it replaces the current line with new text.  This text can be anything other than BEGINPPS3070H as Streamserve is looking for the BEGIN statements to create new documents.  Each (new) line is then written to the console and Streamserve picks up each of the lines before processing the Streamfile.

When we replace the BEGIN statements with alternative text (in this case the text “***BEGIN STATEMENT SKIPPED***”), Streamserve rejects the line, notes it in the Control Center log, and moves to the next line. Later in the process, we’ll see our skipped lines in the log.

Next, compile the code and create a SingleEvent.jar using your favorite tool.  Copy the SingleEvent.jar to an accessible folder on the Streamserve server.  Now we can begin prepping Streamserve to use Java.
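For example, one way to compile and package it (assuming a JDK on the PATH; the jar needs a Main-Class manifest entry so that "java.exe -jar" can launch it, which the e flag of the jar tool provides):

javac SingleEvent.java
jar cfe SingleEvent.jar SingleEvent SingleEvent.class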

Step 3:  Prep Streamserve to use Java

Because we’ll be adding a filter on both the TCP/IN port and DirScan folder, it’s best to create a new Streamserve Application specifically for our new SingleEvent Streamfiles.  We don’t want to place this filter on our base Streamserve Application and filter all Streamfiles.   Follow the standard steps to create the new application in the correct domain in Control Center.  For this example, we named our application “Dev_SingleEvent_22104” to indicate the domain, application purpose, and incoming port number.   Next, right click on the application and select Java Configuration.

beginsts_31

When the Java Configuration screen opens, we see the properties list.  Click in the Value box and select the appropriate vendor.  In my case, this was Oracle.

beginsts_32

Next, we’ll need to import our SingleEvent.jar and add the new filter chain in the Streamserve Global_Resourceset. In the Global_Resourceset, right click and select “Import…”.

beginsts_33

Navigate to the folder on the Streamserve server where you stored SingleEvent.jar.  Highlight SingleEvent.jar and click Open.

Streamserve will automatically load the correct Resource Type (Java).

beginsts_34

Click OK and your new Java resource will appear in the Global_Resourceset.

Now, let’s create our new filter chain.  The filter chain is the heart of this transaction as Streamserve will filter the Streamfile through the Java program as it comes into the TCP port or is dropped into the DirScan folder.

First, navigate to the Filter Chain folder in the Global_Resourceset, right click and select New > Filter Chain.

beginsts_35

We’ll name the filter chain “Platform – SingleEvent”.  It will be used at both the TCP/IP and Directory input connectors.

beginsts_36

Then, right click and select Start Editor.

There will be an existing Codepage Filter already in place.  We won't make any changes here.  From the Filter Chain menu, select Add Filter > External Filter.  Type "java.exe -jar ..\data\java\SingleEvent.jar" in the Value box, save and close.

beginsts_37

Navigate to the Platform next, where we’ll copy/paste the standard TCP_MOVEX and DirScan_MOVEX input connectors and rename them as “SingleEvent_TCP_MOVEX” and “SingleEvent_DirScan_MOVEX”, respectively.  Make the necessary changes to ports and input folders for the physical platform you’re using.

Now, we’ll add the Platform – SingleEvent filter chain to each of these input connectors. In the Platform, right click on the SingleEvent_TCP_MOVEX input connector and select Settings.

beginsts_38

From the logical platform, click on the “Filter chain” button and add the new filter chain.

beginsts_39

Repeat these steps for the SingleEvent_DirScan_MOVEX connector.

Verify all the changes, save and export the project and then deploy in Control Center to your new application.

Step 4:  Test

To test, drop the Streamfile into the DirScan folder in the Management Gateway and see the Filter Chain at work in the Control Center log. Here is a snip from the log:

beginsts_41

We can see our "Unknown line (ignored):***BEGIN STATEMENT SKIPPED***" message line and each duplicate header field that has been ignored.  We also see the "ExternFilter exitCode:0" message indicating the filter completed successfully.

Here is our new document with one event and multiple lines:

beginsts_42

Final Thoughts

We’ve successfully written a Java class that works as a filter to remove unwanted BEGIN statements from a Streamfile, setup Streamserve to use Java and our new filter, and processed a test Streamfile through the filter.  Let me know how it works for your project and any tweaks you made for it to deliver the document you want.

Special thanks to the posters at StreamShare whose excellent variety of posts on filtering and using Java with Streamserve allowed me to combine them to achieve a solution.  If you work with Streamserve, make sure to check out the forums.

Now that we’ve seen how Java works with Streamserve, next time let’s discuss using Java to connect to M3 APIs and add information to a Streamserve PageOut.

Good luck,

Neil

Calling SOAP web services from MEC

Here is a primer on how to invoke SOAP web services from Infor M3 Enterprise Collaborator (MEC) using the Send Web Service process, where MEC is the SOAP client calling SOAP servers.

MEC process

I am using MEC version 11.4.3.

We find the Send Web Service process in Partner Admin > Agreement > Processes:
1

The properties are the following:
1_

The MEC Partner Admin Tool User Guide does not have much information:
3

The MEC training workbook does not have information either.

I decompiled MEC and found the Java class com.intentia.ec.server.process.SendWebServiceProcess. I was expecting it to use a legitimate SOAP client such as Apache CXF, but it uses a mere java.net.HttpURLConnection:
2

Consequently, this process does little more than the HTTPOut process, albeit with the additional SOAP action property.

Problems

There are many problems with this type of implementation:

  • It does not validate the message against the web service's WSDL or XML Schemas, nor even against basic XML syntax
  • It does not have a factory to create client stubs
  • It is byte-based (we could send whatever content) whereas SOAP clients are more RPC-like with setters and getters for parameters
  • It is HTTP-centric, not SOAP-centric
  • It is restricted to HTTP whereas SOAP is agnostic to the underlying transport protocol, e.g. SOAP supports FTP
  • It does not support WS-Security for XML Encryption and XML Signature
  • It does not support the use of HTTP proxy
  • Etcetera

Anyway, let’s give it a try.

Sample web service

I have a sample web service from TMW SystemsLink, a transportation management software. It is available on my customer's network. It is only set up for HTTP (not HTTPS), without authentication, thus it is insecure, but it is easy to use for illustration purposes.

First, I ensure I can get to the WSDL:
5_

Test with SoapUI

Then, I test the web service with a SOAP client such as SoapUI:

Then, I get the SOAP action and content type to be used later (we can get them from the WS-A and http log tabs of SoapUI or from Fiddler):

Test with Java – optional

Optionally, I test the web service with the URLConnection in Java, from the same location and JRE as MEC (compile and run with: javac Test.java && java -cp . Test):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.URL;
import java.net.URLConnection;

class Test {
	public static void main(String[] args) throws Exception {
		URL url = new URL("http://tmwsl/TMWSystemsLink/APIWCFServices.svc");
		URLConnection con = url.openConnection();
		con.setDoOutput(true);
		con.setRequestProperty("Content-Type", "text/xml;charset=UTF-8");
		con.setRequestProperty("SOAPAction", "http://tempuri.org/IAPIWCFServices/RetrieveCarrier");
		String data = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:tem=\"http://tempuri.org/\" xmlns:tmw=\"http://schemas.datacontract.org/2004/07/TMWSystems.SystemsLink.APIClasses\"><soapenv:Header/><soapenv:Body><tem:RetrieveCarrier><tem:criteria><tmw:CarrierID>JONCOL</tmw:CarrierID></tem:criteria></tem:RetrieveCarrier></soapenv:Body></soapenv:Envelope>";
		con.setRequestProperty("Content-Length", "" + data.length());
		OutputStreamWriter out = new OutputStreamWriter(con.getOutputStream());
		out.write(data);
		out.close();
		BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
		String s;
		while ((s = in.readLine()) != null) {
			System.out.println(s);
		}
		in.close();		
	}
}

The result is the SOAP response:
7

Test in Partner Admin

Now, I am ready to test in Partner Admin. I create a simple test agreement with the Send Web Service process configured to the end point address, content type, and SOAP action:
8

Then, I add a simple detection such as the default DiskIn, I reload the communication channel in the MEC Grid Management Pages, I create a file with the sample SOAP request, I drop the file in the DiskIn folder, I wait for MEC to process the file, and I check the result; the usual steps in MEC.

Result

Here is the result in MEC Grid Management Pages, it successfully finished sending the SOAP request, and getting the SOAP response:
9

Here is the resulting POST HTTP request: 10

At this point we can use Partner Admin and MEC Mapper to transform an M3 MBM and generate the SOAP request, we can process the SOAP response, etc.

Conclusion

That was an illustration of how to call SOAP web services from Infor M3 Enterprise Collaborator (MEC) using the Send Web Service process, for MEC to be a SOAP client calling SOAP servers. Despite "Web Service" in its name, the process is deceptively not very SOAP-oriented, but we can manage with it.

Future work

In future posts I will:

  • Explore how to securely call a web service over HTTPS (not HTTP); if I use a custom server certificate, I have to setup the JRE keystore, or explore the Partner Admin > Manage > Certificate Keystores:
    future1
  • Explore how to use the Partner Admin > Manage > Web Service Definitions; I do not know what this is for:
    future2
  • Explore how to use the Manage > Advanced > WebServiceSyncIn/Out; I think that is for MEC to be a SOAP server, accept requests, and serve responses:
    future3 future3_

That’s it.

Please leave a comment, click Like, click Follow to subscribe, share around you, and come write the next blog post.