Authorization hierarchies for approval flows in M3

Today I will illustrate authorization hierarchies for approval flows in Infor M3.

Approval flows

A common requirement in M3 projects is to implement approval flows, for example for purchase orders: a buyer creates a purchase order of a certain amount; one or more approvers must review the order, one after the other, and either approve it or reject it for some reason; and the approvers are selected from a hierarchy of managers, each with a maximum order amount.

There are many variations of these approval flows, each specific to the requirements of the M3 project. The simplest approval flow has a single approver. Approval flows get complex quickly, with many decisions to make, many levels of approval, many design trade-offs to consider, and all sorts of scenarios to support.

Infor Process Automation

To implement the approval flows we use Infor Process Automation (IPA); its predecessor, Lawson ProcessFlow Integrator (PFI), works as well. As for Infor ION, it is the new standard for implementing M3 approval flows, but its use with M3 is still young, and it does not yet have as many features as IPA.

M3 Purchase Authority – PPS235

To store the hierarchy of approvers and their maximum order amounts, we use the program M3 Purchase Authority – PPS235. It stores the user (AURE), the maximum order amount this user is authorized to approve (MPOA), and the manager who is the next level of authorization (MNGR). Orders that exceed the user's amount require authorization by a user with authorization rights for a higher amount.


Huh?

I am not an expert on PPS235, but it looks like it does not enforce integrity and it is not in normal form, which can result in logical inconsistencies.

Indeed, the hierarchy of managers and the maximum order amounts may contradict each other. For instance, PPS235 allowed me to set a user whose maximum order amount was higher than that of their manager, in which case routing the approval to that manager results in a logical anomaly.

Also, there is a field to set the user’s authorization level (AUTL) which results in an alternate hierarchy of approvers, and PPS235 allowed me to enter illogical values there too.

That results in several possible hierarchies – one based on the managers, one based on the maximum order amounts, one based on the authorization levels – all possibly contradicting each other.

Also, PPS235 erroneously allows cycles in the hierarchy. For instance, it allowed me to set a user to be the manager of their own manager. This will cause an infinite loop in the graph traversal.

Also, users can be unreachable if there does not exist a connection to that user in the hierarchy.

There is probably a logical explanation for these design decisions. Meanwhile, you must ensure the integrity of your data before proceeding.

MPAUTD

The data of PPS235 is stored in table MPAUTD – Authorization distribution as an adjacency list of users and their managers, for example:

AURE     MNGR
Marie    Eric
Keith    Daniel
Eric     Daniel
Charles  Daniel
John     Daniel
Daniel   Joe
Joe      Jeff

The resulting tree looks something like this (I use Graphviz to visualize the hierarchy and check that it is correct).
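For reference, here is a minimal DOT file for the adjacency list above that reproduces the visualization (my reconstruction; the original post shows the rendered image):

digraph hierarchy {
    rankdir = BT;    // draw edges from each user up to their manager
    Marie -> Eric;
    Keith -> Daniel;
    Eric -> Daniel;
    Charles -> Daniel;
    John -> Daniel;
    Daniel -> Joe;
    Joe -> Jeff;
}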

To retrieve the hierarchy of managers for a given user, we traverse the adjacency list recursively using a SQL common table expression (CTE), for example for user Marie and a purchase order in company 100 (note that SQL Server and DB2 require UNION ALL in the recursive part):

WITH CTE AS (
    SELECT ATCONO, ATAURE, ATMPOA, ATMNGR
    FROM MPAUTD
    WHERE ATCONO=100 AND ATAURE='Marie'
    UNION ALL
    SELECT M.ATCONO, M.ATAURE, M.ATMPOA, M.ATMNGR
    FROM MPAUTD M
    JOIN CTE C ON M.ATCONO=C.ATCONO AND M.ATAURE=C.ATMNGR
)
SELECT ATAURE, ATMPOA FROM CTE
ORDER BY ATMPOA

That returns the chain of approvers and their maximum order amounts, starting at the specified buyer. With UNION ALL, a cycle in the data will recurse until the database's maximum recursion depth is reached, which is one more reason to verify the integrity of the data first.
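For the sample data above, the chain for Marie is Marie → Eric → Daniel → Joe → Jeff, something like this (the amounts are illustrative; they are not shown in the adjacency list above):

ATAURE   ATMPOA
Marie      1000
Eric       5000
Daniel    20000
Joe       50000
Jeff     100000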

You can now create a loop of UserAction activity nodes in IPA to iterate through that hierarchy. You will need to add the SQL activity node and everything else that may be needed, and you can also use the newer ForEach activity node instead of a loop with an if-then-else Branch.

Conclusion

That was a quick illustration of authorization hierarchies for approval flows for purchase orders in M3: how to store the hierarchy of approvers in PPS235, how to recursively query it from MPAUTD, and how to iterate over it with a loop of UserAction activity nodes.

(I meant to write about this many years ago. I finally got around to doing it after I learned how to do the recursive SQL portion for a customer last week. I hope it helps you.)

Custom message process in MEC

Here is an unofficial guide on how to create a custom message process in Infor M3 Enterprise Collaborator (MEC).

What is a message process?

A message process in MEC is one of the steps in a process flow. Technically speaking, it is a Java class that reads a stream of bytes as input, does some processing on it, and writes a stream of bytes as output, for example transforming a flat file to XML, applying XSLT to an XML document, removing an envelope, archiving the message, or making a SOAP request. Message processes are chained together in a partner agreement in the Partner Admin Tool.


Documentation

The Partner Admin Tool User Guide has some information about message processes.

Java classes

The message processes are Java classes located in MEC’s core library:

D:\Infor\LifeCycle\host\grid\M3\grids\M3\applications\MECSRVDEV\MecServer\lib\ec-core-x.y.z.jar

Each message process is a Java class in package com.intentia.ec.server.process.

Each message process may have a configuration dialog box in package com.intentia.ec.partneradmin.swt.agreement.

Database

The message processes are declared in the MEC database, in table PR_Process.

Java code

To create your own message process follow these steps:

  1. Use the following skeleton Java code: fill in your own code, set the file extension for the output message (in my example it is .something), use the in and out streams to read and write the message as needed, and optionally use the cat logger to write debug information to the log file:
    package somewhere;
    
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.log4j.Category;
    import com.intentia.ec.server.process.AbstractProcess;
    import com.intentia.ec.server.process.ProcessException;
    
    public class SomeProcess extends AbstractProcess {
    
      private static Category cat = Category.getInstance(SomeProcess.class.getName());
    
      public String getState() {
        return "SomeProcess";
      }
    
      public boolean hasOutput() {
        return true;
      }
    
      public String getFileExtension() {
        return ".something";
      }
    
      public void process(InputStream in, OutputStream out) throws ProcessException {
        // your code here
        cat.info("processing...");
      }
    
    }

    Note: I do not have a sample code for the dialog box, but you can get inspiration from one of the existing classes in package com.intentia.ec.partneradmin.swt.agreement.

  2. Compile the Java code with:
    javac -extdirs D:\Infor\LifeCycle\host\grid\M3\grids\M3\applications\MECSRV\MecServer\lib\ SomeProcess.java
  3. Copy the resulting Java class to the classpath of the MEC server and Partner Admin Tool, in the folder corresponding to the package (in my case it was package somewhere):
    D:\Infor\LifeCycle\host\grid\M3\grids\M3\applications\MECSRV\MecServer\custom\somewhere\SomeProcess.class
    D:\Infor\MECTOOLS\Partner Admin\classes\somewhere\SomeProcess.class

    Note: You can probably also put the Java class in a JAR file; to be tested.

  4. In the MEC database, add the process to the PR_Process table, where ?? is a new ID, for example 27:
    INSERT INTO MECDBDEV.dbo.PR_Process (ID, Name, Description, ConfigurationClass, WorkClass, Standard) VALUES (??, 'Thibaud Process', 'My custom message process', null, 'somewhere.SomeProcess', 1)
  5. In the Infor Grid, restart the MECSRV application to pick up the new Java class.
  6. In the Partner Admin Tool, create a partner agreement and add the message process.
  7. Reload the MEC server to pick up the new agreement.
  8. Run the partner agreement; for example, I have a channel detection with an HTTPIn receive channel listening on port 8084, and I make an HTTP request to that port number to trigger the partner agreement.
  9. Check the message received (.rcv) and the message produced (in my case it has extension .something); for that, you will need an Archive process in your partner agreement before and after your custom process.
  10. You can also open the files directly in the folder specified.
  11. And if you used the logger, you can check your logs in the Event tab.

Real-world example

In my case, I needed a custom message process for the following real-world scenario. My current customer does procurement PunchOut with its partners using cXML, an old protocol from 1999. In that protocol, there is a step (PunchOutOrderMessage) that sends an XML document in a hidden field cxml-urlencoded of an HTML form. That results in a POST HTTP request (to MEC) with Content-Type: application/x-www-form-urlencoded, where the XML document is URL-encoded as the value of parameter cxml-urlencoded in the request body. Unfortunately, MEC does not have a message process to extract a specific parameter value from a message and URL-decode it. So I developed my custom message process as explained above, to take the request body, extract the desired parameter value, URL-decode it, and output the resulting XML; a sketch follows. I may write a detailed post about it some day, maybe not.
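Here is a minimal sketch of what the process method could look like for that scenario, to drop into the skeleton above (my reconstruction, not the actual code; it assumes the whole input message is the form-encoded request body, and that ProcessException has a String constructor):

public void process(InputStream in, OutputStream out) throws ProcessException {
  try {
    // read the entire input message into a string
    java.io.ByteArrayOutputStream buffer = new java.io.ByteArrayOutputStream();
    byte[] chunk = new byte[4096];
    for (int n; (n = in.read(chunk)) != -1; ) {
      buffer.write(chunk, 0, n);
    }
    String body = buffer.toString("UTF-8");
    // scan the key=value pairs for the desired parameter
    for (String pair : body.split("&")) {
      if (pair.startsWith("cxml-urlencoded=")) {
        // URL-decode the value and output the resulting XML
        String xml = java.net.URLDecoder.decode(pair.substring("cxml-urlencoded=".length()), "UTF-8");
        out.write(xml.getBytes("UTF-8"));
        return;
      }
    }
    throw new ProcessException("parameter cxml-urlencoded not found"); // constructor signature assumed
  } catch (java.io.IOException e) {
    throw new ProcessException(e.toString()); // constructor signature assumed
  }
}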

Conclusion

That was a guide on how to create a custom message process in MEC, with some Java development, to take an input message in a partner agreement, do some custom processing on it, and produce an output message. This is an unofficial solution that I figured out by decompiling and hacking MEC. There may be a simpler solution, I do not know.

That’s it! Thank you for supporting this blog, please like, subscribe, share around you, and come author the next blog post with us.

Procurement PunchOut with cXML

Hi colleagues. It has been a while since I posted anything. Today I will write a quick post as part of an interface I am currently developing to do procurement PunchOut using cXML, an old protocol from 1999, for my customer and its suppliers. This will eventually end up in their Infor M3 and M3 Enterprise Collaborator implementation.

I only needed to test the Message Authentication Code (MAC) so I wrote a quick prototype in Python.

The cXML User’s Guide describes the MAC algorithm, which uses HMAC-SHA1-96.

Here is my implementation in Python:

import base64
import hashlib
import hmac
# fromDomain, fromIdentity, senderDomain, senderIdentity, password,
# creationDate, expirationDate, and the parsed xml element come from the
# cXML document; see the full source code linked below

# Normalize the values
data = [fromDomain.lower(),
        fromIdentity.strip().lower(),
        senderDomain.lower(),
        senderIdentity.strip().lower(),
        creationDate,
        expirationDate]

# Concatenate the UTF-8-encoded byte representation of the strings, each followed by a null byte (0x00)
data = b''.join([(bytes(x, "utf-8") + b'\x00') for x in data])

# Calculate the Message Authentication Code (MAC)
digest = hmac.new(password.encode("utf-8"), data, hashlib.sha1).digest()

# Truncate to 96 bits (12 bytes)
truncated = digest[0:12]

# Base-64 encode, and convert bytearray to string
mac = str(base64.b64encode(truncated), "utf-8")

# Set the CredentialMac in the XML document
credentialMac = xml.find("Header/Sender/Credential").find("CredentialMac")
credentialMac.attrib["creationDate"] = creationDate
credentialMac.attrib["expirationDate"] = expirationDate
credentialMac.text = mac

Here is my resulting MAC; it matches that of the cXML User’s Guide, good.

I posted the full source code in my GitHub repository at https://github.com/M3OpenSource/cXML/blob/master/Test.py .

That’s it!

Thank you for continuing to support this blog.

Experimenting with middle-side modifications

With Infor M3, there are server-side modifications, client-side modifications, and the unexplored middle-side modifications. I will experiment with servlet filters for the M3 UI Adapter.

Modification tiers

There are several options to modify M3 functionality:

  • Server-side modifications are M3 Java mods developed with MAK; they propagate to all tiers, including M3 API, but they are often avoided due to the maintenance nightmare during M3 upgrades, and they are banned altogether from Infor CloudSuite multi-tenant. There are also custom lists made with CMS010, which are great: power users simply configure them without any programming, and they survive M3 upgrades.
  • Client-side modifications for Smart Office are Smart Office scripts in JScript.NET, Smart Office SDK features, applications and MForms extensions in C#, Smart Office Mashups in XAML, and personalizations with wizards. They do not affect M3 upgrades, but they apply only to Smart Office. Client-side modifications for H5 Client are H5 Client scripts in JavaScript and web mashups converted from XAML to HTML5. Likewise, they do not affect M3 upgrades, but they apply only to H5 Client.
  • Middle-side modifications are servlet filters for the M3 UI Adapter. They propagate to all user interfaces – Smart Office AND H5 Client – but this is unexplored and perilous. In the old days, IBrix lived in this tier.

M3 UI Adapter

The M3 UI Adapter (MUA), formerly known as M3 Net Extension (MNE), is the J2EE middleware that talks the M3 Business Engine protocol (MEX?) and serves the user interfaces. It was written mostly single-handedly by norpe. It is a simple and elegant architecture that runs As Fast As Fucking Possible (TM) and that is as robust as The Crazy Nastyass Honey Badger [1]. The facade servlet is MvxMCSvt. All the userids/passwords and all the commands, for all interactive programs and all users, go through here. It produces an XML response that Smart Office and H5 Client use to render the panels. The XML includes the options, the lists, the columns, the rows, the textboxes, the buttons, the positioning, the panel sequence, the keys, the captions, the help, the group boxes, the data, etc.

For example, starting CRS610/B involves:
com.intentia.mc.servlet.MvxMCSvt.doTask()
com.intentia.mc.command.MCCmd.execute()
com.intentia.mc.command.RunCmd.doRunMovexProgram()
com.intentia.mc.engine.ProtocolEngine.startProgram()

The following creates a list with columns:

// excerpt from decompiled MNE code; the variables come from the surrounding method
import com.intentia.mc.metadata.view.ListColumn;
import com.intentia.mc.metadata.view.ListView;

ListView listView = new ListView();
ListColumn listColumn = new ListColumn();
listColumn.setWidth(length);
listColumn.setConstraints(constraints);
listColumn.setCaption(new Caption());
listColumn.setConditionType(1);
listColumn.setHeader(headerSplitterAttr);
listColumn.setName("col" + Integer.toString(columnCount));
listColumn.setJustification(1);
listColumns.add(listColumn);
listView.addFilterField(posField, listColumn);
listView.setListColumns((ListColumn[])listColumns.toArray(new ListColumn[0]));

Here is an excerpt of the XML response for CRS610/B (captured with Fiddler) showing the list columns and a row of data.

Experiment

This experiment involves adding a servlet filter to MvxMCSvt to transform the XML response. Unfortunately, MNE is a one-way function: it produces XML in a StringBuffer, but it cannot conversely parse the XML back into its data structures. Thus, we have to transform the XML ourselves. I will not make any technical recommendations for this because it is an experiment. You can refer to the existing MNE filters for examples of how to use the XML Pull Parser (xpp3-1.1.3.4.O.jar) that is included in MNE. And you can use com.intentia.mc.util.CstXMLNames for the XML tag names.

To create a servlet filter:

/*
D:\Infor\LifeCycle\host\grid\XYZ\runtimes\1.11.47\resources\servlet-api-2.5.jar
D:\Infor\LifeCycle\host\grid\XYZ\grids\XYZ\applications\M3_UI_Adapter\lib\mne-app-10.2.1.0.jar
javac -cp servlet-api-2.5.jar;mne-app-10.2.1.0.jar TestFilter.java
*/

package net.company.your;

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import com.intentia.mc.util.Logger;

public class TestFilter implements Filter {

    private static final Logger logger = Logger.getLogger(TestFilter.class);

    public void init(FilterConfig filterConfig) throws ServletException {}

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        if (logger.isDebugEnabled()) {
            logger.debug("Hello, World");
        }
        chain.doFilter(request, response);
    }

    public void destroy() {}

}
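The filter above only logs. To actually transform the XML response, the filter has to buffer what MvxMCSvt writes before it reaches the client. Here is a minimal sketch of that idea (untested; it assumes MvxMCSvt writes its response through getWriter, so you may have to wrap getOutputStream as well):

import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// buffers the response body so the filter can rewrite it before sending it
class BufferedResponse extends HttpServletResponseWrapper {

    private final CharArrayWriter buffer = new CharArrayWriter();
    private final PrintWriter writer = new PrintWriter(buffer);

    public BufferedResponse(HttpServletResponse response) {
        super(response);
    }

    public PrintWriter getWriter() {
        return writer;
    }

    public String getBody() {
        writer.flush();
        return buffer.toString();
    }
}

The doFilter method would then become:

public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
    BufferedResponse wrapped = new BufferedResponse((HttpServletResponse) response);
    chain.doFilter(request, wrapped);
    String xml = wrapped.getBody();
    // transform the XML here, e.g. with the XML Pull Parser mentioned above
    response.getWriter().write(xml);
}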

Add the servlet filter to the MNE deployment descriptor at D:\Infor\LifeCycle\host\grid\XYZ\grids\XYZ\applications\M3_UI_Adapter\webapps\mne\WEB-INF\web.xml:

<filter>
    <filter-name>TestFilter</filter-name>
    <filter-class>net.company.your.TestFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>TestFilter</filter-name>
    <servlet-name>MvxMCSvt</servlet-name>
</filter-mapping>

Then, reload the M3UIAdapterModule in the Infor Grid. This will destroy the UI sessions, and users will have to log out and log back on to M3.

Optionally, set the log level of the servlet filter to DEBUG.

Limitations

MvxMCSvt is the single point of entry. If you fuck it up, it will affect all users, all programs, on all user interfaces. So this experiment is currently a Frankenstein idea that would require a valid business case, a solid design, and great software developers to make it into production.

Also, changes to the web.xml file will be overridden by the next software update.

Discussion

Is this idea worth pursuing? Is this another crazy idea? What do you think?

Call M3 API from Event Analytics rules

Here is how to call M3 API from a Drools rule in Infor Event Analytics; this is a common requirement.

Sample scenario

Here is my sample business case.

When a user changes the status of an approval line in OIS115 (table OOAPRO), I have to find the order type (ORTP) of the order to determine which approval flow to trigger in Infor Process Automation (IPA). But ORTP is not part of the OOAPRO table, so I must first make a call to OIS100MI.GetHead.

I could call M3 in the approval flow, but false positives would generate noise in the WorkUnits.

Is it possible?

I asked Nichlas Karlsson, Senior Architect – Business Integration at Infor, if it was possible to call M3 API directly in the Drools rule. He is one of the original developers of Event Hub and Event Analytics and very helpful with my projects (thank you), although he does not work with these products any longer. He responded that Event Analytics is generic software with no specific connection to M3, so unfortunately this is not possible out of the box; however, it is a common requirement. He said I could solve it by using MvxSockJ to call M3 APIs from my own Java class, included in a JAR that I put in the lib folder. He added not to forget that the execution time for all rules within a session must be less than the proxy timeout, i.e. 30 s. And I would also need to manage host, port, user, password and other properties in some way.

Instead of MvxSockJ I will use the MI-WS proxy of the Grid as illustrated in my previous post.

Sample Drools rule

Here is my sample Drools rule that works:

package com.lawson.eventhub.analytics.drools;

import java.util.List;
import com.lawson.eventhub.analytics.drools.model.Event;
import com.lawson.eventhub.analytics.drools.model.HubEvent;
import com.lawson.eventhub.EventOperation;
import com.lawson.grid.node.Node;
import com.lawson.grid.proxy.access.SessionId;
import com.lawson.grid.proxy.access.SessionProvider;
import com.lawson.grid.proxy.access.SessionUtils;
import com.lawson.grid.proxy.ProxyClient;
import com.lawson.grid.registry.Registry;
import com.lawson.miws.api.data.MIParameters;
import com.lawson.miws.api.data.MIRecord;
import com.lawson.miws.api.data.MIResult;
import com.lawson.miws.api.data.NameValue;
import com.lawson.miws.proxy.MIAccessProxy;

declare HubEvent
	@typesafe(false)
end

rule "TestSubscription"
	@subscription(M3:OOAPRO:U)
	then
end

rule "TestRule"
	no-loop
	when
		event: HubEvent(publisher == "M3", documentName == "OOAPRO", operation == EventOperation.UPDATE, elementOldValues["STAT"] == 10, elementValues["STAT"] == "20")
	then
		// connect to MI-WS
		Registry registry = Node.getRegistry();
		SessionUtils su = SessionUtils.getInstance(registry);
		SessionProvider sp = su.getProvider(SessionProvider.TYPE_USER_PASSWORD);
		SessionId sid = sp.logon("Thibaud", "******".toCharArray());
		MIAccessProxy proxy = (MIAccessProxy)registry.getProxy(MIAccessProxy.class);
		ProxyClient.setSessionId(proxy, sid);

		// prepare input parameters
		MIParameters p = new MIParameters();
		p.setProgram("OIS100MI");
		p.setTransaction("GetHead");
		MIRecord r = new MIRecord();
		r.add("CONO", event.getElementValue("CONO"));
		r.add("ORNO", event.getElementValue("ORNO"));
		p.setParameters(r);

		// execute and get output
		MIResult s = proxy.execute(p);
		List<MIRecord> records = s.getResult(); // all records
		if (records.isEmpty()) return;
		MIRecord record = records.get(0); // zeroth record
		List<NameValue> nameValues = record.getValues(); // all output parameters
		String ORTP = nameValues.get(4).getValue(); // PROBLEM: somehow nameValues.indexOf("X") returns -1

		// make decision
		if (ORTP.equals("100")) event.postEvent("ApprovalFlowA");
		if (ORTP.equals("200")) event.postEvent("ApprovalFlowB");
		if (ORTP.equals("300")) event.postEvent("ApprovalFlowC");
end

Note: You will need to drop foundation-client-10.1.1.3.0.jar in the lib folder of Event Analytics and restart the application.

Limitations

There are some limitations with this code:

  • The execution time must be less than the 30s proxy timeout
  • Limit the number of returned columns; there is currently a bug with Serializable in ColumnList, see Infor Xtreme incident 8629267
  • If the M3 API returns an error message, it will hit the same Serializable bug in MITransactionException, see Infor Xtreme incident 8629267
  • Somehow NameValue.indexOf(name) always returned -1 during my tests, probably a bug in the class, so I had to hard-code the index of the output field (yikes); see the helper sketch after this list for a possible workaround
  • I do not know how to avoid the logon to M3 with user and password to get a SessionId; I wish there was a generic SYSTEM account that Event Analytics could use
  • For simplicity of illustration I did not verify all the null pointers; you should do the proper verifications
  • The code may throw MITransactionException, ProxyException and IndexOutOfBoundsException
  • You can move the Java code to a separate class in the lib folder; for that refer to my previous post
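Regarding the hard-coded index, a small helper that scans the output parameters by name would avoid relying on NameValue.indexOf. This is a sketch; it assumes NameValue exposes getName() and getValue() accessors (getValue() is used in the rule above, getName() is my assumption):

import java.util.List;
import com.lawson.miws.api.data.NameValue;

// look up an output parameter by name instead of by index
static String valueOf(List<NameValue> nameValues, String name) {
	for (NameValue nv : nameValues) {
		if (name.equals(nv.getName())) { // accessor name assumed
			return nv.getValue();
		}
	}
	return null;
}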


That’s it. Let me know what you think in the comments below.

How to call M3 API from the Grid application proxy

Here is how to call M3 API using the MI-WS application proxy of the Infor Grid.

This is useful if we want to benefit from what is already set up in the Grid and not have to deal with creating our own connection to the M3 API server, with the Java library, hostname, port number, userid, password, connection pool, etc.

Note: For details on what Grid application proxies are, refer to the previous post.

MI-WS application proxy

The MI-WS application is part of the M3 Business Engine Foundation. We will need foundation-client.jar to compile our classes.

Step 1. Logon to the Grid

First, log in to the Grid from your application and get a SessionId and, optionally, a GridPrincipal.

From a Grid application:

import com.lawson.grid.proxy.access.GridPrincipal;
import com.lawson.grid.proxy.access.SessionController;
import com.lawson.grid.proxy.access.SessionId;

// get session id
SessionId sid = ??? // PENDING
GridPrincipal principal = ??? // PENDING;

From a client application outside the Grid:

import com.lawson.grid.proxy.access.GridPrincipal;
import com.lawson.grid.proxy.access.SessionId;
import com.lawson.grid.proxy.access.SessionProvider;
import com.lawson.grid.proxy.access.SessionUtils;
import com.lawson.grid.proxy.ProxyException;

// logon and get session id
SessionUtils su = SessionUtils.getInstance(registry);
SessionProvider sp = su.getProvider(SessionProvider.TYPE_USER_PASSWORD);
SessionId sid;
try {
    sid = sp.logon(userid, password.toCharArray());
} catch (ProxyException e) {
    ...
}
GridPrincipal principal = su.getPrincipal(sid);

Step 2. Call the M3 API

Second, call the M3 API, for example CRS610MI.LstByNumber, and get the result:

import java.util.ArrayList;
import java.util.List;
import com.lawson.grid.proxy.ProxyClient;
import com.lawson.grid.proxy.ProxyException;
import com.lawson.miws.api.data.MIParameters;
import com.lawson.miws.api.data.MIParameters.ColumnList;
import com.lawson.miws.api.data.MIRecord;
import com.lawson.miws.api.data.MIResult;
import com.lawson.miws.api.MITransactionException;
import com.lawson.miws.proxy.MIAccessProxy;

// get the proxy
MIAccessProxy proxy = (MIAccessProxy)registry.getProxy(MIAccessProxy.class);

// login to M3
ProxyClient.setSessionId(proxy, sid);

// prepare the parameters
MIParameters paramMIParameters = new MIParameters();
paramMIParameters.setProgram("CRS610MI");
paramMIParameters.setTransaction("LstByNumber");
paramMIParameters.setMaxReturnedRecords(10);

// set the return columns
ColumnList returnColumns = new ColumnList();
List<String> returnColumnNames = new ArrayList<String>();
returnColumnNames.add("CONO");
returnColumnNames.add("CUNO");
returnColumnNames.add("CUNM");
returnColumns.setReturnColumnNames(returnColumnNames);
paramMIParameters.setReturnColumns(returnColumns);

// execute
MIResult result;
try {
	result = proxy.execute(paramMIParameters);
} catch (MITransactionException e) {
	...
} catch (ProxyException e) {
	...
}

// show the result
List<MIRecord> records = result.getResult();
for (MIRecord record : records) {
	System.out.println(record.toString());
}

Note: When I use ColumnList, it throws java.io.NotSerializableException: com.lawson.miws.api.data.MIParameters$ColumnList. It appears to be a bug: the ColumnList class is missing implements Serializable. I reported it in Infor Xtreme incident 8629267.

That’s it. Please let me know what you think in the comments below.

Hosting a Custom Web Service with the M3 API Toolkit

There are a few tools that can be used to communicate with M3 from outside Smart Office, including report writers reading the database directly (DB2, MySQL, etc.), M3 Enterprise Collaborator (MEC) for running transactions, and of course my favorite, the M3 API toolkit. Each of these options has drawbacks. Report writing is limited to reading data, unless you are living life dangerously. MEC can be complicated and time-consuming to set up, and pretty much can't be done without training or a consultant. The M3 API toolkit is not all that user-friendly and can be time-consuming, especially with long transactions (like adding new items), and deployment can be a bit of a nightmare.

As mentioned above, the M3 API toolkit is by far my favorite way of interacting with M3 outside of Smart Office, typically with some added table lookups, which are a much better way to get info out than an API call. The reason for choosing the API is simple: the documentation is excellent and the possibilities are endless! That being said, there are still some drawbacks:

  1. While the API toolkit supports many different languages, if you want to use more than one platform, transactions will have to be completely rewritten.
  2. Deployment can be difficult. The toolkit needs to be installed on every computer or device that wants to communicate with M3.
  3. If database access is desired, drivers are required and permissions will need to be granted for every client.
  4. Some transactions are long and time-consuming to set up.

There is good news though. Hosting your own custom web service with WCF, backed by the M3 API toolkit, eliminates all of these drawbacks. If your web service is well thought out, expanding your functionality and streamlining day-to-day business activities becomes easy.

So let's get started. Out of all the transactions in M3, one of the simplest is confirming a pick list, because it only requires two inputs. For the sake of getting your feet wet with this new setup without overwhelming you, we will start with this transaction. As we run through this example, realize that while this transaction is simple, the true power of the web service becomes more obvious with more complicated transactions.

Step 1 Start a new project

In Microsoft Visual Studio, start a new project using the template WCF Service Application. I've named my project M3Ideas. (Creative, right?)


Once the project opens, you will see two important files in the solution explorer on the right: one called Service1 and the other called IService1. Service1 is the class where all of the code for actually running transactions using the API will live, and IService1 is what our client applications will see and be able to use. Notice that there are Service Contracts with Operation Contracts, which are the functions that our tablets or computer programs will call, and Data Contracts with Data Members, which are how data will be presented to our software. This is what makes the web service powerful: we get the ability to create our own objects and essentially make a wrapper class for the M3 API toolkit that can be used by any program that needs to interact with M3.


So let's start renaming the items to suit our needs. Since our goal is to report pick lists, I'll rename the IService1 interface to MWS420, after the M3 program for reporting pick lists. Do this in the solution explorer on the right and Visual Studio will rename it everywhere. I'll also make just one Operation Contract for now, called ConfirmPickList, which takes two integers: the delivery number and the suffix. Right now I'll go ahead and delete the CompositeType class below, but don't forget how to make Data Contracts; this interface won't be using them, but with longer transactions they are pretty much the greatest thing on earth. At this point my interface looks something like this.

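The original post shows the interface as a screenshot; reconstructed from the description, it would look something like this (the bool return type is my assumption):

using System.ServiceModel;

[ServiceContract]
public interface MWS420
{
    // confirm a pick list, given the delivery number and suffix
    [OperationContract]
    bool ConfirmPickList(int deliveryNumber, int suffix);
}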

Remember, this is just a prototype for what the client applications will get to use. You might be wondering why I named the interface after only one program: what if you want to use more than one program in your web service? The reason is simply organization and clarity when making the client applications. When I go to run transactions in other programs, I will make new interfaces that look just like this one, only with their own names. The client then has to specify not only which transaction to run, but also which interface the transaction comes from. This enables me to use similar function names for more than one program and still know exactly which program each transaction goes with. For example, if I wanted to confirm manufacturing operations in PMS070, I could use similar function names, and the client application would still easily know which program each transaction belongs to, even if the name isn't as descriptive as it probably should be. It will become clearer what this looks like in future posts, where we connect to the web service from our various clients.

Step 2 Set up the transactions

OK, let's look at the Service1.svc file now, which is where the code for this transaction will be placed. Go ahead and rename this file to M3.svc and rename the class M3 as well. This is where all the code for the transactions will go. The single most important thing in this file is the interface implementation right after the class name. In an effort to stay organized, we will use several partial classes rather than one class. Each partial class will implement one of the interfaces we set up for our program. The code will look like this.

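Again reconstructed from the description (the original post shows a screenshot), the partial classes would look something like this; PMS070 is a hypothetical second interface:

// each partial class implements one service interface;
// C# merges them into a single M3 class
public partial class M3 : MWS420
{
    public bool ConfirmPickList(int deliveryNumber, int suffix)
    {
        // M3 API toolkit code goes here (see step 2)
        return true;
    }
}

public partial class M3 : PMS070
{
    // manufacturing operation transactions would go here
}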

Notice that each partial class has a colon before the name of the interface it implements. Since I've used partial classes, each one implements just one of my interfaces. If you really wanted, you could use just one regular class that implements all of the interfaces; all you would have to do is list them off, separated by commas. I think doing it this way is a bit more straightforward though.

So now let's get to the fun part: set up the M3 APIs and show the program how to connect to M3 and make the transactions come to life. The first thing we need to do is add a reference to the M3 API. In this example we will use the 64-bit library, although you can use whichever one you want. It is interesting to know that the target platform this service runs on is completely unrelated to the programs that will connect to it. This is another huge advantage of using the web service instead of having each client use the API toolkit directly.

To add the reference, right-click on References in the solution explorer and select Add Reference. On the left select Browse, then Browse again at the bottom, and locate the file MvxSockNx64.dll. The file should be located in C:\MvxAPI. Once the file is added you should see it in the list of references.


Once the reference has been added, you can start using the library to communicate with M3. All you need to do is add the using statement at the top of the file, and you can start running transactions. Don't forget there is a well-documented help file that will show you how to set up the transactions. Although running these transactions isn't all that elegant, the documentation will tell you how to get it done.

To run transactions you will need the port number that the API uses to connect to M3 (there is one per environment), a username and password that is set up in M3 with permissions to use the APIs you want, as well as the host name. When we are done, our transaction will look like this. Note: my port numbers might be the same as yours, but they don't have to be; yours could be different.


I went ahead and put some of the constant information in a static class called Info, so that I don't have to type the data in each time and I can use it in all of my partial classes; a sketch follows. I've also set up the transaction exactly how the documentation says to do it, including padding with spaces so that each input lands in the correct position of the string.
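Something like this, with hypothetical values (the host, port, and credentials depend on your environment):

// constants shared by all the partial classes
static class Info
{
    public const string Host = "m3server.example.com"; // hypothetical host name
    public const int Port = 16305;                     // one port per M3 environment
    public const string User = "WEBSERVICE";           // M3 user with API permissions
    public const string Password = "******";
}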

Step 3 Publish

Now that we have our first transaction set up, let's publish it and test it. Once it's been tested, we can change the port to production. To host the web service you will need a computer or virtual machine running IIS. You might need to enable the feature in Windows; if you are unsure how, a simple Google search will walk you through it.

OK, to publish the web service, right-click on the project in the solution explorer and select Publish. Set up a publish configuration that publishes to the file system, in a folder of your choosing. We'll copy these files to the computer that will host the service. You will also need to locate the MvxSockNx64.dll file and copy it as well; go ahead and put it in the bin subfolder with the other libraries that got published. Next, copy that folder to the C drive of the computer that will host the service and open IIS. On the left side of the screen, expand the tree, right-click Default Web Site, and select Add Application. Then show IIS which folder your files are in and name your service.


To verify that your service is up and running, expand the tree on the left a bit more and select the application you just added. Then on the far right select Browse, and it should open a browser. Select the .svc file and it should bring you to a screen with directions on how to use the service. In the next post I'll run through some samples of how to use the service to streamline reporting pick lists.

Here is the screen that you should be able to get to. If something went wrong, it will be displayed on this screen.


If you have any questions about what the web service can be used for, please feel free to ask in the comments. Also, if you run into any problems, please let me know.

Happy coding.

-The Engineer

Java code in Event Analytics rules

To add custom Java code to a Drools rule in Event Analytics in the Infor Grid:

  1. Write your Java code, compile it, and archive it to a JAR file:
    // javac thibaud\HelloWorld.java
    // jar cvf HelloWorld.jar thibaud\HelloWorld.class
    package thibaud;
    public class HelloWorld {
        public static String hello(String CUNO) {
            return "Hello, " + CUNO;
        }
    }
    

  2. Find the host of Event Analytics as explained here, copy the JAR file to the application's lib folder in the Infor Grid, and restart the application to load the JAR file.
  3. Write the Drools rule that makes use of your Java code, reload the rules, and test; a sketch follows.
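For example, a rule along these lines should work (a sketch modeled on the rule from my other post; the OCUSMA subscription and the CUNO field are hypothetical):

package com.lawson.eventhub.analytics.drools;

import thibaud.HelloWorld;
import com.lawson.eventhub.analytics.drools.model.HubEvent;
import com.lawson.eventhub.EventOperation;

declare HubEvent
	@typesafe(false)
end

rule "TestSubscription"
	@subscription(M3:OCUSMA:U)
	then
end

rule "TestHelloWorld"
	no-loop
	when
		event: HubEvent(publisher == "M3", documentName == "OCUSMA", operation == EventOperation.UPDATE)
	then
		System.out.println(HelloWorld.hello(event.getElementValue("CUNO")));
end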

Thank you.

M3 Web Services from Infor Process Automation

In order to securely call Infor M3 Web Services (MWS) from Infor Process Automation (IPA), we need to import the Infor Grid's certificate into IPA's Java truststore; here is how.

MWS authentication

MWS works with SOAP over HTTP over SSL/TLS with the digital certificate of the Infor Grid.

The Infor Grid router for MWS must have Basic authentication enabled over HTTPS (secure) and all authentication disabled over HTTP (insecure); you can check in the Infor Grid > Configuration Manager > Routers > Default Router.

MWS from IPA

In the IPA Configuration > Web Service Connection, we set Basic authentication with the M3 user and password.

In Infor Process Designer (IPD), we use the SOAP Web Service activity node with the HTTPS URL of MWS.

Tip: do not hard-code the scheme://host:port; replace it with a variable to define, such as <!_configuration.main.MWS>.

Problem

When we execute the process we get the following exception:

com.sun.xml.internal.ws.client.ClientTransportException: HTTP transport error: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

That is because IPA does not know the Infor Grid certificate.

The IPA Configuration for the Web Service Connection does not have settings for an explicit truststore. Instead, IPA implicitly relies on the JVM’s truststore; let’s set it up.

Step 1. Infor Grid certificate

Get the Infor Grid certificate file. It is a signed public key that you can get, for example, from the main Grid information page at something like https://host123.local:26108/grid/info.html

Note: Preferably get the certificate of the root CA as it usually signs the certificates for all environments (DEV, TST, PRD, etc.).

Step 2. IPA server truststore

Check the path of the IPA server's JVM, as given in the Landmark Grid > Landmark-LM Application > Configuration > Properties > Java executable.

Import the certificate into that JVM’s truststore using the Java keytool:

keytool -import -keystore lib\security\cacerts -file grid.cer


Note: despite the -keystore flag name, cacerts is indeed the JVM's default truststore, so the command above imports the certificate in the right place.
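To verify the import, list the truststore contents (the default cacerts password is changeit):

keytool -list -keystore lib\security\cacerts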

Step 3. IPD truststore

The path to the Infor Process Designer (IPD) JVM is given in the IPDesigner.ini file.

Import the certificate into that JVM’s truststore as well.

Step 4. Test

Now execute the process. The Web Service activity node should not throw that exception anymore.

Notes

If you have a certificate purchased from a certificate authority that is already trusted by the JVM, such as VeriSign, this setup is not necessary.

That’s it. Let me know what you think in the comments below.

HTTP channels in MEC (part 6)

Here is how to securely receive messages in MEC from partners over the Internet, in this sixth part of the guide on HTTP channels for Infor M3 Enterprise Collaborator (MEC). I will illustrate two goals: how to set up HTTPIn or HTTPSyncIn channels in MEC over SSL/TLS, and how to expose them securely over the Internet. For the HTTPIn channel, refer to part 2; for the HTTPSyncIn channel, refer to part 3; and for MEC over HTTP Secure (HTTPS), refer to part 5.

Goal

The desired goal is to allow partners to securely send messages to MEC using HTTP over SSL/TLS over the Internet. Also, the idea is to design the architecture in such a way that adding new partners is easy.

Here is the simplified diagram:

Problem

Unfortunately, MEC does not provide incoming channels for HTTPS; there are no HTTPSIn or HTTPSSyncIn channels. There is a WebServiceSyncIn channel that uses WS-Security for XML over SOAP, but it is not what I am interested in. Ideally, I would prefer to use the Infor Grid, which already communicates via HTTPS, but unfortunately it does not have a native connection handler for MEC. Surprisingly, most projects I have seen use FTP + PGP, but that is insecure because the FTP username and password transit in clear text; even though the files are encrypted, a man-in-the-middle could intercept the credentials and create havoc, like deleting files or filling the disk with junk.

Alternatively, I could develop my own HTTPS server in Java on top of a custom MEC channel; the Java Secure Socket Extension (JSSE) is a good reference guide for how to implement SSL/TLS in Java. I would have two options. I could use SSLServerSocket, but it uses blocking I/O, contrary to MEC which uses non-blocking I/O for scalability and performance, so I would have to forgo both. Or I could use SSLEngine to get non-blocking I/O, but then I would have to implement the entire TLS state machine, which is overkill for my needs.

Design

I will set up a public web server https://partners.example.com/ at my sample company.

For that, I will set up a reverse proxy with SSL termination, upstream of the HTTPIn or HTTPSyncIn channels. Thanks to Rickard Eklind for the tip on using Apache + mod_proxy; I will use nginx + ngx_http_proxy_module instead, as it uses non-blocking I/O similar to HTTPIn and HTTPSyncIn, and I think it is easier to set up; either combination will work. I will need to set up the proxy server in the DMZ, set up DNS records, and generate digital certificates.

If you cannot host your own server in the DMZ, or if you cannot create your own domain name partners.example.com in the DNS records, or if you cannot create your own digital certificate signed by a trusted certificate authority, you may be able to piggyback on an existing public web server in your company and simply add a new virtual directory, like https://www.example.com/partners/, that will forward requests to a content-based filtering router, which will decrypt, filter, re-encrypt and send the requests to your reverse proxy on the LAN.

Alternatively, I could have set up a dedicated secure line per partner – such as a VPN with a filter to restrict access to only a specific destination IP address and port number for MEC on the LAN – but each new partner would require a lot of paperwork, security clearance, and setup on both ends. It is possible, and it is more sandboxed thus more desirable, but it may not be feasible in some companies. And in some clouds it may be easier to set up web servers than VPNs.

Reverse proxy with SSL termination

A reverse proxy is an intermediate server that executes the client's request to the destination server on behalf of the client, without the client being aware of the proxy's presence; this is unlike a forward proxy, which we set up in a browser. In our case, MEC partners will connect to the reverse proxy as if it were MEC, and the proxy will make the requests to MEC.

SSL termination is where the SSL/TLS connection ends. In our case, the partner will initiate the connection to the reverse proxy using the proxy's digital certificate (which is the proxy's public key signed by a certificate authority). The proxy will decrypt the SSL/TLS data using its private key, then make the HTTP request in plain text to MEC, and the response will transit back in the opposite direction. The partner will need to have previously verified and added to its keystore the proxy's certificate, or one of the certificate authorities up the chain.

Here is the simplified nginx.conf:

http {
   server {
      server_name partners.example.com;
      listen 443 ssl;
      ssl_certificate cert;
      ssl_certificate_key key;
      location / {
         proxy_pass http://ecollaborator:8080/;
      }
   }
}

Here is the simplified diagram:

Note 1: This scenario assumes the servers are on the same network, which is not true for the Internet; I will put the proxy in the DMZ (see the DMZ section below).
Note 2: This scenario assumes the data does not need to be encrypted on the second network segment, which is not true either; I will install a second proxy on the same host as MEC (see the end-to-end encryption section below).

Multiplexing

I need to accommodate multiple partners, for example partnerA, partnerB, and partnerC.

I will use virtual hosting to economically share resources on a single server instead of having a dedicated physical server or virtual private server per partner.

Path-based

I will multiplex by URL path, for example /A, /B, and /C. I conjecture this is no less secure than doing it name-based or port-based. Also, I conjecture it is not subject to XSS attacks so long as we enforce client authentication (see the client authentication section below).

Here is the simplified nginx.conf:

location /A {
   # partnerA
}
location /B {
   # partnerB
}
location /C {
   # partnerC
}

Here is the simplified diagram:

Name-based

Alternatively, I could multiplex by domain name, for example partnerA.example.com, partnerB.example.com, and partnerC.example.com. But then for each new partner I would need a new network interface with a new public IP address – which is scarce to obtain – and I would need to update the A records of my DNS server. Or, to share the same IP address, I could use Server Name Indication (SNI) and update the CNAME records of my DNS server. In any case, I would have to issue a new digital certificate with an updated Subject Alternative Name (SAN) extension, or use one wildcard certificate but lose the possibility of an Extended Validation certificate; and anyway, wildcard certificates are not considered secure per RFC 6125, section 7.2. In the end, it is a maintenance nightmare, and relying on the respective teams could be a bottleneck in some companies.

Port-based

As another alternative, I could multiplex by port number, for example partner.example.com:81, partner.example.com:82, and partner.example.com:83; indeed, the same digital certificate will work for any port number. But then for each new partner I would have to update the firewall rules. It is possible, but it is more maintenance, and relying on the respective teams could be a bottleneck in some companies.

De-multiplexing

Then, I need to de-multiplex the requests to tell the partners apart in MEC. I will set up as many HTTPIn or HTTPSyncIn channels in MEC as there are partners, for example HTTPSyncIn_A on port 8081, HTTPSyncIn_B on port 8082, and HTTPSyncIn_C on port 8083, and in nginx, for each partner, I will set up a location block with a proxy_pass directive.

Here is the simplified nginx.conf:

location /A {
   proxy_pass http://ecollaborator:8081/;
}
location /B {
   proxy_pass http://ecollaborator:8082/;
}
location /C {
   proxy_pass http://ecollaborator:8083/;
}

Here is the simplified diagram:


Here are the receive channels in Partner Admin:


Authentication

I need the client to authenticate the server, and vice versa, I need the server to authenticate the client.

One of the properties of SSL/TLS is authentication, using digital certificates to affirm the identity of the entities, where server authentication is mandatory, and client authentication is optional. In my case, client authentication is mandatory.

Server authentication

The server (the reverse proxy) will present its digital certificate to the client (the MEC partner), and the client will do its certificate validation to authenticate the server.

Client authentication

On the other hand, the server (ultimately it is MEC) needs to authenticate the client (the MEC partner).

I could set up peer authentication for the proxy to verify the client's digital certificate, but I have not tested this.

Instead, I will set up HTTP Basic authentication per path in the proxy. The username and password will be encrypted over SSL/TLS, so they will remain confidential. I will separate the locations and forward each one to its respective HTTPSyncIn channel in MEC.

Here is the simplified nginx.conf:

location /A {
   auth_basic "A";
   auth_basic_user_file A.htpasswd;
   proxy_pass http://ecollaborator:8081/;
}
location /B {
   auth_basic "B";
   auth_basic_user_file B.htpasswd;
   proxy_pass http://ecollaborator:8082/;
}
location /C {
   auth_basic "C";
   auth_basic_user_file C.htpasswd;
   proxy_pass http://ecollaborator:8083/;
}

In addition to that, we could set up rules in the firewall to only allow the partners' source IP addresses to access the reverse proxy; that is great combined with Basic authentication, but insufficient on its own.

To set up peer authentication, I would use ssl_verify_client. According to the nginx documentation, the context for the ssl_client_certificate directive is http and server only, not location. So I would have to append the various client certificates into one file; to be verified. And then I could use the $ssl_client_cert variable to tell partners apart; to be tested.
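That would look something like this (untested, as said):

server {
   server_name partners.example.com;
   listen 443 ssl;
   ssl_certificate cert;
   ssl_certificate_key key;
   ssl_client_certificate clients.pem;  # the client certificates appended into one file
   ssl_verify_client on;
}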
As another alternative, we could set up client authentication in the MEC agreement, using a flat file detector to detect a username and password defined in the HTTP request payload. But that has many problems: 1) it would require hard-coding the username and password in clear text in MEC (passwords should be hashed and salted, or at least encrypted); 2) if we needed to change the password, we would have to change and re-deploy the agreement; and 3) it would put the burden of password verification on MEC, which is not designed to thwart brute-force attacks.

Channel detection

Now, we have to carry the authentication over to MEC because even though nginx can pass the Basic authentication header to MEC, MEC does not use it, and if we do not authenticate partners and tell them apart, they risk crossing each other. For that I will use a channel detector in the MEC agreement of each partner.

Here are the channel detectors in Partner Admin:


A drawback emerges from this setup: the number of possible messages per channel is now limited to only one. If partner A wants to send two different messages 1 and 2, for example new customer order and new rental agreement, MEC is not able to process two messages in one agreement, and it cannot reuse the same receive channel in another agreement. To assist MEC, I would have to discriminate further by path in nginx, for instance /A/message1 and /A/message2, and have as many receive channels as possible messages. I can use nested location blocks (I have not tested this; note that nested locations still match against the full URI, hence the full paths below). Here is the simplified nginx.conf:

location /A {
   auth_basic "A";
   auth_basic_user_file A.htpasswd;
   location /A/message1 {
      proxy_pass http://ecollaborator:8001/;
   }
   location /A/message2 {
      proxy_pass http://ecollaborator:8002/;
   }
}

I am not trained in MEC Partner Admin so maybe there is a way around it.

…on the Internet

Once a web server is placed on the Internet it will get attacked, so consult with your network and security team to harden your servers. The proxy should at least be in the DMZ, between one or two firewalls.

Here is a simplified diagram:


Also take into account high availability, redundancy, fail-over, disaster recovery, edge caching, DNS round robin, IDS, content-based firewalls, restricted physical access to the servers, restricted permissions on the files, software updates, operating system support, etc.

Note: In spite of all this setup, the HTTP channels in MEC remain a single point of failure. The MEC server runs on the Infor Grid, and the Infor Grid is meant to be distributed, fault tolerant, load balanced, scalable, and redundant. However, the HTTP channels of MEC are not Grid-enabled (the HTTPIn and HTTPSyncIn channels manage their ports and HTTP servers themselves), so they are not distributed, fault tolerant, load balanced, scalable, or redundant: they are a single point of failure. You can learn more about Infor Grid application development in my other post.

End-to-end encryption

Now we need end-to-end encryption to protect the data on the second network segment, from the reverse proxy in the DMZ to MEC on the LAN. For that, I will chain two reverse proxies with SSL termination; I will simply install the second proxy on the same host as MEC. And I will issue a second pair of digital certificate and private key for the second proxy, which the two proxies will use to encrypt/decrypt. That simplifies the rules of the internal firewall, and I can set up peer authentication between the proxies.

Here is the simplified diagram with the two proxies X and Y:


How to add new partners

To add a new partner D:

  1. Set up a new receive channel in Partner Admin with a new HTTPIn or HTTPSyncIn channel, for example on port 8084
  2. Set up a new agreement with a channel detector
  3. Test by making an HTTP request to MEC on port 8084
  4. Set up the inner proxy:
    1. Set up a new location block in nginx.conf for path /D, with a proxy_pass directive to port 8084 and Basic authentication
    2. Set up a new htpasswd file (see the example after this list)
    3. Restart nginx
    4. Test by making an HTTPS request to the proxy
  5. Set up the outer proxy to pass requests to the inner proxy and test it (I do not have guidelines here as my actual setup for the outer proxy uses a content-based router, not nginx)
  6. Test by making an HTTPS request from the partner to https://partners.example.com/D
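For step 4.2, the htpasswd file can be generated, for example, with Apache's htpasswd utility (any tool that produces the password file format nginx understands will do); it prompts for the partner's password:

htpasswd -c D.htpasswd partnerD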

How to setup multiple environments

To set up multiple environments, such as DEV, TST, and PRD, use nested location blocks in nginx.conf, for example /DEV, /TST, and /PRD (I have not tested this; again, note the full paths in the nested blocks).

Here is the simplified nginx.conf:

location /DEV {
   location /DEV/A {
      # Development partnerA
   }
   location /DEV/B {
      # Development partnerB
   }
}
location /TST {
   location /TST/A {
      # Test partnerA
   }
   location /TST/B {
      # Test partnerB
   }
}

Limitations

This is the first time I set up this architecture; I have not tested all the design variations, and I have not validated that my design is good or secure. I am currently using a similar architecture at a major customer of mine, in their production environment, where they have multiple data centers, high availability, redundancy, fail-over, and disaster recovery. One of their technical people reviewed the solution and approved it; the only concerns were that this solution might be over-engineered (plausible) and that the MEC channels are a single point of failure anyway (true). I conjecture the solution is good enough and secure enough for our needs. Of course, I could be completely wrong and not see a major flaw; nothing is fully secure anyway. Please let me know what you think in the comments below.

Upcoming version of MEC

Johan Löfgren, the component owner for MEC at Infor Product Development, said they are working on a native HTTPSIn channel for an upcoming version of MEC; it is not GA, and the release may or may not occur. If and when it does, you will not need to chain two proxies anymore; you will just keep one proxy in the DMZ and use proxy_pass to send the requests directly to the HTTPSIn channels in MEC.

UPDATE 2015-04-12: What is being released is SFTP; there are no plans for HTTPS at the moment.

Conclusion

This was one solution for setting up incoming HTTP channels in MEC to securely receive messages over SSL/TLS over the Internet. MEC does not have an HTTPSIn or HTTPSSyncIn channel, and I did not want to implement my own HTTP server over SSL/TLS in Java. Instead, I chose to set up a reverse proxy with SSL termination in a DMZ, with digital certificates and private keys, with HTTP Basic authentication, and with a second proxy on the MEC host for end-to-end encryption. This solution has many benefits: it uses standard HTTP and SSL/TLS, and it makes adding new partners easy. Also, we simplified the architecture upstream such that we do not have to rely on other teams when we need to add a new partner, which can be a maintenance nightmare and a bottleneck in some companies; we can simply add new partners downstream, in our proxy and in Partner Admin. I conjecture this solution is secure enough for our needs. But remember: it has not been fully reviewed, and the MEC channels are a single point of failure.

Please let me know what you think in the comments below.
