Poll: Infor e-Commerce Application Installer (IAI)

I have a question regarding Infor e-Commerce (f.k.a. e-Sales): have you ever used the Infor Application Installer (IAI) to deploy e-Commerce applications via ZIP files instead of via the Development Studio?


We have two different environments, development and production, that are physically isolated.

Our development environment is integrated: it has the e-Commerce Development Studio, the Subversion repository, and the e-Commerce Server, all on the same host. Thus, we can simply deploy the e-Commerce application from the Studio directly to the server.


Our production environment, on the other hand, is isolated from that network: it has the e-Commerce server, but it does not have the e-Commerce Development Studio, and it does not have access to the Subversion repository either. Then, how do we deploy the e-Commerce application?

Unskillful solution

We cannot deploy from the Studio because the development environment does not have access to the production network; the two are isolated from each other.

One solution is to install the Studio on the production environment and give it access to the Subversion repository, to mimic the development environment. But because the production environment is isolated, it does not have access to the Subversion repository, so we would have to make a copy of the source code.

I challenge this solution. We would end up with double maintenance of the Studio and of the source code, we would risk generating a non-identical version of the application, and we would risk creating an accidental branch of the source code that we would then have to resolve and merge. There ought to be a simpler, more elegant solution.


There are two e-Commerce documents that explain the application deployment process and how it uses the Infor Application Installer (IAI):

Development Studio

According to the documentation, the Studio generates this temporary ZIP file:

The ZIP file has a datasources folder with connection information in XML files (e.g. movex.dsc and sqlserver.dsc):

Infor Application Installer (IAI)

The Infor Application Installer (IAI) has the following Servlets and JSP to upload the ZIP file and deploy the application:

Proposed solution

The solution I propose is to use the Infor Application Installer (IAI) to deploy a modified version of the temporary ZIP file.

We would take the temporary ZIP file from the development environment, make a copy of it, unzip it, change the datasources connection information, re-zip it all, copy the resulting ZIP file to the production environment, and use the publish JSP to deploy it. We can even write a script to automatically duplicate the file, unzip it, change the connection information, and re-zip it, to reduce the number of manual steps and to avoid possible human errors.
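The re-packaging step can be sketched in a few lines of Python. The `datasources` folder and the `.dsc` file names come from the documentation above; the host names being replaced are purely hypothetical, and the real script would substitute your actual development and production connection information:

```python
import shutil
import zipfile
from pathlib import Path

def repackage(zip_in, zip_out, replacements):
    """Copy the application ZIP, rewrite the datasources, and re-zip it."""
    work = Path("iai_work")
    if work.exists():
        shutil.rmtree(work)
    with zipfile.ZipFile(zip_in) as z:
        z.extractall(work)
    # rewrite the connection information in each .dsc file (e.g. movex.dsc, sqlserver.dsc)
    for dsc in (work / "datasources").glob("*.dsc"):
        text = dsc.read_text(encoding="utf-8")
        for old, new in replacements.items():
            text = text.replace(old, new)
        dsc.write_text(text, encoding="utf-8")
    # re-zip everything, preserving the folder structure
    with zipfile.ZipFile(zip_out, "w", zipfile.ZIP_DEFLATED) as z:
        for path in sorted(work.rglob("*")):
            if path.is_file():
                z.write(path, path.relative_to(work))

# demo with a tiny, made-up ZIP standing in for the Studio's temporary ZIP
with zipfile.ZipFile("app-dev.zip", "w") as z:
    z.writestr("datasources/movex.dsc", "<datasource host='dev-db-host'/>")
repackage("app-dev.zip", "app-prd.zip", {"dev-db-host": "prd-db-host"})
with zipfile.ZipFile("app-prd.zip") as z:
    print(z.read("datasources/movex.dsc").decode())  # → <datasource host='prd-db-host'/>
```

The resulting ZIP would then be copied to the production environment and deployed with the publish JSP.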

I postulate this new solution is much simpler than the other one, as we would just manipulate ZIP files, and we would not need to doubly maintain another Studio or another copy of the source code. And it is elegant because the IAI is part of e-Commerce itself.

What others think

I asked others for their opinions.

An experienced e-Commerce consultant disagrees. He says that all e-Commerce applications MUST be deployed from the Studio to make sure they work properly, that it is the right way, that everyone uses this method, that there is no other method, and that Infor would not support another method.


Similarly, Infor Support reached out to the development team, who reached out to the e-Commerce product owner, who said the ZIP file deployment can be done for DEVELOPMENT purposes only, that it is NOT RECOMMENDED, that it can be explored at your own risk, and that support would NOT be provided if further issues or concerns occur.


I do not believe either of these responses. e-Commerce is about 15 years old, and most of the original developers are no longer part of the company. I believe the responses above are from new developers that lack knowledge, and are not willing to try another way. Or perhaps there is a valid reason that they have not yet articulated.


What about YOU? Do YOU know the answer?

Let me know in the comments below, please. Thank you.

SOAP WS-Security in the context of M3

Here is my high-level understanding of SOAP Web Services Security (WS-Security, or WSS), at least the WSS X.509 Certificate Token Profile, and how to apply it in the context of Infor M3.


WS-Security is a standard by the OASIS consortium that provides message encryption and digital signatures, the usual security properties to prevent eavesdropping and tampering. It uses asymmetric cryptography with public/private keys and digital certificates. One additional property is central to WSS: the security happens at the message level, not at the transport level, i.e. the security follows the message even across proxies and deep packet inspection gateways, for end-to-end security. WSS is common, for example, in financial institutions that need to inspect and route a message through several nodes: the nodes can read the non-secure part of the SOAP envelope without revealing the secret in the message, until it reaches the appropriate destination. If a node on the path gets compromised, the security of the message is not compromised. Despite its continued use, WSS has had only a few major updates in 10 years, it is not considered secure [1] [2], the Internet broadly agrees it is complicated and design-by-committee, and there is no industry momentum behind it.

SSL/TLS, on the other hand, provides similar security properties, with encryption and digital signatures, also using public key cryptography, but the security happens at the transport level, i.e. below the message level, for point-to-point security only. Thus, intermediate proxies and deep packet inspection gateways are unable to reveal the message to inspect it and route it, unless they have a copy of the destination’s private key. The workaround is to set up a chain of TLS segments, but then the compromise of any node on the path compromises the message. TLS has additional security properties such as cipher suite negotiation, forward secrecy, and certificate revocation. TLS is constantly updated with the latest security properties, and it is widely supported and documented.

I have seen WSS interfaces used at banks and credit card companies that still have ancient mainframes and old middleware, and WSS is always used in combination with TLS, with peer authentication, thus four sets of public/private keys and digital certificates.

Infor Grid

Several applications of the Infor Grid expose SOAP web services, but I could not find how to set up WS-Security at the Grid level, so I assume it is only supported at the application level; that’s OK, as SOAP over HTTPS is sufficient for the Grid’s needs.


M3 Web Services (MWS)

The MWS application does have settings to configure WS-Security (X.509 Token policy); that would be useful for an external partner to call M3 with message-level security (otherwise there is always SOAP over HTTPS); I will explore this in a future post:

M3 Enterprise Collaborator (MEC)

The MEC application on the Grid does not have built-in SOAP web services, but in MEC Partner Admin the MEC developer can set up SOAP. The SOAP client process, Send Web Service, does not support WS-Security; this is most unfortunate, as it is precisely where I want to set up the secure interfaces with the banks and credit card companies, so I will have to develop my own WS-Security client in Java. On the other hand, the SOAP server process, WebServiceSyncIn, does support WS-Security; I will explore this in a future post:

Future work

In future posts, I will:

  • Explore WS-Security in MWS
  • Explore WS-Security in MEC SOAP client
  • Explore WS-Security in MEC SOAP server


That was my high-level understanding of WS-Security, at least the WSS X.509 Certificate Token Profile, and how to apply it in the context of M3. I am no expert in WS-Security, but this understanding is sufficient for my needs with M3.

That’s it!

Please subscribe, comment, like, share, and come author with us.



Continuous integration of Mashups #4 – Command line

Finally, I was able to develop a command line to deploy Infor Smart Office Mashups. This is important for me to save time when deploying many Mashups, on many environments, many times a day, so I can re-invest that time in the Mashups, instead of wasting time on their deployment in the various graphical user interfaces. Read #1, #2, and #3 for the backstory.


I will use the internals of Smart Office and MangoServer which by definition are not public. Infor Product Development may change any of it at any time.


I will create my C# Console Application project in the free (as in free beer) Microsoft Visual Studio 2015 Community Edition:

Add references to the Smart Office assemblies Mango.UI.dll and Mashup.Designer.dll, which you can find in your Smart Office local storage (search for the path in your Log Viewer); you can also get them from the Smart Office product download or from the Smart Office SDK:

When you Build the project, Visual Studio will automatically grab the additional assemblies Mango.Core.dll, log4net.dll, and DesignSystem.dll.

Web service client

Smart Office already has client code for its web services in the namespace Mango.UI.Services.WS, but it’s missing the method DeployMashup which is precisely the one I need:

It’s easy to generate the code in Visual Studio: select Add Service Reference, and enter the URL of the MangoServer /mangows/InstallationPointManager?wsdl:

In the App.config file, add the <transport clientCredentialType="Basic" /> and add the names/addresses of the target M3 environments, e.g. DEV, TST, EDU, PRD:
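For reference, here is a minimal sketch of what that App.config section could look like. The security mode, port number, host names, and binding name are all hypothetical and must match your own Grid environments; only the transport clientCredentialType="Basic" setting and the contract name come from the text and the generated client:

```xml
<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="InstallationPointManagerBinding">
        <security mode="TransportCredentialOnly">
          <transport clientCredentialType="Basic" />
        </security>
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <!-- one endpoint per target M3 environment; the endpoint name is what Deploy.exe takes on the command line -->
    <endpoint name="DEV" address="http://dev-grid:22107/mangows/InstallationPointManager"
              binding="basicHttpBinding" bindingConfiguration="InstallationPointManagerBinding"
              contract="MangoServer.mangows.InstallationPointManager" />
    <endpoint name="TST" address="http://tst-grid:22107/mangows/InstallationPointManager"
              binding="basicHttpBinding" bindingConfiguration="InstallationPointManagerBinding"
              contract="MangoServer.mangows.InstallationPointManager" />
  </client>
</system.serviceModel>
```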

Source code

Add the C# class Deploy.cs:

using System;
using System.IO;
using System.Reflection;
using Mango.UI.Services.Mashup;
using Mango.UI.Services.Mashup.Internal;
using MangoServer.mangows;
using Mashup.Designer;

namespace Mashups
{
    class Deploy
    {
        // Usage: Deploy.exe Mashup1,Mashup2,Mashup3 DEV,TST
        static void Main(string[] args)
        {
            // get the M3 userid/password
            Console.Write("Userid: ");
            string userid = Console.ReadLine();
            Console.Write("Password: ");
            string password = Console.ReadLine();
            // for each directory
            string[] directories = args[0].Split(',');
            foreach (string directory in directories)
            {
                // for each Mashup manifest in the directory tree
                string[] files = Directory.GetFiles(directory, "*.manifest", SearchOption.AllDirectories);
                foreach (string file in files)
                {
                    Console.WriteLine("Opening {0}", file);
                    ManifestDesigner manifest = new ManifestDesigner(new FileInfo(file));
                    string name = manifest.DeploymentName + Defines.ExtensionMashup;
                    Console.WriteLine("Generating {0}", name);
                    // generate the Mashup package
                    if (SaveMashupFile(manifest))
                    {
                        // get the resulting binary contents
                        byte[] bytes = File.ReadAllBytes(manifest.File.Directory + "\\" + name);
                        // for each M3 environment
                        string[] environments = args[1].Split(',');
                        foreach (string environment in environments)
                        {
                            // deploy
                            Console.WriteLine("Deploying to {0}", environment);
                            DeployMashup(name, bytes, environment, userid, password);
                        }
                    }
                }
            }
        }

        // Create the Mashup package (*.mashup) from the specified Mashup Manifest.
        // Inspired by Mashup.Designer.DeploymentHelper.SaveMashupFile
        static bool SaveMashupFile(ManifestDesigner manifest)
        {
            try
            {
                // validate Manifest
                typeof(DeploymentHelper).InvokeMember("ValidateManifest", BindingFlags.InvokeMethod | BindingFlags.NonPublic | BindingFlags.Static, null, null, new Object[] { manifest });
                // create Mashup package
                string defaultFileName = manifest.DeploymentName + Defines.ExtensionMashup;
                FileInfo packageFile = new FileInfo(manifest.File.Directory + "\\" + defaultFileName);
                PackageHelper.CreatePackageFromManifest(manifest, manifest.File, packageFile, PackageHelper.DeployTarget_LSO);
                // warn if the Mashup has a profile section
                if (manifest.GetProfileNode() != null)
                {
                    Console.WriteLine("Please note that this Mashup contains a profile section and should be deployed using Life Cycle Manager. Warning - Using the Mashup File Administration tool will not result in a merge of profile information into the profile.");
                }
                return true;
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception);
                return false;
            }
        }

        // Deploy the specified Mashup name and file (*.mashup binary contents) to the
        // specified M3 environment (see App.config), authenticating with the specified
        // M3 userid and password. Inspired by /mangows/InstallationPointManager/DeployMashup
        static void DeployMashup(string Name, byte[] MashupFile, string environment, string userid, string password)
        {
            InstallationPointManagerClient client = new MangoServer.mangows.InstallationPointManagerClient(environment);
            client.ClientCredentials.UserName.UserName = userid;
            client.ClientCredentials.UserName.Password = password;
            client.DeployMashup(Name, MashupFile);
        }
    }
}

Final project:

Here is what the folder looks like:

Build the solution; the resulting command-line application will be Deploy.exe, together with the required assemblies and config file:


Suppose you have the following Mashups\ folder with Mashups inside, e.g. Mashup1, Mashup2, Mashup3, Mashup4, Mashup5, Mashup6:

And suppose you want to deploy only Mashup1, Mashup2, and Mashup3.

The usage is:

Mashups\> Deploy.exe Mashup1,Mashup2,Mashup3 DEV,TST

Where Mashup1,Mashup2,Mashup3 is the comma-separated list of Mashup folders, and where DEV,TST is the comma-separated list of target M3 environments as defined in Deploy.exe.config.

The program will recursively descend the specified folders. So you can also specify the parent folder – Mashups\ – and the program will deploy every Mashup that’s inside.

The program will ask for the M3 userid and password at the command prompt, as it’s more secure to not hard-code them in source code nor in a config file.


Here is an example of deploying three Mashups on two environments:

Here is the result in the MangoServer database of one of the environments:

Here is the result in Smart Office:


  • Instead of using Microsoft Visual Studio, you can use MonoDevelop, which is free software (as in freedom) and open source (its maker Xamarin was recently acquired by Microsoft), or use the C# command line compiler csc.exe from Microsoft’s .NET SDK
  • If you are more familiar with the JScript.NET programming language than C#, you can re-write the program in that language and compile it with Microsoft’s .NET SDK’s JScript.NET command line compiler jsc.exe (you could also run it as a script in Smart Office but that wouldn’t make sense)
  • Instead of generating web service client code, you can use the Smart Office method Mango.Core.DynamicWs.WSDataService.CallAsync

Future work

  • Remove the Mashups from LifeCycle Manager as Karin instructed
  • Replace our generated web service client code with the built-in Smart Office code, if and when Infor Product Development will provide it
  • Explore the Mango Admin tool
  • Explore the FeatureServlet of the latest Smart Office hotfix
  • Encrypt the password in the config file using a symmetric cipher as in the Grid applications, or use a YubiKey (hey, Sweden)


That was my solution to deploy Mashups at the command line. This is useful when deploying multiple Mashups, on multiple M3 environments, multiple times a day. It can potentially save hundreds of clicks and keystrokes per day compared to using the graphical user interfaces of Smart Office Mashup Designer and LifeCycle Manager. The time saved can be invested in the Mashups rather than on their deployment, aiming for continuous integration.

That’s it!

Please let me know what you think in the comments below, give me a thumbs up, click the Follow button to subscribe to this blog, share with your co-workers, and come write the next blog post with us. Pleeease, click something. Your support keeps this community going. Thank you for your support.

M3 Ideas, now 244 subscribers.

Related posts

Continuous integration of Mashups – HELP WANTED!!

I need help deploying many Smart Office Mashups, on multiple environments, fast, several times a day. Think continuous integration. Currently, it takes cubic time. Ideally, there would be a solution in linear time.


Here is a typical scenario: a user reports an error with a Mashup, I fix the error in the XAML file, and I propagate the fix to the server. For that, I generate the Lawson package in Mashup Designer in Smart Office (ISO), I upload the package in LifeCycle Manager (LCM), and I upgrade the Mashup on each environment.

Here are some screenshots:

My best case scenario is one Mashup and two environments once a day. My average scenario is three Mashups and three environments three times a day. My worst case scenario is 13 Mashups and five environments seven times a day.

(Note: normally we should develop in the DEV environment and push to the TST environment for users to test, but we are in a transition period between on-premise and Infor Cloud, and somehow we have five environments to maintain. Also, we could tell users to install Mashups locally or we could share Mashups with a role, but users got confused with versions and roles, so we have to do global deployments only.)

Clickk counter

I count mouse clicks, mouse moves, and keystrokes as clickks. To optimize as much as possible, I suppose that ISO, Mashup Designer, LCM Client, the Manage Products page, and the Applications tab are all launched and ready to use, and that I don’t close them during the day. The clickk counter is approximately the following:

7 clickks to generate the Lawson package in Mashup Designer
20 clickks to upload the package in LCM
11 clickks to upgrade the Mashup in the environment


x: number of Mashups
y: number of environments
z: number of times per day

The formula is:

(7x + 20x + 11xy)z


  • 49 clickks (7*1+20*1+11*1*2)*1 for my best case scenario
  • 540 clickks (7*3+20*3+11*3*3)*3 for my average scenario
  • 7462 clickks (7*13+20*13+11*13*5)*7 for my worst case scenario
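The formula and the three scenarios above can be checked with a couple of lines of Python (this just encodes the formula, nothing more):

```python
def clickks(x, y, z):
    """Total clickks per day: (7x + 20x + 11xy) per round, z rounds per day.
    x: number of Mashups, y: number of environments, z: times per day."""
    return (7 * x + 20 * x + 11 * x * y) * z

print(clickks(1, 2, 1))   # best case → 49
print(clickks(3, 3, 3))   # average case → 540
print(clickks(13, 5, 7))  # worst case → 7462
```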

The exact numbers are not important. What matters is that the result has order of n³ time complexity, i.e. it takes cubic time to deploy many Mashups on multiple environments, several times a day!!! In reality, the number of environments is fairly constant, so the result tends to order n², but that is still quadratic, not linear. Also, the number of Mashups and the number of times per day will eventually reach a limit, so the result becomes constant, but still insanely high (in the range of 7,462 clickks in my case).

$ command line ?

The goal is to reduce the number of clickks to a matter of a few double-clicks, ideally via the command line. How? The step in Smart Office can easily be done with a command line, as a package is a simple archive of an archive (we can use ZIP tools, the command line, .NET System.IO.Packaging.Package, or the Pack.exe tool in the Smart Office SDK). But the steps in LCM do not have a command line. They are Apache Velocity scripts, compiled into Apache Ant tasks, that execute Java code remotely to upload files to the server and add records to the Apache Derby database (lcmdb); the Mashup contents are also saved as blobs in the MangoServer Grid database (GDBC), which is a distributed H2 database (h2db). I think. There is probably a distributed in-memory Grid cache as well. I could not find documentation nor a quick hack for any of this.

Ideally there would be a command line like this fake screenshot:

Do you have any suggestions? Please let me know in the comments below.

Thank you.

Related posts

Poll: How to add 80 million item/warehouse records in M3?

My customer is doing data conversion in preparation for go-live, including 80 million item/warehouse records. They want to complete the upload over a weekend, but M3 is the bottleneck: it cannot absorb the data as fast as they are feeding it.

What would you do? I am not an expert at item/warehouse, nor at M3 performance, and from what I have seen in the past, each project handles it with its own secret sauce.


My customer has about 80,000 items to add to MMS001, and each item needs to be connected to about 1,000 warehouses in MMS002.

The result is about 80 million item/warehouse records to create.

M3 can handle gazillions of records. So this should be a piece of cake, right?

Failed attempt 1

Because the M3 API MMS200MI does not cover all the fields, my customer initially used M3 Web Services (MWS) of type M3 Display Program (MDP) to simulate a user entering data in all the desired fields.

It takes about 10 ms per record, or 100 records per second. Loop 80 million times, and that would take about 800,000 seconds, which is about 222 hours, which is about 10 days to complete.
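A quick sanity check of that arithmetic:

```python
records = 80_000_000
rate = 100                # one record every 10 ms = 100 records per second
seconds = records / rate
hours = seconds / 3600
days = hours / 24
print(f"{seconds:.0f} s = {hours:.0f} h = {days:.1f} days")  # → 800000 s = 222 h = 9.3 days
```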

Yikes!!! 10 days to upload???

We need faster. We need one or two orders of magnitude faster.

Failed attempts 2, 3, 4

I did A/B testing of various alternatives:

  • Use MWS MDP, versus use M3 API (I thought this would be the silver bullet, nope)
  • Create the item from scratch, versus create the item from an item template
  • Call the Add transaction, versus call the Copy transaction
  • Let M3 auto-create (we create the item ourselves but let M3 auto-create the item/warehouse), versus create manually (we create both ourselves).
  • Remote Desktop Connect inside the M3 BE server or database server to avoid any potential network issues
  • etc.

In each case, we got only a marginal, insignificant benefit; the order of magnitude remained equally bad.


Then, I decided to create the records concurrently (in parallel) rather than sequentially (one after the other).

With the help of some command line Kung fu I was able to automatically split my big CSV file into 80 smaller chunks, create 80 M3 Data Import (MDI) descriptions, and run 80 parallel threads that each upload one million records:
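The file-splitting part of that pipeline can be sketched as follows; the real setup also creates the MDI descriptions and launches the parallel upload threads, which are not shown here, and the file and column names are made up for illustration:

```python
from pathlib import Path

def split_csv(path, chunks):
    """Split a big CSV into up to N roughly equal chunk files, repeating the header in each."""
    lines = Path(path).read_text().splitlines()
    header, rows = lines[0], lines[1:]
    size = -(-len(rows) // chunks)  # ceiling division: rows per chunk
    names = []
    for i in range(chunks):
        part = rows[i * size:(i + 1) * size]
        if not part:
            break
        name = f"{path}.part{i:02d}"
        Path(name).write_text("\n".join([header] + part) + "\n")
        names.append(name)
    return names

# demo with a tiny made-up item/warehouse file
Path("items.csv").write_text("ITNO,WHLO\n" + "\n".join(f"I{n},W{n}" for n in range(10)))
print(split_csv("items.csv", 4))
```

Each resulting chunk file would then be fed to its own M3 Data Import thread.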

It turns out M3 can comfortably handle the throughput. It created many jobs in the API subsystem to process the records in parallel:

Each job takes about 1% CPU. So that should max out the server to about 80% CPU utilization.

However, M3 seems to have capped well before the 80 jobs. I guess that is Amdahl’s law, which bounds the maximum expected speedup in parallel computing.

The result was about 1,100 records per second, which would complete in about 20h, less than a day. That is one order of magnitude improvement!!!

With some more benchmarking we could plot the curve of Amdahl’s performance and find the optimum number of jobs and have a pretty accurate estimate of the expected duration.

I also automatically generated a cleanup script that deletes everything, and I noticed the delete takes four times longer than the add.

Reality check

By the time I had come up with my results, the customer had completely changed strategy and decided to upload the records directly with SQL INSERT. [Cough] [Cringe] <your reaction here>. They consulted with some folks at Infor, and apparently it is OK as long as the data has been pre-validated by API or by MWS MDP, and as long as M3 is shut down to avoid collisions with the M3 SmartCache. It is important to get clearance from Infor as SQL INSERT will void your Infor Support warranty.

Future work

I have also heard of the Item Data Interface (IDI) and the Warehouse Interface (WHI). I do not know if those could help. To be explored.


What would you do? How have you uploaded millions of records at once? Leave me a comment below.


Do not take these results as a reference. I was just providing some help to my customer and to the colleagues that are handling this.

I am not a reference for item/warehouse in M3, and I am not a reference for M3 performance, there are special teams dedicated to each.

Also, these results are specific to this customer at this time. Results will vary depending on many factors like CPU, memory, disk, tweaks, etc.

M3 API protocol dissector for Wireshark

Have you ever needed to troubleshoot M3 API calls in Wireshark? Unfortunately, the M3 API protocol is a proprietary protocol. Consequently, Wireshark does not understand it, and it just gives us raw TCP data as a stream of bytes.


I implemented a simple protocol dissector for Wireshark that understands the M3 API protocol, i.e. it parses the TCP stream to show the M3 API bytes in a human-readable format, with company (CONO), division (DIVI), user id (USID), and MI program (e.g. CRS610MI). The dissector is currently not complete and only parses the MvxInit request phase of the protocol.

Reverse engineering

I reverse engineered the M3 API protocol thanks to MvxLib, a free and open source client implementation of the protocol in C# by Mattias Bengtsson (now deprecated), and thanks to MvxSockJ, the official and closed-source client implementation of the protocol in Java (up-to-date).



The MvxInit phase of the protocol is the first phase of the protocol for connection and authentication, and it has the following structure:

struct MvxInit {
   struct request {
      int size;
      char Command[5];             // PWLOG
      char CONO_DIVI[32];
      char USID[16];
      char PasswordCiphertext[16]; // password ^ key
      char MIProgram[32];          // e.g. CRS610MI
      char ApplicationName[32];    // e.g. MI-TEST
      char LocalIPAddress[16];
   };
   struct response {
      int size_;
      char message[15];
   };
};
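Assuming that reverse-engineered layout is accurate, building an MvxInit request is a straightforward fixed-width pack. This is only a sketch to make the field sizes concrete; the space padding and the meaning of the leading size field are my assumptions, not confirmed protocol details:

```python
import struct

# fixed-width layout of the MvxInit request, per the struct above (big-endian)
MVXINIT = struct.Struct(">i5s32s16s16s32s32s16s")

def build_mvxinit(cono_divi, usid, pwd_ciphertext, mi_program, app_name, local_ip):
    def pad(s, n):
        return s.encode("ascii").ljust(n, b" ")
    return MVXINIT.pack(
        MVXINIT.size,                     # total size, assumed to include the int itself
        b"PWLOG",
        pad(cono_divi, 32),
        pad(usid, 16),
        pwd_ciphertext.ljust(16, b" "),   # password ^ key, already enciphered
        pad(mi_program, 32),              # e.g. CRS610MI
        pad(app_name, 32),                # e.g. MI-TEST
        pad(local_ip, 16),
    )

req = build_mvxinit("100 A", "MIUSER", b"\x01\x02", "CRS610MI", "MI-TEST", "")
print(len(req))  # → 153 (4 + 5 + 32 + 16 + 16 + 32 + 32 + 16)
```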

Wireshark Generic Dissector

I used the Wireshark Generic Dissector (wsgd) to create a simple dissector. It requires two files: a data format description, and a file description.

The data format description m3api.fdesc is very similar to the C struct above, where the header is required by wsgd:

struct header
{
   byte_order big_endian;
   uint16 id;
   uint16 size;
}
struct body
{
   struct
   {
      uint32 size;
      string(5) Command;
      string(32) CONO_DIVI;
      string(16) USID;
      string(16) PasswordCiphertext;
      string(32) MIProgram;
      string(32) ApplicationName;
      string(16) LocalIPAddress;
   } MvxInit;
}

Given the limitations of wsgd and its message identifier, I could not work out how to parse more than one type of message, so I chose the MvxInit request; the rest will throw errors.

I made a test M3 API call to M3BE, I captured it in Wireshark, and I saved the TCP stream as a binary file (I anonymized it so I can publish it here):

Then, I used wsgd’s byte_interpret.exe (available in the wsgd downloads), with that test binary file, to fine-tune my data format description until it was correct:
byte_interpret.exe m3api.fdesc -frame_bin Test.bin

Then, here is the wsgd file description m3api.wsgd; note how I listed the TCP port numbers of my M3 API servers (DEV, TST, PRD):




include m3api.fdesc;

To activate the new dissector in Wireshark, simply drop the wsgd generic.dll file into Wireshark’s plugins folder, and drop the two files above into Wireshark’s profiles folder:

Then, restart Wireshark, and start capturing M3 API calls. Wireshark will automatically parse the TCP stream as M3 API protocol for the port numbers you specified in the file description.


Here is a resulting capture between MI-Test and M3BE. Note how you can filter the displayed packets by m3api, and how the protocol dissector understands the phase of the M3 API protocol (MvxInit), the company (CONO), division (DIVI), user id (USID), and MIProgram (CRS610MI). Also, I wrote C code to decrypt the password ciphertext, but I could not figure out where to plug it into wsgd, so the dissector does not decrypt the password.

Limitations and future work

  • I am using M3BE 15.1.2. The M3 API protocol may be different for previous or future versions of M3.
  • I am doing user/password authentication. The M3 API protocol supports other methods of authentication.
  • Given the limitations of wsgd and its message identifier, I will most likely discontinue using wsgd.
  • Instead, I would write a protocol dissector in Lua.
  • Ideally, I would write a protocol dissector in C, but that is overkill for my needs.


That was a simple M3 API protocol dissector for Wireshark that parses and displays M3 API bytes into a human readable format to help troubleshoot M3 API calls between client applications and M3 Business Engine.

About the M3 API protocol

The M3 API protocol is a proprietary client/server protocol based on TCP/IP for third-party applications to make API calls to Infor M3 Business Engine (M3BE). It was created a long time ago when Movex was on AS/400. It is a very simple protocol, lean, efficient, with good performance, it is available in major platforms (IBM System i, Microsoft Windows Intel/AMD, SUN Solaris, Linux, 32-bit, 64-bit, etc.), it is available in major programming languages (C/C++, Win32, Java, .NET, etc.), it supports Unicode, it supports multiple authentication methods, and it has withstood the test of time (since the nineties). It has been maintained primarily by Björn P.

The data transits in clear text. The protocol used to have optional encryption with the Blowfish cipher, but that feature was removed. Now, only the password is encoded, with an XOR cipher, during MvxInit. If you need to make secure calls, use the M3 API SOAP or REST secure endpoints of the Infor Grid.
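To illustrate why an XOR encoding is no real protection: XOR with a fixed key is its own inverse, so anyone who knows (or recovers) the key can decode the password. This is a generic sketch; the key shown is made up, and the actual M3 key and exact scheme are deliberately not reproduced here:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"hypothetical-key"            # NOT the real M3 key, just for illustration
ciphertext = xor_cipher(b"secret", key)
print(xor_cipher(ciphertext, key))   # → b'secret'
```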

For more information about M3 API, refer to the documentation in the M3 API Toolkit:

Experimenting with middle-side modifications

With Infor M3, there are server-side modifications, client-side modifications, and the unexplored middle-side modifications. I will experiment with servlet filters for the M3 UI Adapter.

Modification tiers

There are several options to modify M3 functionality:

  • Server-side modifications are M3 Java mods developed with MAK; they propagate to all tiers, including M3 API, but they are often avoided due to the maintenance nightmare during M3 upgrades, and they are banned altogether from Infor CloudSuite multi-tenant. There are also Custom lists made with CMS010 which are great, they are simply configured by power users without requiring any programming, and they survive M3 upgrades.
  • Client-side modifications for Smart Office are Smart Office scripts in JScript.NET, Smart Office SDK features, applications and MForms extensions in C#, Smart Office Mashups in XAML, and Personalizations with wizards. They do not affect M3 upgrades but they apply only to Smart Office. And client-side modifications for H5 Client are H5 Client scripts in JavaScript, and web mashups converted from XAML to HTML5. Likewise, they do not affect M3 upgrades but they apply only to H5 Client.
  • Middle-side modifications are servlet filters for the M3 UI Adapter. They propagate to all user interfaces – Smart Office AND H5 Client – but this is unexplored and perilous. In the old days, IBrix lived in this tier.

M3 UI Adapter

The M3 UI Adapter (MUA), formerly known as M3 Net Extension (MNE), is the J2EE middleware that talks the M3 Business Engine protocol (MEX?) and serves the user interfaces. It was written mostly single-handedly by norpe. It is a simple and elegant architecture that runs As Fast As Fucking Possible (TM) and that is as robust as The Crazy Nastyass Honey Badger [1]. The facade servlet is MvxMCSvt. All the userids/passwords, all the commands, for all interactive programs, for all users, go thru here. It produces an XML response that Smart Office and H5 Client utilize to render the panels. The XML includes the options, the lists, the columns, the rows, the textboxes, the buttons, the positioning, the panel sequence, the keys, the captions, the help, the group boxes, the data, etc.

For example, starting CRS610/B involves:

The following creates a list with columns:

import java.util.ArrayList;
import java.util.List;

import com.intentia.mc.metadata.view.ListColumn;
import com.intentia.mc.metadata.view.ListView;

// Simplified excerpt: columnCount, posField, and the Caption class
// come from the surrounding MNE code.
ListView listView = new ListView();
List<ListColumn> listColumns = new ArrayList<ListColumn>();
ListColumn listColumn = new ListColumn();
listColumn.setCaption(new Caption());
listColumn.setName("col" + Integer.toString(columnCount));
listColumns.add(listColumn);
listView.addFilterField(posField, listColumn);
listView.setListColumns((ListColumn[]) listColumns.toArray(new ListColumn[0]));

Here is an excerpt of the XML response for CRS610/B that shows the list columns and a row of data:


This experiment involves adding a servlet filter to MvxMCSvt to transform the XML response. Unfortunately, MNE is a one-way function: it produces XML in a StringBuffer, but it cannot conversely parse the XML back into its data structures. Thus, we have to transform the XML ourselves. I will not make any technical recommendations for this because it is an experiment. You can refer to the existing MNE filters for examples of how to use the XML Pull Parser (xpp3-) that is included in MNE. And you can use com.intentia.mc.util.CstXMLNames for the XML tag names.
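To show the shape of such a transformation without depending on XPP3, here is a stand-in sketch using the JDK's built-in DOM APIs. It is only an illustration under assumptions: the <Caption> element name is hypothetical (check CstXMLNames for the real tag names), and a production filter would use the bundled XPP3 instead.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class PanelXmlTransform {

    // Rewrites the text of every <Caption> element in a panel XML string.
    // "Caption" is an assumed tag name, for illustration only.
    static String upperCaseCaptions(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        NodeList captions = doc.getElementsByTagName("Caption");
        for (int i = 0; i < captions.getLength(); i++) {
            Node n = captions.item(i);
            n.setTextContent(n.getTextContent().toUpperCase());
        }
        // Serialize the modified DOM back to a string.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(upperCaseCaptions("<Panel><Caption>Customer</Caption></Panel>"));
    }
}
```

A real filter would apply this kind of rewrite to the response body before it reaches Smart Office or H5 Client.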

To create a servlet filter, compile the class below with the servlet API and the MNE application JAR on the classpath:

javac -cp servlet-api-2.5.jar;mne-app- TestFilter.java

package net.company.your;

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import com.intentia.mc.util.Logger;

public class TestFilter implements Filter {

    private static final Logger logger = Logger.getLogger(TestFilter.class);

    public void init(FilterConfig filterConfig) throws ServletException {}

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
        if (logger.isDebugEnabled()) {
            logger.debug("Hello, World");
        }
        chain.doFilter(request, response);
    }

    public void destroy() {}
}


Add the servlet filter to the MNE deployment descriptor at D:\Infor\LifeCycle\host\grid\XYZ\grids\XYZ\applications\M3_UI_Adapter\webapps\mne\WEB-INF\web.xml:
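A typical Servlet 2.5 filter registration would look roughly like this; mapping by servlet name avoids guessing the URL pattern, but verify the servlet name against the existing declarations in web.xml:

```xml
<filter>
    <filter-name>TestFilter</filter-name>
    <filter-class>net.company.your.TestFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>TestFilter</filter-name>
    <!-- Assumed to match the servlet-name already declared for MvxMCSvt -->
    <servlet-name>MvxMCSvt</servlet-name>
</filter-mapping>
```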


Then, reload the M3UIAdapterModule in the Infor Grid. This will destroy the UI sessions, and users will have to log out and log back on to M3.

Optionally, set the log level of the servlet filter to DEBUG.


MvxMCSvt is the single point of entry. If you fuck it up, it will affect all users, all programs, on all user interfaces. So this experiment is currently a Frankenstein idea that would require a valid business case, a solid design, and great software developers to make it into production.

Also, changes to the web.xml file will be overridden by the next software update.


Is this idea worth pursuing? Is this another crazy idea? What do you think?

Walking directions in a warehouse (part 2)

Today I will illustrate how I started implementing my proof-of-concept for walking directions in a warehouse, and I will provide the source code. The goal is to show the shortest path a picker would have to walk in a warehouse to complete a picking list, and to calculate its distance. This is most relevant for big warehouses and for temporary staff who are not yet familiar with a warehouse. The business benefit is to minimize picking time, reduce labor costs, increase throughput, and gather performance metrics. I used this for my demo of M3 picking lists in Google Glass.

A* search algorithm

I used Will Thimbleby’s wonderful illustration of A* shortest path in JavaScript. We can drag and drop the start and stock locations to move them around and recalculate the path, and we can draw walls. I made the map bigger, and I put a warehouse image as a background.
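For readers unfamiliar with A*, here is a minimal, self-contained sketch of the idea on a grid (in Java rather than Will Thimbleby's JavaScript): walls block movement, and the search is guided by the Manhattan distance heuristic. It returns only the path length; the demo additionally reconstructs and draws the path.

```java
import java.util.Arrays;
import java.util.PriorityQueue;

// Minimal A* on a character grid: '#' is a wall, '.' is walkable.
// Returns the number of 4-directional steps of the shortest path, or -1.
public class AStarGrid {

    static int shortestPath(char[][] grid, int sr, int sc, int tr, int tc) {
        int rows = grid.length, cols = grid[0].length;
        int[][] g = new int[rows][cols];            // best known cost so far
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        // Entries are {f, row, col} where f = g + h (h = Manhattan distance).
        PriorityQueue<int[]> open =
                new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        g[sr][sc] = 0;
        open.add(new int[]{Math.abs(sr - tr) + Math.abs(sc - tc), sr, sc});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int r = cur[1], c = cur[2];
            if (r == tr && c == tc) return g[r][c]; // reached the stock location
            for (int[] m : moves) {
                int nr = r + m[0], nc = c + m[1];
                if (nr < 0 || nc < 0 || nr >= rows || nc >= cols) continue;
                if (grid[nr][nc] == '#') continue;  // wall
                int ng = g[r][c] + 1;
                if (ng < g[nr][nc]) {
                    g[nr][nc] = ng;
                    int h = Math.abs(nr - tr) + Math.abs(nc - tc);
                    open.add(new int[]{ng + h, nr, nc});
                }
            }
        }
        return -1; // no path
    }

    public static void main(String[] args) {
        char[][] warehouse = {
            "....".toCharArray(),
            ".##.".toCharArray(),
            "....".toCharArray(),
        };
        // Around the two-cell wall: 5 steps from (0,0) to (2,3).
        System.out.println(shortestPath(warehouse, 0, 0, 2, 3));
    }
}
```

The demo also supports diagonal moves (step 14 below), which simply means adding the four diagonal directions and an appropriate heuristic.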


Here are the steps I performed:

  1. Double the map’s width/height
  2. Un-hard-code the map width/height
  3. Set the cell size and calculate the canvas width/height
  4. Un-hard-code the cell size
  5. Make a warehouse image in an image editor (I used Gimp)
  6. Add the warehouse image as background of the map
  7. Hide the heat map (search scores)
  8. Patiently draw the map, the walls, the doors, and the stock locations
  9. Save the drawing by serializing the map to JavaScript source code
  10. Replace startMap with the saved drawing
  11. Thicken the path’s stroke
  12. Hide the grid lines
  13. Hide the map
  14. Use diagonals
  15. Emphasize the path length

Here is a video of the making process (watch it in full-screen, HD, and 2x speed):


You can test the result for yourself on my website here.

Here is an animated GIF of the result:

Here is a video of the result for a small warehouse:

Here is a video of the result for a big warehouse:

Source code

I put the resulting HTML, JavaScript source code and images in my GitHub repository for you to download and participate.

Future work

Some of the future work includes:

  • Convert the path length into meters or feet
  • Project the geocoded stock location coordinates to the map’s coordinates
  • Set the start and end locations as input parameters
  • Automatically generate a screenshot of the path for printing alongside the picking list
  • Show the shortest path for an entire picking list using a Traveling Salesman Problem (TSP) algorithm
  • Improve performance for big maps
  • Provide a map editor to more accurately align the warehouse image with the map
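For the TSP item above, a simple starting point (not the eventual solution) is the nearest-neighbor heuristic: from the current position, always walk to the closest unvisited stock location. The sketch below uses straight-line distance between hypothetical map coordinates; a real version would use the A* path lengths instead.

```java
import java.util.Arrays;

// Nearest-neighbor heuristic for ordering picking locations.
// A quick stand-in for a real TSP solver, not an optimal route.
public class PickingRoute {

    // locations[i] = {x, y} in map cells; index 0 is the picker's start.
    // Returns the visiting order as indices into locations.
    static int[] nearestNeighborOrder(int[][] locations) {
        int n = locations.length;
        boolean[] visited = new boolean[n];
        int[] order = new int[n];
        order[0] = 0;
        visited[0] = true;
        for (int step = 1; step < n; step++) {
            int prev = order[step - 1], best = -1;
            double bestDist = Double.MAX_VALUE;
            for (int j = 0; j < n; j++) {
                if (visited[j]) continue;
                double d = Math.hypot(locations[prev][0] - locations[j][0],
                                      locations[prev][1] - locations[j][1]);
                if (d < bestDist) { bestDist = d; best = j; }
            }
            order[step] = best;
            visited[best] = true;
        }
        return order;
    }

    public static void main(String[] args) {
        // Hypothetical coordinates: start, then three stock locations.
        int[][] locations = {{0, 0}, {5, 5}, {1, 0}, {2, 2}};
        System.out.println(Arrays.toString(nearestNeighborOrder(locations)));
    }
}
```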

Also, a much better implementation would be to use Google Maps Indoors.


That’s it. If you liked this, please thumbs up, leave a comment in the section below, share around you, and come author the next post with me.

Walking directions in a warehouse

Two years ago I had implemented a proof-of-concept that showed walking directions in a warehouse to complete a picking list. The idea is to visually represent what a picker has to do, and where they have to go, while calculating the minimum distance they have to walk. It will make it easier to get the job done for pickers that are not familiar with a warehouse, like temporary staff. And the business benefit is to minimize picking time, to make savings in labor costs, and to increase throughput. Also, we can save performance data on the ground, derive the gap between goals versus results, and use that as a feedback loop for continuous improvement of internal processes.

I had used my previous work on geocoding of stock locations in M3, and I had used Will Thimbleby’s JavaScript implementation of the A* search algorithm to calculate and show the shortest path between two stock locations. I have yet to integrate a Traveling Salesman Problem (TSP) algorithm for the entire picking list, similar to my previous work on delivery route optimization for M3.

And there is more work to be done. For instance, the calculation responds quickly on my laptop for about 11 picking list lines, perhaps 13 with the ant colony optimization, but beyond that the calculation time grows exponentially and becomes impractical. Also, the calculation does not take into account bottlenecks or collision avoidance for forklifts. For now, this project is a great proof-of-concept to be explored further.

When Google Maps launched its outstanding Indoor Maps, I shelved my small project. But Google Indoor Maps requires the maps to be public, which our M3 customers are reluctant to do. So for Inforum this week, I un-boxed my project and integrated it into my Google Glass demo. Below are some screenshots of the project. I will be writing a series of posts soon on how to implement it, and I would like your feedback.

In future work, I will implement the same proof-of-concept using Google Indoor Maps.


