Color-coded event graph animations for Mashups

Here is an idea for manually creating color-coded event graph animations for Mashups, leveraging my previous tool that automatically generates event graphs for Mashups. The idea is to color-code the events in the event graph of a Mashup and step forward in time to watch the animation, giving a visual cue of how the Mashup timeline works. Think of it as a time-lapse of the Mashup.

Let’s take the sample Mashup provided at Mashup Designer > Help > M3 Transactions > List & edit Customers. The sequence of events of that Mashup would be the following:

  1. The Mashup starts and loads the list of customers (Startup event).
  2. Optionally, the user can enter a Customer number and click Search (Click event).
  3. The user selects a customer in the list (CurrentItemChanged event), and the Mashup loads that customer’s details.
  4. Optionally, the user changes the values and clicks Save (Click event), and the Mashup changes the values of that customer record.
  5. The Mashup refreshes the list of customers (UpdateComplete event).

The event graph for this Mashup in plain black & white would be:

[event graph of the Mashup]

The idea is to create a color-coded graph of each step of the sequence, save a colored image of each step, and render the result as an animated GIF.

Why it’s important

Color-coded event graph animations will help developers control the quality of their Mashups, help users approve Mashup designs, serve as prototypes for demos and usability testing, and help new developers better understand how the user interacts with the Mashup. There is a lot of activity in the software industry around software mockups, for example the popular Balsamiq Mockups. This new idea I introduce for Mashups brings software mockups a bit closer to M3.

How to create it

Follow these steps to manually create a color-coded event graph animation for Mashups:

  1. Break down the sequence of Mashup events in numbered order, as I did above.
  2. Take the original DOT file of the event graph (refer to my previous tool), and duplicate the file once per step; for example, MyMashup.gv would become:
    MyMashup1.gv
    MyMashup2.gv
    MyMashup3.gv
    etc.
  3. Open each file in a text editor, and add color to the nodes and edges involved in that step, using the following syntax (here for the color red):
    [color=red; fontcolor=red]

    MyMashup1.gv:

    <Global> -> CustomerList [label="Startup :: List"; color=red; fontcolor=red];
    <Global> [color=red; fontcolor=red];
    CustomerList [color=red; fontcolor=red];

    MyMashup2.gv:

    ButtonSearch -> CustomerList [label="Click :: List"; color=red; fontcolor=red];
    ButtonSearch [color=red; fontcolor=red];
    CustomerList [color=red; fontcolor=red];

    MyMashup3.gv:

    CustomerList -> CustomerDetail [label="CurrentItemChanged :: Get"; color=red; fontcolor=red];
    CustomerList [color=red; fontcolor=red];
    CustomerDetail [color=red; fontcolor=red];

    etc.

  4. Use Graphviz to generate an output file of each step (see the example command after this list). For example you will have:
    MyMashup1.png
    MyMashup2.png
    MyMashup3.png
    etc.
  5. Use a graphic editor like GIMP to generate a GIF animation from the individual image files. In GIMP, open all the images as layers, order the layers from last to first, and export the result as an animated GIF with a 1000 millisecond delay between frames.
  6. That’s it!
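For step 4, Graphviz’s dot command-line tool generates one image per file; for example, with the file names above:

    dot -Tpng MyMashup1.gv -o MyMashup1.png
    dot -Tpng MyMashup2.gv -o MyMashup2.png
    dot -Tpng MyMashup3.gv -o MyMashup3.png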

Result

Below is the result, a color-coded animation of the event graph of the sample List & edit Customers Mashup of the Mashup Designer (click on the image to see the animation):

[animated event graph]

Future work

As future work, I would implement a breadth-first graph traversal to automatically walk the Mashup’s event graph, node by node, call Graphviz’s dot layout engine (C:\Program Files (x86)\Graphviz x.y.z\bin\dot.exe) to produce an image at each iteration, and use another tool to merge all the images into an animated GIF.
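Here is a minimal sketch of that automation in JavaScript (Node.js). It assumes the edges in MyMashup.gv are written one per line as Source -> Target [label="..."]; and that the closing brace sits alone on its own line; the start nodes, file names, and dot path are placeholders to adjust:

// Walk the event graph breadth-first and emit one colored frame per step.
const fs = require('fs');
const { execFileSync } = require('child_process');

const dot = 'dot'; // or the full path to Graphviz's dot.exe
const lines = fs.readFileSync('MyMashup.gv', 'utf8').split(/\r?\n/);

// Collect the edges with a naive regex; a real DOT parser would be more robust.
const edges = [];
lines.forEach((line, i) => {
  const m = line.match(/^\s*(\S+)\s*->\s*(\S+?)\s*[\[;]/);
  if (m) edges.push({ from: m[1], to: m[2], line: i });
});

// Breadth-first traversal, seeded with the graph's entry points.
const queue = ['<Global>', 'ButtonSearch'];
const seen = new Set(queue);
let step = 0;
while (queue.length) {
  const node = queue.shift();
  for (const e of edges.filter(x => x.from === node)) {
    step++;
    const frame = lines.slice();
    // Append a second attribute list to color this step's edge (valid DOT).
    frame[e.line] = frame[e.line].replace(/;\s*$/, ' [color=red; fontcolor=red];');
    // Color the two nodes involved, inserting before the closing brace.
    const close = frame.lastIndexOf('}'); // assumes '}' alone on the last line
    frame.splice(close, 0, `${e.from} [color=red; fontcolor=red];`, `${e.to} [color=red; fontcolor=red];`);
    fs.writeFileSync(`MyMashup${step}.gv`, frame.join('\n'));
    execFileSync(dot, ['-Tpng', `MyMashup${step}.gv`, '-o', `MyMashup${step}.png`]);
    if (!seen.has(e.to)) { seen.add(e.to); queue.push(e.to); }
  }
}

The resulting PNG files can then be merged into an animated GIF, with GIMP as described above, or with a command-line tool.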


Google Glass

I just applied to get a pair of Google Glass.

Google Glass is an anticipated product from Google X that aims to bring Augmented Reality to the masses in a sporty pair of glasses containing a video camera, a heads-up display, a processing unit running Android, Wi-Fi connectivity, and a battery (cf. the patent).

I was at Google I/O 2012 where they accepted pre-orders for the Glass Explorer Edition, but I made the regrettable decision not to apply. Google is now offering a second chance: What would you do if you had Glass? Answer with #ifihadglass.

If I had Glass I would improve warehouse workers’ jobs: I would show walking directions to the picking location, display information about the item, and keep track of the picking list. I’m an enthusiast Google I/O attendee working on AR in the enterprise.

This would be a continuation of my previous implementation of Augmented Reality for M3.

Here are my three concept pictures for Google Glass:

[concept pictures 1, 2, and 3]

Here’s my application:

[application picture]

Wish me luck, and see you at Google I/O 2013.

 

M3 + Augmented Reality

In this article I introduce the first implementation of Augmented Reality for Infor M3 that I know of. Augmented Reality is the ability to superimpose digital information on top of real-world objects. This is achieved by locating the user’s head in space, determining the user’s point of view, registering real-world objects, and projecting virtual 3D objects accordingly. Implementing it has been a dear dream of mine. In this example I use fiducial markers and data coming from Item Master – MMS001.

Applications

Augmented Reality for M3 could be used for many applications. For example, it could help a worker find an Item in the warehouse by showing optimized walking directions and the distance to possible picking locations. It could also show a worker contextual information at a glance.

I believe Augmented Reality to be a disruptive technology and one of the next big revolutions in the software industry, with positive impacts similar to those of the Internet and mobile devices, that will reshape entire industries in the next 10 years.

Timeline & motivation

In 1998 I got a summer job in a warehouse for a company that sold car brakes. Every few minutes a printer spit out a picking list of items that I had to collect. As a temporary worker unfamiliar with the place, I spent most of my time wandering through the warehouse, searching for the items, and asking the more seasoned workers for help; I found that inefficient, and I wished the computer gave me a map with directions of where to go. Also, the picking lists were unordered, and I often had to go back to a location I had just visited; I found that inefficient, and I wished the computer optimized the picking lists. Also, once I found the location, I often discovered the boxes were empty and had to ask a forklift driver to replenish the stock location from a box on a higher shelf; I found that inefficient, and I wished the computer planned replenishment ahead of time. That was in 1998; nowadays ERP and warehouse management systems are more common. Yet I kept my wish to make better systems.

Then, in 2001, I read about Professor Steven Feiner’s Augmented Reality KARMA project from 1992 at Columbia University. The system fit in a backpack and had a portable computer, batteries, GPS, a compass, and a head-mounted display. It would give detailed instructions to a user on how to repair a printer. That was my first exposure to Augmented Reality, and ever since I have been wanting to implement it.

In 2007 Apple introduced the iPhone, with a stunning user interface, graphics, and processing power, blowing everybody’s mind about mobility and redefining an industry. And in 2009 Apple added a camera to the iPhone 3GS. The hardware technology for Augmented Reality started becoming accessible to the masses.

In 2009 I met with Brad Neuberg of Google at the Google I/O conference and I started working on a client-side search engine for M3 source code. That was my first exposure to HTML5.

In 2010 I implemented my first Warehouse 3D demo using Google Earth, with real data fed from the ERP, and I projected the result on a large touch screen for an immersive experience. That was my first step towards implementing Augmented Reality for M3.

In 2011 I proposed an idea for an internal project for M3 + Augmented Reality on mobile devices.

In parallel, the WHATWG and W3C have been working hard to standardize HTML5, with the ability to use the webcam from JavaScript with WebRTC, to access pixel data, to paint on the canvas, and to use WebGL for 3D rendering. The software technology for Augmented Reality is becoming accessible to the masses.
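For illustration, here is a minimal sketch of those pieces as they ended up standardized (webcam capture plus per-frame pixel access through a canvas), which is exactly the kind of input a marker detector like JSARToolKit consumes:

// Capture the webcam and read back raw RGBA pixels every frame.
navigator.mediaDevices.getUserMedia({ video: true }).then(function (stream) {
  var video = document.createElement('video');
  video.srcObject = stream;
  video.play();
  var canvas = document.createElement('canvas');
  var ctx = canvas.getContext('2d');
  video.onloadedmetadata = function tick() {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0);
    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // pixels.data is a flat RGBA array, ready for marker detection
    requestAnimationFrame(tick);
  };
});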

More recently I started working on geo-locating Stock Locations in M3. This opens the door to new applications for geo-coded data in M3.

Then, at the Google I/O conference this year, I met Ilmari Heikkinen, who pointed me to his HTML5 Rocks article on Writing Augmented Reality Applications using JSARToolKit. That was the last push I needed to implement actual Augmented Reality for M3. So I did.

Implementation

I used Ilmari’s source code and added a few lines of code to call an M3 API using REST in JavaScript when a marker is detected. In this example the marker is mapped to an Item number (ITNO), but it could also be mapped to a Stock Location (WHSL), for example. Then, for that Item number, I call the M3 API MMS200MI.GetItmBasic and display the Name (ITDS), Description (FUDS), Basic unit of measure (UNMS), Volume (VOL3), Net weight (NEWE), and Gross weight (GRWE).
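A hypothetical sketch of that glue code follows; the endpoint URL, the marker-to-item mapping, and the element id are placeholders for illustration, not the demo’s actual values:

// When the toolkit reports a detected marker, look up the mapped item in M3.
var markerToItem = { 32: 'ITEM001', 64: 'ITEM002' }; // marker id -> ITNO (placeholder)

function onMarkerDetected(markerId) {
  var itno = markerToItem[markerId];
  if (!itno) return;
  var xhr = new XMLHttpRequest();
  // Placeholder URL of a REST gateway exposing MMS200MI.GetItmBasic
  xhr.open('GET', '/m3api/MMS200MI/GetItmBasic?ITNO=' + encodeURIComponent(itno));
  xhr.onload = function () {
    var item = JSON.parse(xhr.responseText);
    // Show the six fields in the section below the canvas
    document.getElementById('itemInfo').textContent = [
      item.ITDS, item.FUDS, item.UNMS, item.VOL3, item.NEWE, item.GRWE
    ].join(' | ');
  };
  xhr.send();
}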

Result

Here is a video of the result. Note the section below the canvas that shows M3 data coming from MMS200MI.GetItmBasic for the detected marker. We can see an activity indicator flickering as the markers are detected. For best viewing, watch the video on YouTube, in HD, and in full screen.

Source code

I provide the result for download at http://ibrix.info/ar/demo.zip with HTML and JavaScript source code, sample fiducial markers, and sample images.

Future work

With the simple example I introduced in this article, I illustrate that the hardware and software technology for Augmented Reality have already become accessible to the masses. The technology is still maturing. There are ongoing projects to provide registration without the use of markers. Also, sensors are becoming better for indoor location.

That’s it for now.

Please click ‘Follow’ to subscribe to my blog.

Geocoding of Stock Locations in MMS010

Here is a video that illustrates the process of setting the Geo Codes XYZ of Stock Locations in MMS010 in Smart Office, i.e. setting the latitude, longitude, and altitude of Stock Locations, a.k.a. geocoding. In my example I determined the coordinates based on a 3D model built in Google SketchUp and geo-located in Google Earth; a GPS receiver with good indoor accuracy would work as well. With geocoded information, we can present data from the Warehouse Management System in a graphical way. This is important for applications such as showing Stock Locations on a map, or finding the shortest path for a picking list.

Demo video

How to proceed

These are the steps I followed in the video to geocode the Stock Locations in MMS010 (the referenced files are listed in the Resources section below):

  1. I used a SketchUp model of a 3D warehouse that I had previously geo-located.
  2. I also used another SketchUp model of the Stock Locations that I had previously uniquely identified.
  3. Then, I used a Ruby script to get the geocoding of the floor plan.
  4. Then, I used another Ruby script to get the geocoding of each Stock Location (see the conversion sketch after this list).
  5. The result is a CSV file of the floor plan’s geocodes and each Stock Location’s geocodes.
  6. Then, I used a Lawson Web Service of type Display Program to set the values of the fields Geo Code X (GEOX), Geo Code Y (GEOY), and Geo Code Z (GEOZ) in MMS010/F for a specified Warehouse (WHLO) and Stock Location (WHSL).
  7. Then, I used a Visual Basic macro for Microsoft Excel to call the Web Service for all Stock Locations.
  8. Finally, I used a script to display the Geo Codes XYZ in MMS010/B1.
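For the intuition behind steps 3 and 4, here is a rough conversion sketch, written in JavaScript for illustration (the actual scripts are in Ruby). It assumes a geo-located model origin and the usual small-offset approximation of roughly 111,320 meters per degree of latitude, and it ignores the model’s rotation:

// Convert a model-relative point (x east, y north, z up, in meters)
// into latitude, longitude, and altitude.
var METERS_PER_DEGREE_LAT = 111320; // approximate

function toGeo(origin, x, y, z) {
  var lat = origin.lat + y / METERS_PER_DEGREE_LAT;
  var lon = origin.lon + x / (METERS_PER_DEGREE_LAT * Math.cos(origin.lat * Math.PI / 180));
  return { lat: lat, lon: lon, alt: origin.alt + z };
}

// Example: a Stock Location 12 m east and 5 m north of the model origin
var geo = toGeo({ lat: 42.03, lon: -88.03, alt: 180 }, 12, 5, 0);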

Result

The result is the list of Stock Locations in MMS010/B1 displaying all the Geo Codes XYZ.

Resources

  • Download the SketchUp model of the geo-located 3D warehouse.
  • Download the SketchUp model of the uniquely identified Stock Locations.
  • Download the Ruby script to get the geocoding of the floor plan.
  • Download the Ruby script to get the geocoding of each Stock Location.
  • Download the resulting CSV file of all Stock Locations and their Geo Codes.
  • Download the Lawson Web Service to set the Geo Codes XYZ of a Stock Location.
  • Download the script to display the Geo Codes XYZ in MMS010/B1.
  • Watch the video of the entire process.


UPDATE

2012-09-28: I had a bug in the Ruby script that miscalculated the Y and Z geocodes for the Stock Locations. I corrected the script and the resulting CSV file and I updated the links above.

Dependency graphs for data conversion

Dependency graphs show the relationships between M3 programs – how they relate to one another – and are useful during data conversion. In this article I discuss their benefits and how to create them.

Background

Data conversion is the process of transferring data from the customer’s legacy system into M3 with tools such as M3 API, M3 Data Import (MDI), Lawson Web Services (LWS), and Lawson Smart Data Tool (SDT) during the implementation project.

Relationships between programs are governed by M3: the M3 Business Engine ensures the integrity of the data across its programs. For example, to create a Warehouse – MMS005 we first need to create a Facility – CRS008; to create a Facility we need a Division – MNS100; to create a Division we need a Company – MNS095; and so on.

Dependency graphs

Dependency graphs show the relationships between M3 programs in a graphical form, as illustrated in the following subset of a larger graph:

Benefits

Dependency graphs are useful during data conversion for various reasons:

  • They visually provide a lot of information with little cognitive effort
  • They help delimit the scope, and help quantify the amount of work
  • They dictate the order in which to proceed
  • They help build the project plan, and help estimate the duration

Dependencies

Smart Data Tool comes with Configuration Sheets that contain a curated list of M3 dependencies. It’s one of the best sources of dependencies available.

The following screenshot shows the Configuration Sheet for Item – MMS001, where column G tells which programs we need to set up before we can create an Item – MMS001:

Dependencies extraction

We can programmatically extract the dependencies from the Smart Data Tool Configuration Sheets by reading the Excel files with ODBC/JDBC, or with Excel libraries available on the Internet (Java, VB, .NET, PHP, etc.).
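As a minimal sketch, assuming the open source SheetJS xlsx package for Node.js, and assuming a sheet layout with the target program in column A and its prerequisite in column G (the file name is a placeholder):

// Read a Configuration Sheet and emit one DOT edge per dependency.
const XLSX = require('xlsx');

const wb = XLSX.readFile('ConfigurationSheet.xlsx');
const ws = wb.Sheets[wb.SheetNames[0]];
const rows = XLSX.utils.sheet_to_json(ws, { header: 1 }); // rows as arrays

console.log('digraph g {');
for (const row of rows) {
  const program = row[0]; // e.g. MMS001 (assumed to be column A)
  const prereq = row[6];  // column G: program to set up first
  if (program && prereq) console.log(` ${prereq} -> ${program};`);
}
console.log('}');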

Graph production

From those dependencies, we can automatically generate dependency graphs in the DOT language, using tools such as Graphviz, an open source graph visualization tool.

Here is a subset of the dependency graph for Customer Addresses – OIS002:

digraph g {
 CRS070 -> OIS002;
 CRS065 -> OIS002;
 OIS002 [label="Customer Addresses\nOIS002"];
 CRS070 [label="Delivery method\nCRS070"];
 CRS065 [label="Delivery terms\nCRS065"];
}

Visualization

We can visualize large graphs in an easy zoomable way with a tool like ZGRViewer.

Conclusion

Dependency graphs greatly facilitate the data conversion effort. We can generate them programmatically with a combination of tools including Lawson Smart Data Tool and the open source Graphviz.

That’s it!

Warehouse 3D demo

I implemented a Warehouse 3D demo that demonstrates the integration capabilities of M3 with cool stuff from the software industry.

The Warehouse 3D demo displays racks and boxes with live data coming from Stock Location – MMS010 and Balance Identity – MMS060. The Location Aisle, Rack, and Level of MMS010 are written dynamically on each box. The Balance ID status of MMS060 is rendered as the color of the box: 1=yellow, 2=green, 3=red, else brown. And the Item Number is dynamically generated as a real bar code that can be scanned on the front face of the box.

Here is a screenshot:

The demo uses the Google Earth plugin to render a 3D model built with Google SketchUp, Ruby scripts to geocode the boxes and identify their front faces, PHP to make the 3D Collada model dynamic, a SOAP-based Lawson Web Service that calls M3 APIs, and the PEAR and NuSOAP open source PHP libraries.

The result is useful for sales demos, and as a seed for customers interested in implementing such a solution.

Try for yourself

http://ibrix.info/warehouse3d/

You can try the demo for yourself with your own M3 environment. For that, you will need several things. You will need to install the Google Earth plugin in your browser. You will also need to deploy the Lawson Web Service for MMS060MI provided here; note that your LWS server must be in a DMZ so that the http://www.ibrix.info web server can make the SOAP call over HTTP. Also, you will need to follow the Settings wizard to set up your own M3 environment, user, password, CONO, WHLO, etc. The result is a long URL that is specific to your settings.

Constructing the 3D model

I built a 3D model with Google SketchUp.

Here is the video of the 3D model being built in Google SketchUp:

You can download my resulting SketchUp model here.

Identifying Aisles, Racks, Levels

Then, I set the Aisle, Rack, and Level of each box as in MMS010 using a custom Ruby script for Google SketchUp.

Here is a video that shows the script in action:

You can download this Ruby script here.

You can download the resulting SketchUp model here.

Identifying front faces

Then, I identified each front face of each box so as to dynamically overlay information, such as the Item Number, Item Name, etc. For that, I implemented another Ruby script.

Here is a video of that process:

You can also download this Ruby script here.

Exporting the Collada model

The original model is a SKP file, which is binary. I exported the model to a Collada DAE file, which is XML. The file is very big: 30,000 lines of XML.

The Collada file contains this:

  • Components (racks, boxes, walls, etc.)
  • Homogeneous coordinates (X, Y, Z, H) relative to the model
  • Absolute coordinates (latitude, longitude)
  • Orientation (azimuth, etc.)
  • Scale
  • Effects (surface, diffusion, textures, etc.)
  • Colors in RGBA

Off the top of my head, the Collada hierarchy in XML is something like this:

Node Instance
	Node Definition
		Instance Geometry
			Instance Material
				Material
					Instance Effect
						Color
						Surface
							Image

Making the model dynamic

The goal is to set the color of each box dynamically, based on the Location of the box, and based on the Inventory Status in MMS060.

Unfortunately, Google Earth doesn’t have an API to change the color of a component dynamically. So, I decided to change the XML dynamically on the server. There are certainly better solutions but that’s the one I chose at the time. And I chose PHP because that’s what I had available on my server ibrix.info; otherwise any dynamic web language (ASP, JSP, etc.) would have been suitable.

In the XML, I found the mapping between the box (nodeDefinition) and its color (material). So I changed the mapping from hard-coded to dynamic with a PHP function getColor() that determines the color based on the Location and on the result of the web service call.

The color is determined by the Balance ID: 1=yellow, 2=green, 3=red, else brown. The Balance ID is stored in the SOAP Response of the web service.
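The mapping is simple enough to sketch; here it is in JavaScript for illustration (the demo’s actual getColor() is a PHP function), with illustrative RGBA values:

// Map an MMS060 Balance ID status to a box color (RGBA, 0..1 per channel).
function getColor(balanceId) {
  switch (String(balanceId)) {
    case '1': return [1.0, 1.0, 0.0, 1.0]; // yellow
    case '2': return [0.0, 1.0, 0.0, 1.0]; // green
    case '3': return [1.0, 0.0, 0.0, 1.0]; // red
    default:  return [0.6, 0.4, 0.2, 1.0]; // brown
  }
}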

Lawson Web Service

I created a SOAP-based Lawson Web Service for MMS060MI. I invoke the SOAP Web Service at the top of the PHP script, and store the Response in a global variable. To call SOAP Web Services, I use NuSOAP, an open source PHP library.

Generating front faces

I dynamically generate a texture for each front face as a PNG image with the Item Number, Item Description, Quantity, and the bar code. I set the TrueType font, the size, the XY coordinates, and the background color.

Bar code

I generate an image of the bar code based on the Item Number using PEAR, an open source PHP library.

Settings wizard

I made a Settings wizard to assist the user in setting up a demo with their own M3 environment, user, password, CONO, WHLO, etc.

Applications

This Warehouse 3D demo illustrates possible applications such as:

  • Monitoring a warehouse
  • Locating a box for item picking
  • Implementing Augmented Reality to overlay relevant data on top of the boxes

Demo

Finally, I made a demo video using the back projection screen at the Lawson Schaumburg office, using Johnny Lee’s Low-Cost Multi-point Interactive Whiteboards Using the Wiimote and my homemade IR pens to turn the back projection screen into a big touch screen. The 3D model in the demo has 10 Aisles, 6 Racks per Aisle (except the first Aisle, which has only 4 Racks), and 4 Levels per Rack; that’s 224 boxes. There is also a floor plan that illustrates that structure.

Limitations

The main limitation of this demo is performance. When programming with Google Earth, we do not have the ability to change a 3D model dynamically. I would have liked to dynamically set the color of a box and dynamically overlay text on the face of a box. Because that capability is lacking – there’s no such API in the Google Earth API – I chose to generate the XML of the 3D model dynamically on the server. As a result, the server has to send 30,000 lines of XML to the web browser over HTTP, it has to generate 224 PNG images and transfer them over the network, and the Google Earth plugin has to render it all. As a consequence, it takes between one and four minutes to fully download and render the demo. This design turns out to be inadequate for this type of application. Worse, it is neither scalable nor improvable. I would have to rethink the design from scratch to get a more performant result.

Future Work

If I had to continue working on this project (which is not planned), I would implement the following:

  • Ideally, we would generate boxes, colors, and text dynamically on the client-side, with JavaScript and WebGL for example. Google Earth doesn’t support that, and generating the model on the server-side turns out to be a bad design. So we need a different technique.
  • Also, we could use a better 3D client, like O3D.
  • Also, we would need to implement easy keyboard navigation, like the First Person Camera demo, and like the Monster Milktruck demo.
  • Also, we would need to implement hit detection, so as to click on a box and display more M3 data in a pop-up, for example. Google Earth supports event listeners but doesn’t yet support hit detection.
  • Finally, we would need to improve performance by an order of magnitude.

Thanks

Special thanks to Gunilla A for sponsoring this project and making it possible.

Resources

  • Download Ruby script to set the Aisle, Rack, and Level of each box as in MMS010
  • Download Ruby script to identify each front face of each box so as to dynamically overlay information
  • Download SketchUp model with floor plan, geo-location, racks, and walls
  • Download SketchUp model of boxes identified by Stock Location
  • Watch video of the 3D model being built in Google SketchUp
  • Watch video of the process of setting the Aisle, Rack, and Level of each box as in MMS010
  • Watch video of the process of identifying each front face of each box
  • Watch video of the demo on the large touch screen
  • Download Lawson Web Service for MMS060MI


UPDATE

2012-09-27: I added the SketchUp models and Ruby scripts for download.

M3 + Augmented Reality (idea)

Here is an idea that would be great to implement: M3 + Augmented Reality. I believe AR to be one of the next big revolutions in the software industry, and the technology is available today. We have mobile phones with cameras, GPS, compass, and millimetric indoor radio positioning, fast CPU for feature registration, localization and mapping, REST Web Services, etc. It went from being mostly reserved to research labs, to being the hype of emerging start ups. Get ready for the future 🙂