M3 picking lists in Google Glass @ Inforum

I am very pleased to announce that, after months of working on it here and there in the evenings, voluntarily, after work hours, I finally completed and presented both of my demos, M3 picking lists in Google Glass and in Augmented Reality, at Inforum. They were a success. I showed the demos to about 100 people per day over six days, flawlessly, and to a very positive reception. The goal was to show proofs of concept of wearable computers and augmented reality applied to Infor M3. My feet hurt.


This is my second Glass app after the one for Khan Academy.

This Glass app has the following features:

  • It displays a picking list from Infor M3 as soon as it’s created.
  • For each pick list line it shows the quantity (ALQT), item number (ITNO), item description (ITDS), and stock location (WHSL) as aisle/rack/level.
  • It displays the pick list lines as a bundle for easy grouping and finding.
  • It shows walking directions in the warehouse.
  • It has a custom menu action for the picker to mark an item as picked and to change the status of that pick list line in M3.
  • It uses the built-in text-to-speech capability of Glass to illustrate hands-free picking.
  • It’s bi-directional: from M3 to Google’s servers to push the picking list to Glass, and from Google’s servers to M3 when the picker confirms a line.
  • The images come from Infor Document Management (formerly Document Archive).
  • I developed the app in Java as an Infor Grid application.
  • I created a custom subscriber and added a subscription in Event Analytics to M3:MHPICL:U.
  • It uses the Google Mirror API for simplicity to illustrate the proof-of-concept.
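The card layout for one pick list line could be sketched like this (the PickListCard class and the HTML fragment below are illustrative assumptions; the actual app fills the Mirror API SIMPLEEVENT template):

```java
// Illustrative sketch: format one M3 pick list line as an HTML timeline card.
// The class name and layout are assumptions; the real app fills the Mirror
// API SIMPLEEVENT template with ALQT, ITNO, ITDS, and WHSL.
public class PickListCard {
    static String toHtml(String alqt, String itno, String itds, String whsl) {
        return "<div class=\"text-auto-size\">"
             + "<p class=\"yellow\">" + alqt + "<sub>" + itno + "</sub></p>"
             + "<p>" + itds + "</p>"
             + "<p class=\"gray\">" + whsl + "</p>"
             + "</div>";
    }
}
```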

I am making the resulting source code free and open source on my GitHub repository, and I have been writing up the details on this blog. I will post the remaining details soon.


I want to specially thank Peter A Johansson of Infor GDE Demo Services for always believing in my idea, his manager Robert MacCrate for providing the servers on Infor CloudSuite, Philip Cancino, formerly of Infor, for helping with the functional understanding of picking lists in M3, Marie-Pascale Authié of Infor Pre-Sales for helping me set up and create picking lists in M3 and for also doing the demo at Inforum, Zack Makris of Infor Labs for providing technical support, Jonathan Amiran of Intentia Israel for helping me write the Grid application, and some people of Infor Product Development, who chose to remain anonymous, for helping me write a Java application for Event Hub and Document Archive. I also want to specially thank all the participants of Inforum who saw the demo and provided feedback, and all of you readers for supporting me. I probably missed some important contributors; thank you too. And thanks to Google X (especially Sergey Brin and Thad Starner) for believing in wearable computers and for accelerating the eyewear market.


Here below are the screenshots, captured with Android screencast. They show the bundle cover, the three pick list lines with the items to pick, the Confirm custom menu action, the Read aloud action, and the walking directions in the warehouse:



Here below are three vignettes of what the result would look like to a picker:




Here are some photos at Inforum:

In the Manufacturing area:



In front of the SN sign:


Holding my Augmented Reality demo:

Playing around with picking lists in virtual reality (Google Cardboard, Photo Spheres, and SketchFab):

Playing around with picking lists in Android Wear (Moto 360):


That’s it! If you liked this, please thumbs up, leave a comment, subscribe to this blog, share around you, and come help me write the next blog post, I need you. Thank you!

How to run a Google Glass app in Infor Grid

Today I will detail the steps to run a Google Glass app in Infor Grid. This is part of my project to have M3 Picking Lists in Google Glass.

For that, I will develop a very simple Glassware using the Google Mirror API Java Quick Start Project, and I will use the technique I learned in Hacking Infor Grid application development. The integration will be bi-directional: the Grid app will communicate to the Glass API on Google’s servers to insert cards in the timeline, and conversely when the user replies to a timeline card Google’s servers will send notifications to the Grid app provided it is located at a routable address with a valid SSL certificate.

This is a great demo of the integration capabilities of the Infor Grid. I worked on it a little here and there on evenings and weekends over several months, and I distilled the resulting steps here and into a 15-minute video so you can play along. You will need a pair of Google Glass.

STEP 1: Setup Eclipse with Maven

I will start with the instructions for the Google Mirror API Java Quick Start Project:

For the Prerequisites, I need Java 1.6 and Apache Maven for the build process. I will download Eclipse IDE for Java Developers, which has the Maven plugin integrated:

STEP 2: Setup the Glass Mirror API Java Quick Start Project

Then, I will download the Glass Mirror API Java Quick Start Project from the GitHub repository:

Then, I will import it in Eclipse as an Existing Maven Project with the pom.xml:

I will import the Infor Grid library grid-core.jar:

Then, I will replace some of the source code to adapt it to the Infor Grid, using Eclipse File Search and Replace:

I will replace the code for the Logger in all files (from/to):

import java.util.logging.Logger;
import com.lawson.grid.util.logging.GridLogger;
Logger LOG = Logger.getLogger
GridLogger LOG = GridLogger.getLogger

Then, I will add the context path to the URLs of all files (from/to):

url.setRawPath(req.getContextPath() +
$1httpRequest.getContextPath() + "/

For the subscription to notifications, I will replace the callback URL in NewUserBootstrapper.java with a routable FQDN or IP address that has a valid SSL certificate to handle the notification:

Subscription subscription = MirrorClient.insertSubscription(credential, WebUtil.buildUrl(req, "/notify").replace("m3app-2013.company.net", ""), userId, "timeline");

Then, I will replace the code in NotifyServlet.java that processes the notification from the HTTP request body, because notificationReader.ready() apparently always returns false in the Infor Grid, which throws IllegalArgumentException: no JSON input found. Here is the new code:

int lines = 0;
String line;
while ((line = notificationReader.readLine()) != null) {
	notificationString += line;
}
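A self-contained version of that body-reading fix could look like this (a sketch; readBody is a hypothetical helper name):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;

// Sketch: read the whole notification body by looping on readLine() instead
// of relying on ready(), which appears to always return false under the
// Infor Grid.
public class NotificationBody {
    static String readBody(Reader reader) throws IOException {
        BufferedReader in = new BufferedReader(reader);
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
        return body.toString();
    }
}
```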

Then, I will setup the Project in the Google Developers Console with the Google Mirror API, the client ID and client secret credentials for OAuth 2.0, and the Consent screen:

Then, I will paste the client ID and secret in the oauth.properties of the project:

Then, I will create and run a new Maven Build Configuration using goal war:war:

That will create a WAR file that I will use to deploy as a web application in my Grid application:

STEP 3: Setup the Infor Grid application

Then, create and install an Infor Grid application GoogleGlass based on the HelloWorld app:

STEP 4: Test

Then, launch the app:

Authenticate to the Google account associated with Glass, and click Accept to grant app permissions:

Use the app, insert cards in the timeline:

You can also tap Glass to reply to a timeline card:

And the Grid app will receive the notification with a JSON string:

Resulting video

Here is the video, with hours of work distilled into 15 minutes (I recommend watching in full screen, in HD, and at 2x speed):

STEP 5: Summary

That was how to run a Google Glass app in Infor Grid. The main steps are:

  1. Setup Eclipse with Maven
  2. Setup the Glass Mirror API Quick Start Java project
  3. Setup the Infor Grid application
  4. Test

The integration is bi-directional: the Grid app adds cards to the Glass timeline, and when the user takes action on a card Google’s servers send a JSON notification to the Grid app.

The result is great to demo the integration capabilities of the Infor Grid, and it will be useful for my project to show M3 picking lists in Glass.

Future work

In future work, I will use the bi-directional communication so that pickers in a warehouse can tap Glass to confirm picking lists: Google’s servers will send the JSON notification to the Grid app, and the Grid app will call the M3 API MHS850MI (AddCOPick and AddCfmPickList) to confirm the picking.
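A hedged sketch of what building that confirmation call could look like (the /m3api-rest/execute path follows the common M3 API REST convention, but the transaction and parameter names here are assumptions to verify against the MHS850MI documentation):

```java
// Sketch: building the M3 REST API URL to confirm a pick list.
// The /m3api-rest/execute path is the usual M3 API REST convention; the
// parameters (DLIX = delivery number, PLSX = pick list suffix) are
// assumptions to check against the MHS850MI documentation.
public class ConfirmPick {
    static String buildUrl(String host, String dlix, String plsx) {
        return "https://" + host + "/m3api-rest/execute/MHS850MI/AddCfmPickList"
             + "?DLIX=" + dlix + "&PLSX=" + plsx;
    }
}
```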

That’s it. If you liked this, please give it a thumbs up, leave your comments, share around you, and contribute back by writing your own ideas. Thank you.

I believe Google is silently killing Glass

I am starting to believe that Google is silently killing Glass for lack of success, and rumors on The Interwebs Inc. are starting to confirm it, so I too am phasing out my Glass project and moving on to Augmented Reality. I call Glass the Palm Pilot of 2013.


For those of you who remember 1996, when the Palm Pilot was released, it instantly became the new must-have gadget of the geek universe; it was the future, right now, in a world of brick phones… except it never took off: it came too early for its time, it was clunky, it needed to be manually synced, and it suffered from the what-can-I-do-with-it syndrome. A small faction of resistant users kept it afloat for many years, and then it was shuffled and bounced around from bad to worse investments at HP, to the surprise of most.

After that, phone manufacturers continued to fool us with so-called innovations for another decade.

And then came the iPhone in 2007. It blew everybody away, and it secured the smartphone as obvious and unquestionable in every category.

The rumors

The Intertubes (TM) are starting to confirm the rumors that Glass is going away. I heard from a friend that a friend who works at Google said Google had asked some of their employees to return their Glass devices and had reassigned them to other projects. Then, at the I/O conference this year, Google played radio silence on Glass: no BMX, no parachutes, not even a peep. Then, the video for the release of Glass in London got a whopping 7k hits when I watched it, and it is now stalling at 300k hits. Then, my Google searches for my Glass software development questions seem to return fewer and fewer hits. Finally, another friend of other Googler friends said the topic of Glass shutting down came up during a conversation. That is definitely solid proof, don’t you think?

My experience

I can speak of my own experience with Glass. I originally bought Glass because I was excited to finally try software development for wearable computers and Augmented Reality. I grew up reading in the 90s about the pioneers of the MIT Media Lab, Steve Mann and Thad Starner of the Wearable Computing Group and Hiroshi Ishii of the Tangible Media Group, and Professor Steven Feiner of Columbia University with his research on Augmented Reality. It seemed Glass was set to be the first of such devices ready for the mass market.

I have had Glass for 9 months, and from this gestation emerged the reality. I sadly came to admit I never wear it; it stays in the drawer. Ever since October, I have continued to feel like Robocop with a thing on my face; it changes my behavior, as if everybody were looking suspiciously at me, and that makes me uncomfortable. I get many positive reactions from people who are curious about this novelty, but mostly I get too many negative reactions from Glass haters misconceiving it as an always-on surveillance camera with face recognition. False. I sympathize with them because, like them, I value the protection of our freedom and privacy, and then I cannot help but satirically warn them that I can turn on the X-ray vision. Then, in May at the Augmented World Expo 2014, I understood clearly that everybody had tried Glass for Augmented Reality and everybody had given up; in hindsight, they all admitted Glass had never been intended for Augmented Reality. Reality check. It changed my view of Glass as a wearable device. I kept Glass for software development. And then the technical problems: the battery exhausts too quickly, the device heats up and slows down to a freeze, and it is limited in terms of applications. And I am not good enough an Android developer to squeeze the juice out of it. Now I am using it just for pictures and videos; it is excellent for point-of-view shots. And so it has become the most expensive camera I possess.

I am still glad I had Glass: I killed a fever of want, I boosted my software development skills in the process, I anchored my confidence that I can still implement new technologies at 37, and I confirmed that wearable devices and Augmented Reality are here.

As for my smartphone, I have this weird dream-like feeling of needing to wrap the phone around me like a cloth, dive inside the screen, and swim in a giant world of digital information. That is my need for the holodeck, and Glass does not come within an inch of fulfilling it.

What’s next

It is a fact wearable computers are here to stay. It just will not be with Glass. Glass was a milestone in history that will remain in the archives as one of the first general wearable devices. Glass also helped spawn the industry of eyewear, and there are valid niche markets where Glass-like devices fit perfectly, for example this safety device for motorcyclists from FUSAR Technologies that displays a rear camera inside the helmet.

I feel sad for Thad Starner and Sergey Brin, who really believed in Glass; they have other awesomeness up their sleeves. Steve Mann does not seem to be affected, as he is doing great work sensitizing us to sousveillance and working for META. If I project the analogy of the Palm-versus-iPhone history onto Glass, we will see the natural heir of Glass, the obvious leader of wearable computers, 11 years from now, in 2025. Yikes! I say it will be a holodeck, light-guided into the eye, mixed with the Minority Report interface of John Underkoffler (he was one of Hiroshi’s students) and Tony Stark’s Iron Man helmet.

Meanwhile, I think Google will push full throttle with Project Tango; after all, anything that Johnny Chung Lee touches becomes a hit. Tango is a dream for Augmented Reality enthusiasts. I am also keeping an eye on castAR and Projective Augmented Reality, and I am eagerly awaiting their first device; Jeri Ellsworth is a self-taught pioneer with many followers who does not come from academia.

As for me, I will finish my Glass proof-of-concept to honor the commitments I made to my partners, and after that I will learn to implement Augmented Reality with the Metaio SDK, Unity3D, Qualcomm Vuforia, and OpenCV.

Google Glass for Khan Academy

At the Pre-I/O Glass Hackathon in San Francisco last weekend, we built a Google Glass app for Khan Academy. It may well be the first of its kind. This first alpha version, aimed at teachers, displays a notification on Glass when students are struggling during an exercise. We published it on GitHub as GlassKhan.

About Khan Academy

Khan Academy flipped education inside out. When I was in school, we would passively sit in class and listen to teachers lecture for an hour, and then they would give us homework to do on our own at home. Khan Academy inverted that system: students watch the lectures on the Internet at their own pace, where they can pause, accelerate, or deepen a section of interest, and they do the homework in class, in groups, with the teacher. That creates more engagement, as students are familiar with the technologies and can help each other in class, between peers, with the teacher present to answer questions. And as students do the exercises on Khan Academy’s website, teachers get hundreds of data points about students to tell who is behind, who is ahead, on what topics, and more. So we set out to build a Glass app for Khan Academy.

The hackathon

The hackathon was a 24-hour coding competition with education as the main theme, to “see leaps forward in Education using technologies like Google Glass”. I teamed up with Ross Gruetzemacher and Ryan McCormick. We brainstormed several ideas, including an app for Khan Academy.

The first premise was that we would build an app with the teacher in mind since it’s currently easier to justify the cost of one Glass per teacher rather than one Glass per student. In the future the cost of Glass will probably be lower but that future hasn’t happened yet.

The second premise was that such an app mustn’t already exist. We quickly confirmed that on the Glassware page. We also did a Google Search for the terms “Google Glass” and “Khan Academy”, and we found two relevant hits. The first hit, from Forbes, Google Ventures Launches Glass Collective With Andreessen, Kleiner Perkins, To Fund Google Glass Startups, said “Doerr is excited about Glass applications for education as well as health care. He cited companies like Udacity and Coursera and Khan Academy that are working on education but sees Glass as adding a whole new layer to education.” The second hit, from Kurzweil Accelerating Intelligence, Will anyone create a killer app for Google Glass?, said “Khan Academy software engineer Stephanie Chang, who was at the Foundry events, has ideas such as creating a Glass app for teachers, who could be notified as they give a lecture which students are struggling.” Both articles validated our idea. There’s funding available, and there’s demand for such an app.


Teachers have a dashboard on Khan Academy’s website that shows the roster of students, classes, progress reports, and hundreds of data points.

Here is a screenshot of the coach dashboard:

And Khan Academy has an API that gives access to data about which student is doing which exercise, when, and at what level. The data is private to the coach and to the students who have accepted a teacher as their coach. And the data is protected with OAuth.

Here is a screenshot of the API Explorer:

As soon as a student is struggling with an exercise, the API knows about it. The criteria for being considered struggling on an exercise are based on metrics like time spent on the exercise and the number of hints used. We set up dummy teacher accounts and dummy student accounts, and we purposely failed plenty of elementary math exercises to trigger the flag struggling = true. For that we query the API /api/v1/user/exercises/<exercise_name>. When that event happens, we send a timeline card to the teacher’s Glass with the nickname of the student, the level “struggling”, the name of the exercise, and the time “just now”. That’s four pieces of data.

It’s a useful micro-interaction for the teacher and it follows the principle about the now as explained by Google Developer Expert Allen Firstenberg:

From a technical point of view, we used the Google Mirror API and the Java Quick Start. We built a decorator and an adapter in Java for the Khan Academy API. And we built a loop: for each student, for each exercise, test if struggling == true.
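That loop can be sketched like this (the Student and Exercise types are illustrative stand-ins for the parsed Khan Academy API response):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the notification loop: for each student, for each exercise,
// collect the ones flagged as struggling. The record types are illustrative
// stand-ins for the parsed Khan Academy API response.
public class StrugglingCheck {
    record Exercise(String name, boolean struggling) {}
    record Student(String nickname, List<Exercise> exercises) {}

    static List<String> findStruggling(List<Student> students) {
        List<String> cards = new ArrayList<>();
        for (Student s : students) {
            for (Exercise e : s.exercises()) {
                if (e.struggling()) {
                    // In the real app, this is where a timeline card would be
                    // sent to the teacher's Glass via the Mirror API.
                    cards.add(s.nickname() + " is struggling on " + e.name());
                }
            }
        }
        return cards;
    }
}
```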


Here are two screenshots of the result. We randomized the pairs of colors for the nickname and the exercise name so teachers get distinct visual cues:



This app will help students who are shy and don’t want to raise their hand and admit they don’t know something while their friends are watching. With the notification, the teacher can go help the student without the student having to feel embarrassed about asking for help.

Also, this will help teachers not be distracted by the iPad or the computer.

And teachers that have an inclination for programming can take the source code and advance it for their own creative ideas.


This first alpha version of the app contains shamefully hard-coded OAuth credentials for Khan Academy that we need to completely re-code. We also lack a notification system; we simply implemented a one-iteration loop that pulls from Khan Academy’s website. And there are rough edges to smooth.


Future versions of the app could show other types of events such as levels of success rather than levels of struggle.

We’d like to see this app survive beyond the hackathon. If you’re a developer and would like to contribute, please contact us. And if you work at Khan Academy, please contact us too: we need new APIs from you (coach data, notifications, and the servers returned HTTP 500 “Internal Server Error […] the server is overloaded”).


That’s it! Please like, comment, and spread the goodness to your teachers and students.

Glass project hosted by Infor CloudSuite

I’m pleased to announce my Google Glass project is being hosted by Infor CloudSuite.

Project overview

I’m developing an application for Google Glass to have rich interactive picking lists from Infor M3 with:

  • list of items to pick with quantities and stock locations
  • a picture of the item from Document Archive so the user can get a visual cue of what to pick
  • floor plan of the warehouse with walking directions so the user can optimize the picking time
  • tap to confirm picking


It’s a proof-of-concept of wearable computing for M3 and a base for future experiments in Augmented Reality for M3. Also, it’s a great visibility to showcase the integration capabilities of M3, and it’s a way to strengthen the collaboration between all the different actors (management, product development, consultants, colleagues, customers, partners).

I will make the resulting source code free software and open source on this blog and on my GitHub repository. I’m passionate about AR, and I need to uplift my skills, so I’m working independently on my own during evenings and weekends. My goal is to complete the first set of features before Google I/O 2014, two weeks from now. After that, my next goal will be to complete the second set of features for Inforum 2014 in New Orleans on September 15-18 this year, where I will do a demo with Peter.


Peter A Johansson is the manager of the Global Demo Environment (GDE) Demo Services team at Infor. Peter is a visionary with the necessary pragmatism and focus to make ideas a reality. I was looking around for an M3 server with Infor Smart Office and Infor Process Automation to do the software development on, so I pitched the idea to Peter in April, and at once he was attracted to it. He saw the potential for a great demo at Inforum and suggested the idea to his manager. Peter and I have worked together in the past, and he knows my drive, so he said: “We all know that if you give Thibaud what he needs then cool-stuff happens :-)” And they approved, and made available for this project a full stack of M3 13.2 demo image servers on Infor CloudSuite, deployed as virtual machines on Amazon Web Services (AWS).

Servers on Infor CloudSuite

The M3 stack for this project consists of: LifeCycle Manager (LCM), M3 Business Engine (M3 BE), Grid, Enterprise Search (IES), M3 Enterprise Collaborator (MEC), M3 BE BODs, Smart Office (ISO), H5 Client, Ming.le, ION Desk, Graphical Lot Tracker (GLT), Customer Lifecycle Management (CLM), Counter Sales for Distribution, Document Archive (DAF), MetaData Publisher (MDP), StreamServe for MOM, M3 Report Manager (MRM), Business Performance Warehouse (BPW), M3 Analytics, Event Hub, Event Analytics, Process Automation (IPA), Product Configuration Management (PCM), and more. For this project I only need M3 Business Engine, Event Hub, Event Analytics, Process Automation, and Document Archive.

The servers cost money per hour of uptime, and I can only work on this three times a week, so I need to use the uptime carefully, on a schedule we set up together based on my preferences.

Here is a screenshot of the Infor CloudSuite overview page:

Here is a screenshot of the schedule I chose:

Here is a screenshot of the deployment selection (AWS in my case):

Demo @ Inforum

Peter and I will do a demo at the M3 Labs booth at Inforum in September. Come check it out. And if there is a feature you’d like to see at the demo let me know in the comments below.

After the demo at Inforum in September, I’ll re-assess the future of the project.


That’s it for the announcement! Special thanks to Peter A Johansson, his manager, the GDE Demo Services team, and Infor CloudSuite for believing in and sponsoring this project.

Augmented World Expo 2014

Last week I attended the Augmented World Expo (AWE) 2014 [1] in Santa Clara, one of the world conferences on Augmented Reality, Virtual Reality, Augmented Virtuality [2], and smart glasses [3]. There, I saw Steve Feiner, pioneer of Augmented Reality in the 1990s [4] [5], Professor of computer science and director of the Computer Graphics and User Interfaces Lab at Columbia University, and adviser for Space Glasses at Meta [6]. I also saw Mark Billinghurst, director of the HITLab in New Zealand [7], who created the AR Toolkit, which I later used (the JavaScript port) for my prototype M3 + Augmented Reality. I didn’t see Steve Mann, also an adviser for Meta, and one of the pioneers of the Wearable Computing group in the Media Lab in the 1980s [8]; Thad Starner was in that group and later went on to design Google Glass for Sergey Brin [9]. I got inspiration from their work when I was younger, and I was excited to see them.

I went to the conference to learn more about the future. I’m currently working on a personal project to develop an app to display picking lists in Google Glass with data from Infor M3.

Here are some pictures of me at the conference, dreaming my vision of future picking lists 😉


M3 Picking Lists in Google Glass

Here is the first tangible result of M3 Picking lists in Google Glass. This is a continuation of my previous posts in the series: Getting and processing an M3 picking list and Hello Google Glass from Infor Process Automation.


As a reminder, I’m developing a proof-of-concept to show rich picking lists from Infor M3 in Google Glass, with item number, item description, quantity, stock location, an image of the item from Infor Document Archive, and walking directions on a warehouse plan. Also, for interactivity, the picker will be able to tap Glass to confirm the picking and to take a picture of a box at packing.

The result will be useful to showcase the integration capabilities of M3, it’s a first implementation of wearable computing for M3, and it sets a precedent for future Augmented Reality experiments with M3. The advantages for a picker in a warehouse would be more efficiency and new capabilities. For me, it’s a way to keep my skills up-to-date and an outlet for my creativity. It’s a lot of software development work, and I’m progressing slowly but steadily on evenings and weekends. If you would like to participate, please let me know.

Why it matters

According to Gartner, by 2016, wearables will emerge as a $10 billion industry [1].

According to Forbes, “Smart glasses with augmented reality (AR) and head-mounted cameras can increase the efficiency of technicians, engineers and other workers in field service, maintenance, healthcare and manufacturing roles” [2].

According to MarketsandMarkets, the Augmented Reality and Virtual Reality market is expected to grow and reach $1.06 billion by 2018 [3].

Basics of Google Glass

Google Glass is one of the first wearables for the mass market. It is a notification device worn on the face: an eyewear with all the capabilities of an Android device, including a camera, head-mounted display, touchpad, network connectivity, voice recognition, and location and motion sensors.

It works by displaying a timeline of cards we can swipe back and forth to show past and present events.

To write applications for Google Glass we can use the Mirror API, the Glass Development Kit (GDK), or standard Android development. I will use the Mirror API for simplicity.

I will display an M3 picking list as a series of detailed cards on the timeline that pickers can swipe and interact with like a to-do list as they are progressing in their picking.

Card template

For the template of the timeline card, I will use SIMPLEEVENT from the Google Mirror API Playground, one card per picking list line, to easily identify the four pieces of information the picker will need to read per line: item quantity, item number, item description, and stock location:


I will use Bundling with bundleId and isBundleCover to group the cards together by picking list:

Picking list lines

I will get the picking list lines with the SQL from my previous post, and I will sort them in descending order: Glass displays cards last in, first out, so the lines will appear in their natural order to the user.
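That insertion order can be sketched as follows (assuming the lines carry a numeric line number; in the real flow the ordering is done in the SQL):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: insert lines in descending order so that Glass, which shows the
// most recently inserted card first, presents them to the picker in order.
public class LineOrder {
    static List<Integer> insertionOrder(List<Integer> lineNumbers) {
        List<Integer> sorted = new ArrayList<>(lineNumbers);
        sorted.sort(Comparator.reverseOrder());
        return sorted;
    }
}
```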


HTML card

I will use the SIMPLEEVENT template’s HTML fragment and replace the sample values with item quantity ALQT, item number ITNO, item description ITDS, and stock location WHSL:

    <div class="text-auto-size">
      <p class="yellow"><!SQL_H6ALQT><sub><!SQL_H6ITNO></sub></p>


I will embed the HTML fragment in the JSON payload for the Mirror API:

	"html": html,
	"bundleId": DLIX

Bundle cover

The bundle cover will be:

	"text": "Picking list <!DLIX>",
	"bundleId": <!DLIX>,
	"isBundleCover": true
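Put together, one line card and the bundle cover could be assembled like this (a sketch; the JSON keys follow the Mirror API timeline item schema, and DLIX is the M3 delivery number used as the bundle id):

```java
// Sketch: building the Mirror API JSON payloads for a line card and the
// bundle cover. In the real flow the values come from the SQL query.
public class TimelinePayloads {
    static String lineCard(String html, String dlix) {
        return "{ \"html\": \"" + html.replace("\"", "\\\"") + "\", \"bundleId\": \"" + dlix + "\" }";
    }

    static String bundleCover(String dlix) {
        return "{ \"text\": \"Picking list " + dlix + "\", \"bundleId\": \"" + dlix + "\", \"isBundleCover\": true }";
    }
}
```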

Infor Process File

My process file in Infor Process Designer is illustrated in this GIF animation (you can open it in Gimp and see the layers in detail):


I had to solve the following two trivial problems:

  • I had problems parsing new line characters in the MsgBuilder activity node with JavaScript in an Assign activity node. According to the ECMAScript Language Specification – ECMA-262 Edition 5.1 on String Literals the character for new line is simply \n (line feed <LF>). Then Samar explained to me that the MsgBuilder activity node in IPA uses both characters \r\n (carriage return <CR> and line feed <LF>).
  • JSON is not implemented in IPA for JavaScript in the Assign activity node. So I had to manually add it to IPA. I used Douglas Crockford’s json2.js, and I appended it in the two files <IPDesigner>\IPD\pflow.js and <IPALandmark>\system\LPS\pflow.js.
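The line-ending pitfall from the first point can be reproduced in a few lines (plain Java here for illustration; the same applies to the JavaScript in the IPA Assign node):

```java
// Sketch: splitting MsgBuilder output on "\n" alone leaves a trailing "\r"
// on every line when the source uses "\r\n" (CR+LF) line endings.
public class LineEndings {
    static String[] splitNaive(String s) { return s.split("\n"); }
    static String[] splitCrLf(String s)  { return s.split("\r\n"); }
}
```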

I still have the following problem:

  • The subscription I used in my previous post, M3:MHPICL:U, seems to fire too early in some of my tests, and that is a blocking problem: when my flow runs, the SQL to get the picking list lines only returns the first line (the only line that exists in the database at that point in time), while the other lines haven’t yet been created, so the flow misses them. I must find a solution to this problem. I haven’t been able to reproduce it consistently.


From my previous post, I had the following picking list:

WHSL   ITNO       ITDS     ALQT
T0101  TLSITEM01  Item 01  11
T0102  TLSITEM02  Item 02  13
T0301  TLSITEM03  Item 03  17
T0302  TLSITEM04  Item 04  19

When I re-run the scenario, here are the resulting timeline cards in my Google Glass where black pixels are see-through pixels:

And here is a video capture of the result (I used Android screencast which captures at a slow frame rate):

And here is a picture of what it would look like to the user in Glass with see-through pixels:

Future work

Next, I will implement the following:

  • Programmatically get the OAuth 2.0 token instead of copy/pasting it manually in the WebRun activity nodes.
  • Show the item image from Infor Document Archive.
  • Show walking directions on a warehouse plan.
  • Tap to confirm the picking.
  • Take a picture of the box at packing.


That’s it! Check out my previous posts on the project. Like this post. Tell me what you think in the comments section below. Share with your colleagues, customers, partners. Click the Follow button to subscribe to this blog. Be an author and publish your own ideas. And enjoy.

M3 ideas @ Inforum 2014

I submitted the following session in the Call for Papers of Inforum 2014 in New Orleans in September:

M3 ideas: social media, open source, and Google Glass

This session will talk about:

  • Social media for M3, to help create communities, circulate information, and get the job done more efficiently; it needs authors, readers, and engagement.
  • Open source for M3, a collaborative attempt to make M3 greater to the benefit of everyone beyond the confines of a workplace.
  • Google Glass for M3, a proof-of-concept to showcase the integration capabilities of M3 with wearable computing and future experiments in Augmented Reality for M3. The products involved are: M3, Event Analytics, Infor Process Automation, Infor Document Archive, and Infor CloudSuite.

Includes illustrations with M3 customers.

Would you like to see this session at Inforum 2014? Please vote here below and let me know what you think.


Hello Google Glass from Infor Process Automation

As a continuation of my Google Glass project, my next step is to write Hello World on Google Glass using Infor Process Automation (IPA).

As a reminder of my project, I’m writing an application to show M3 picking lists on Google Glass, using Event Analytics and Infor Process Automation to produce rich, interactive picking lists in Glass: a list of items to pick, quantities, stock locations, item images from Infor Document Archive, and walking directions on a warehouse plan.


From a software architecture point of view, we have the following tiers:

  • I’m wearing Google Glass, which is connected to the Internet.
  • Google’s servers periodically communicate with my Glass.
  • My IPA server is located in a Local Area Network (LAN) at the office, behind a firewall.

Timeline card

To keep the proof-of-concept simple, I use the Google Mirror API; that’s simpler than using the Glass Development Kit (GDK). A sample HTTP request to insert a static card in my Glass timeline, with the proper OAuth 2.0 token, looks like this:

POST https://www.googleapis.com/mirror/v1/timeline HTTP/1.1
Host: www.googleapis.com
Authorization: Bearer ya29.HQABS3kFLP2b-BsOWMFyGUpUv4JPAhKeEnDbLcjDUAHREBK6mYYVAadIa68S6A
Content-Type: application/json
Content-Length: 29

{ "text": "Hello from IPA" }

Note: For the purposes of this proof-of-concept I bootstrapped the OAuth 2.0 token manually by copy/pasting it from another computer where I already had the Android Development Tools (ADT) and the Glass Java Quick Start Project.
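The same request can be issued from Java, for example from a Grid application. This is a minimal sketch using `HttpURLConnection`; the access token is a placeholder, and the naive quote escaping is only meant to cover demo text like this:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MirrorTimeline {

    // Builds the JSON body for a simple text-only timeline card.
    public static String buildCardJson(String text) {
        // Naive escaping of backslashes and quotes; enough for demo strings.
        String escaped = text.replace("\\", "\\\\").replace("\"", "\\\"");
        return "{ \"text\": \"" + escaped + "\" }";
    }

    // POSTs a text card to the Mirror API timeline with the given token.
    public static void insertCard(String accessToken, String text) throws IOException {
        byte[] body = buildCardJson(text).getBytes(StandardCharsets.UTF_8);
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://www.googleapis.com/mirror/v1/timeline").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setFixedLengthStreamingMode(body.length);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        // A 2xx response code means the card was inserted into the timeline.
        System.out.println("HTTP " + conn.getResponseCode());
    }

    public static void main(String[] args) {
        System.out.println(buildCardJson("Hello from IPA"));
    }
}
```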

The resulting timeline card looks like this where the black pixels are see-through pixels in Glass:

Process Automation

IPA can make HTTP Requests using the Web Run activity node. With WebRun, we can specify the following parts of an HTTP Request: scheme (HTTP/HTTPS), host, port number, method (GET/POST), path, query, Basic Authentication, HTTP Request Body, Content-Type, and HTTP Request Headers.

Here is my sample WebRun for Glass:

Problems and rescue

At first, I ran into the following problems:

  1. I didn’t know where to set the scheme (HTTP/HTTPS); the Web Root field seemed to be used only to set the host and port number.
  2. The User id and Password fields seem to support Basic Authentication only, not OAuth 2.0.
  3. I need to set the OAuth 2.0 token as a variable because it expires and must be renewed every hour, but the WebRun activity node doesn’t support variable substitution in the Authorization header (CTRL+SPACE and <!var1> are not supported).
  4. I didn’t see the Header string field at first because when the input field is empty it has no border which makes it hard to spot.
  5. The Header string field is only one line tall, which seems to indicate that it only accepts one HTTP Request Header.
  6. The Content-Type field didn’t propose application/json in the drop down list, so I had to hack into the LPD file with Notepad and write it and XML-encode it manually.
  7. The WebRun failed to execute and returned an HTTP 400 error without additional information.
  8. We cannot set a proxy like Fiddler in the WebRun configuration to intercept the HTTP Request (header and body) and HTTP Response (header and body) which makes it hard to troubleshoot.
  9. The WorkUnit log only shows the HTTP Response body which is only a quarter useful.

I sent an email describing the problems to James Jeyachandran (j…@infor.com), the Product Manager for IPA at Infor Product Development, whom I know from my previous work on Lawson ProcessFlow Integrator (PFI) and IPA. It has always been a pleasure to work with James for his availability and responsiveness. Once more he impressed me: he called me back within four minutes. He was interested in my Glass project, he addressed my questions, and to assist me he graciously made available Samar Upadhyay (s…@infor.com), one of the software engineers of IPA. After troubleshooting together, here are the respective answers to my problems:

  1. The scheme can be set in the Web Root (as shown in the screenshot above).
  2. The OAuth 2.0 token can be set in an Authorization header in the Header string (as shown in the screenshot above).
  3. Samar said he will add variable substitution to the Header string.
  4. Samar said he will make the Header string field wider.
  5. Samar said he will make the Header string field multi-line.
  6. Samar said there is a new version of IPA that adds application/json as a possible Content-type.
  7. Samar said that’s a known problem with Content-type application/json and there is a bug fix available to correct it. Meanwhile, James said I can add it as an additional Header string; for that I had to use Notepad again to add the two Header strings on two lines. Also, Samar said he will attempt to upgrade my server with the new version of IPA.
  8. Samar said he will look into adding an optional proxy configuration for host and port number like we can do in web browsers.
  9. Samar said he will look into adding an option to log the full HTTP Request and Response.

After that collaboration we had a working proof-of-concept within 20 minutes.


Here is the resulting WorkUnit log:

And here is the resulting vignette in my Glass:

Future work

The next step will be to get the OAuth 2.0 token automatically, and to display the M3 picking list onto Glass.


That was a proof-of-concept to write Hello World from Infor Process Automation onto Google Glass using the WebRun activity node to make an HTTP Request to the Mirror API.

That’s it!

I want to specially thank James Jeyachandran and Samar Upadhyay for their support and responsiveness, and for their enthusiasm with this project. They will be at Inforum 2014.

Like. Comment. Share. Subscribe. Enjoy.