M3 picking lists in Google Glass @ Inforum

I am very pleased to announce that after months of working voluntarily here and there in the evenings after work hours, I finally completed and presented both my demos of M3 picking lists in Google Glass and Augmented Reality at Inforum. They were a success. I showed the demos flawlessly to about 100 people per day for six days, with a very positive reception. The goal was to show proofs of concept of wearable computers and augmented reality applied to Infor M3. My feet hurt.


This is my second Glass app after the one for Khan Academy.

This Glass app has the following features:

  • It displays a picking list from Infor M3 as soon as it’s created in M3.
  • For each pick list line it shows the quantity (ALQT), item number (ITNO), item description (ITDS), and stock location (WHSL) as aisle/rack/level.
  • It displays the pick list lines as a bundle for easy grouping and finding.
  • It shows walking directions in the warehouse.
  • It has a custom menu action for the picker to mark an item as picked and to change the status of that pick list line in M3.
  • It uses the built-in text-to-speech capability of Glass to illustrate hands-free picking.
  • It’s bi-directional: from M3 to Google’s servers to push the picking list to Glass, and from Google’s servers to M3 when the picker confirms a line.
  • The images come from Infor Document Management (formerly Document Archive).
  • I developed the app in Java as an Infor Grid application.
  • I created a custom subscriber and added a subscription to Event Analytics to M3:MHPICL:U.
  • It uses the Google Mirror API for simplicity to illustrate the proof-of-concept.
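To make the card content concrete, here is a minimal sketch of how one pick list line could be formatted for a timeline card. The field names (ALQT, ITNO, ITDS, WHSL) come from M3, but the class, the sample values, and the two-characters-per-segment split of WHSL into aisle/rack/level are my own assumptions for illustration:

```java
// Minimal sketch: format one M3 pick list line for a Glass timeline card.
// The two-characters-per-segment split of WHSL is a hypothetical layout,
// not necessarily the actual M3 location encoding.
public class PickListLine {
    final String itno;  // item number (ITNO)
    final String itds;  // item description (ITDS)
    final String whsl;  // stock location (WHSL), e.g. "010203"
    final int alqt;     // allocated quantity (ALQT)

    PickListLine(String itno, String itds, String whsl, int alqt) {
        this.itno = itno; this.itds = itds; this.whsl = whsl; this.alqt = alqt;
    }

    // Split WHSL into aisle/rack/level, assuming two characters per segment.
    String location() {
        return whsl.substring(0, 2) + "/" + whsl.substring(2, 4) + "/" + whsl.substring(4, 6);
    }

    // Text for the timeline card; the same string can serve as speakable
    // text for the built-in text-to-speech of Glass.
    String toCardText() {
        return "Pick " + alqt + " x " + itno + " " + itds + " at " + location();
    }

    public static void main(String[] args) {
        PickListLine line = new PickListLine("AC-300", "Road bike", "010203", 5);
        System.out.println(line.toCardText());
    }
}
```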

I am making the resulting source code free and open source on my GitHub repository, and I have been writing up the details on this blog. I will soon post the remaining details.


I want to especially thank Peter A Johansson of Infor GDE Demo Services for always believing in my idea, his manager Robert MacCrate for providing the servers on Infor CloudSuite, Philip Cancino formerly of Infor for helping with the functional understanding of picking lists in M3, Marie-Pascale Authié of Infor Pre-Sales for helping me set up and create picking lists in M3 and for also doing the demo at Inforum, Zack Makris of Infor Labs for providing technical support, Jonathan Amiran of Intentia Israel for helping me write the Grid application, and some people of Infor Product Development, who chose to remain anonymous, for helping me write a Java application for Event Hub and Document Archive. I also want to especially thank all the participants of Inforum who saw the demo and provided feedback, and all of you readers for supporting me. I probably missed some important contributors: thank you too. And thanks to Google X (especially Sergey Brin and Thad Starner) for believing in wearable computers and for accelerating the eyewear market.


Below are the screenshots from androidcast. They show the bundle cover, the three pick list lines with the items to pick, the Confirm custom menu action, the Read aloud action, and the walking directions in the warehouse:



Below are three vignettes of what the result would look like to a picker:




Here are some photos at Inforum:

In the Manufacturing area:



In front of the SN sign:


Holding my Augmented Reality demo:

Playing around with picking lists in virtual reality (Google Cardboard, Photo Spheres, and SketchFab):

Playing around with picking lists in Android Wear (Moto 360):


That’s it! If you liked this, please give it a thumbs up, leave a comment, subscribe to this blog, share it around, and come help me write the next blog post; I need you. Thank you!

How to run a Google Glass app in Infor Grid

Today I will detail the steps to run a Google Glass app in Infor Grid. This is part of my project to have M3 Picking Lists in Google Glass.

For that, I will develop a very simple Glassware using the Google Mirror API Java Quick Start Project, and I will use the technique I learned in Hacking Infor Grid application development. The integration will be bi-directional: the Grid app will communicate to the Glass API on Google’s servers to insert cards in the timeline, and conversely when the user replies to a timeline card Google’s servers will send notifications to the Grid app provided it is located at a routable address with a valid SSL certificate.

This is a great demo of the integration capabilities of the Infor Grid. I worked a little bit here and there on evenings and weekends over several months, and I distilled the resulting steps here and in a 15-minute video so you can play along. You will need a pair of Google Glass.

STEP 1: Setup Eclipse with Maven

I will start with the instructions for the Google Mirror API Java Quick Start Project:

For the Prerequisites I need Java 1.6 and Apache Maven for the build process. I will download Eclipse IDE for Java Developers, which has the Maven plugin integrated:

STEP 2: Setup the Glass Mirror API Java Quick Start Project

Then, I will download the Glass Mirror API Java Quick Start Project from the GitHub repository:

Then, I will import it in Eclipse as an Existing Maven Project with the pom.xml:

I will import the Infor Grid library grid-core.jar:

Then, I will replace some of the source code to adapt it to the Infor Grid, using Eclipse File Search and Replace:

I will replace the code for the Logger in all files (from/to):

import java.util.logging.Logger;
import com.lawson.grid.util.logging.GridLogger;
Logger LOG = Logger.getLogger
GridLogger LOG = GridLogger.getLogger

Then, I will add the context path to the URLs of all files (from/to):

url.setRawPath(req.getContextPath() +
$1httpRequest.getContextPath() + "/

For the subscription to notifications I will replace the callback URL in NewUserBootstrapper.java by a routable FQDN or IP address with a valid SSL certificate to handle the notification:

Subscription subscription = MirrorClient.insertSubscription(credential, WebUtil.buildUrl(req, "/notify").replace("m3app-2013.company.net", ""), userId, "timeline");

Then, I will replace the code in NotifyServlet.java that processes the notification from the HTTP request body because apparently notificationReader.ready() always returns false in the Infor Grid and that throws IllegalArgumentException: no JSON input found. Here is the new code:

int lines = 0;
String line;
while ((line = notificationReader.readLine()) != null) {
	notificationString += line;
	lines++;
}
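For reference, here is a self-contained version of that reading pattern, using a StringBuilder instead of repeated string concatenation; the class and method names are mine for illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Self-contained sketch of the notification-reading loop: drain the request
// body line by line instead of relying on notificationReader.ready().
public class NotificationBodyReader {

    static String readBody(BufferedReader notificationReader) throws IOException {
        StringBuilder notificationString = new StringBuilder();
        String line;
        while ((line = notificationReader.readLine()) != null) {
            notificationString.append(line);
        }
        return notificationString.toString();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a request body with a BufferedReader over a string.
        BufferedReader reader = new BufferedReader(
                new StringReader("{\"collection\":\n\"timeline\"}"));
        System.out.println(readBody(reader));  // {"collection":"timeline"}
    }
}
```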

Then, I will setup the Project in the Google Developers Console with the Google Mirror API, the client ID and client secret credentials for OAuth 2.0, and the Consent screen:

Then, I will paste the client ID and secret in the oauth.properties of the project:

Then, I will create and run a new Maven Build Configuration using goal war:war:

That will create a WAR file that I will use to deploy as a web application in my Grid application:

STEP 3: Setup the Infor Grid application

Then, create and install an Infor Grid application GoogleGlass based on the HelloWorld app:

STEP 4: Test

Then, launch the app:

Authenticate to the Google account associated with Glass, and click Accept to grant app permissions:

Use the app, insert cards in the timeline:

You can also tap Glass to reply to a timeline card:

And the Grid app will receive the notification with a JSON string:

Resulting video

Here is the video, with hours of work distilled into 15 minutes (I recommend watching in full screen, in HD, and at 2x speed):

STEP 5: Summary

That was how to run a Google Glass app in Infor Grid. The main steps are:

  1. Setup Eclipse with Maven
  2. Setup the Glass Mirror API Quick Start Java project
  3. Setup the Infor Grid application
  4. Test

The integration is bi-directional: the Grid app adds cards to the Glass timeline, and when the user takes action on a card Google’s servers send a JSON notification to the Grid app.

The result is great to demo the integration capabilities of the Infor Grid, and it will be useful for my project to show M3 picking lists in Glass.

Future work

In future work, I will use the bi-directional communication to let pickers in a warehouse tap Glass to confirm picking lists: Google’s servers will send the JSON notification to the Grid app, and the Grid app will call the M3 API MHS850MI transactions AddCOPick and AddCfmPickList to confirm picking.
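As a hedged sketch of what that confirmation could look like from the Grid app, here is how a REST URL for an M3 API transaction could be built. The /m3api-rest/execute/{program}/{transaction} pattern and the parameter names DLIX and PLSX are assumptions for illustration; check the MHS850MI documentation for the actual transaction parameters:

```java
// Hedged sketch: build a URL for an M3 API transaction call.
// The /m3api-rest/execute/{program}/{transaction} pattern and the
// parameter names below are assumptions for illustration only.
public class M3ApiUrlBuilder {

    static String buildUrl(String host, String program, String transaction, String... params) {
        StringBuilder url = new StringBuilder(
                "https://" + host + "/m3api-rest/execute/" + program + "/" + transaction);
        // params is a flat list of name/value pairs.
        for (int i = 0; i < params.length; i += 2) {
            url.append(i == 0 ? "?" : "&").append(params[i]).append("=").append(params[i + 1]);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // Hypothetical pick list keys; real MHS850MI parameters may differ.
        System.out.println(buildUrl("m3server.example.com", "MHS850MI", "AddCOPick",
                "DLIX", "1234567", "PLSX", "1"));
    }
}
```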

That’s it. If you liked this, please give it a thumbs up, leave your comments, share it around, and contribute back by writing your own ideas. Thank you.

I believe Google is silently killing Glass

I am starting to believe that Google is silently killing Glass for lack of success, and rumors on the Interwebs are starting to confirm it, so I too am phasing out my Glass project and moving on to Augmented Reality. I call Glass the Palm Pilot of 2013.


For those of you who remember 1996 when the Palm Pilot was released, it instantly became the new must-have gadget in the geek universe; it was the future right now in a world of phone bricks… except it never took off: it came too early for its time, it was clunky, it needed to be synced manually, and it suffered from the what-can-I-do-with-it syndrome. A small faction of resistant users kept it afloat for many years, and then it was shuffled and bounced around from bad investment to worse at HP, to the surprise of most.

After that, phone manufacturers continued to fool us with so-called innovations for another decade.

And then came the iPhone in 2007, it blew everybody away, and it secured smartphones as obvious and unquestionable in every category.

The rumors

The Intertubes (TM) are starting to confirm the rumors that Glass is going away. I heard from a friend that a friend who works at Google said Google had asked some of their employees to return their Glass devices and had assigned them to other projects. Then, at the I/O conference this year, Google played radio silence on Glass: no BMX, no parachutes, not even a peep. Then, the video for the release of Glass in London had a whopping 7k hits when I watched it, and it is now stalling at 300k hits. Then, my Google searches for my Glass software development questions seem to return fewer and fewer hits. Finally, another friend of other Googler friends said the topic of Glass shutting down came up during a conversation. That is definitely solid proof, don’t you think?

My experience

I can speak from my own experience with Glass. I originally bought Glass because I was excited to finally try software development for wearable computers and Augmented Reality. I grew up in the 90s reading about the pioneers of the MIT Media Lab, Steve Mann and Thad Starner of the Wearable Computing Group and Hiroshi Ishii of the Tangible Media Group, and about Professor Steven Feiner of Columbia University with his research on Augmented Reality. It seemed Glass was set to be the first such device ready for the mass market.

I have had Glass for 9 months, and from this gestation emerged the reality. I sadly came to admit I never wear it; it stays in the drawer. Ever since October I have continued to feel like Robocop with a thing on my face; it changes my behavior as if everybody were looking suspiciously at me, and that makes me uncomfortable. I get many positive reactions from people who are curious about this novelty, but mostly I get too many negative reactions from Glass haters misconceiving it as an always-on surveillance camera with face recognition. False. I sympathize with them because, like them, I value the protection of our freedom and privacy, and then I cannot help but satirically warn them that I can turn on the X-ray vision. Then, in May at the Augmented World Expo 2014, I understood clearly that everybody had tried Glass for Augmented Reality and everybody had given up; in hindsight, they all admitted Glass had never been intended for Augmented Reality. Reality check. That changed my view of Glass as a wearable device. I kept Glass for software development. And then the technical problems: the battery exhausts too quickly, the device heats up and slows down to a freeze, and it is limited in terms of applications. And I am not that good of an Android developer to squeeze the juice out of it. Now I am using it just for pictures and videos; it is excellent for point-of-view shots. And so it has become the most expensive camera I possess.

I am still glad I had Glass: I killed a fever of want, I boosted my software development skills in the process, I anchored my confidence that I can still implement new technologies at 37, and I confirmed that wearable devices and Augmented Reality are here.

As for my smartphone, I have this weird dream-like feeling of needing to wrap the phone around me like a cloth, dive inside the screen, and swim in a giant world of digital information. That is my need for the holodeck, and Glass does not come within an inch of fulfilling it.

What’s next

It is a fact that wearable computers are here to stay. It just will not be with Glass. Glass was a milestone that will remain in the archives as one of the first general wearable devices. Glass also helped spawn the eyewear industry, and there are valid niche markets where Glass-like devices fit perfectly, for example this safety device for motorcyclists from FUSAR Technologies that displays a rear camera inside the helmet.

I feel sad for Thad Starner and Sergey Brin, who really believed in Glass; they have other awesomeness up their sleeves. Steve Mann does not seem to be affected, as he is doing great work sensitizing us to sousveillance and working for META. If I project the analogy of the Palm-versus-iPhone history onto Glass, we will see the natural heir of Glass, the obvious leader of wearable computers, 11 years from now, in 2025. Yikes! I say it will be a holodeck, light-guided into the eye, mixed with the Minority Report interface of John Underkoffler (he was one of Hiroshi’s students) and Tony Stark’s helmet from Iron Man.

Meanwhile, I think Google will push full throttle with Project Tango; after all, anything that Johnny Chung Lee touches becomes a hit. Tango is a dream for Augmented Reality enthusiasts. Also, I keep an eye on castAR and Projective Augmented Reality, and I am eagerly awaiting their first device; Jeri Ellsworth is a self-taught pioneer with many followers who does not come from academia.

As for me, I will finish my Glass proof-of-concept to honor the commitments I made with my partners, and after that I will learn to implement Augmented Reality with the Metaio SDK, Unity3D, Qualcomm Vuforia, and OpenCV.

Google Glass for Khan Academy

At the Pre-I/O Glass Hackathon in San Francisco last weekend we built an app for Google Glass for Khan Academy. It may well be the first of its kind. This first alpha version, aimed at teachers, displays a notification on Glass when students are struggling during an exercise. We published it on GitHub at GlassKhan.

About Khan Academy

Khan Academy flipped education inside out. When I was in school, we would passively sit in class and listen to teachers give lectures for an hour, and then they would give us homework to do on our own at home. Khan Academy flipped that system inside out. They tell students to watch the lectures on the Internet at their own pace, where they can pause, accelerate, or deepen a section of interest. And they tell students to do the homework in class, in groups, with the teacher. That creates more engagement, as students are familiar with the technologies and can help each other in class, with the teacher present to answer questions. And as students do the exercises on Khan Academy’s website, teachers get hundreds of data points about students to tell who is behind, who is ahead, on what topics, and more. So we set out to build a Glass app for Khan Academy.

The hackathon

The hackathon was a 24-hour coding competition with education as the main theme, to “see leaps forward in Education using technologies like Google Glass”. I teamed up with Ross Gruetzemacher and Ryan McCormick. We brainstormed several ideas, including an app for Khan Academy.

The first premise was that we would build an app with the teacher in mind since it’s currently easier to justify the cost of one Glass per teacher rather than one Glass per student. In the future the cost of Glass will probably be lower but that future hasn’t happened yet.

The second premise was that such an app mustn’t already exist. We quickly confirmed that on the Glassware page. We also did a Google Search for the terms “Google Glass” and “Khan Academy” and found two relevant hits. The first hit, from Forbes, Google Ventures Launches Glass Collective With Andreessen, Kleiner Perkins, To Fund Google Glass Startups, said “Doerr is excited about Glass applications for education as well as health care. He cited companies like Udacity and Coursera and Khan Academy that are working on education but sees Glass as adding a whole new layer to education.” The second hit, from Kurzweil Accelerating Intelligence, Will anyone create a killer app for Google Glass?, said “Khan Academy software engineer Stephanie Chang, who was at the Foundry events, has ideas such as creating a Glass app for teachers, who could be notified as they give a lecture which students are struggling.” Both articles validated our idea: there’s funding available, and there’s demand for an app.


Teachers have a dashboard on Khan Academy’s website that shows the roster of students, classes, progress reports, and hundreds of data points.

Here is a screenshot of the coach dashboard:

And Khan Academy has an API that gives access to data about what student is doing what exercise when and at what level. The data is private to the coach and to students that have accepted a teacher as a coach. And the data is protected with OAuth.

Here is a screenshot of the API Explorer:

As soon as a student is struggling with an exercise, the API knows about it. The criterion for being considered struggling on an exercise is based on metrics like time spent on the exercise and the number of hints used. We set up dummy teacher accounts and dummy student accounts, and we purposely failed plenty of elementary math exercises to trigger the flag struggling = true. For that we query the API /api/v1/user/exercises/<exercise_name>. When that event happens, we send a timeline card to the teacher’s Glass with the nickname of the student, the level “struggling”, the name of the exercise, and the time “just now”. That’s four pieces of data.

It’s a useful micro-interaction for the teacher, and it follows the principle of the now as explained by Google Developer Expert Allen Firstenberg.

From a technical point of view, we used the Google Mirror API and the Java Quick Start. We built a decorator and an adapter in Java for the Khan Academy API. And we built a loop: for each student, for each exercise, test if struggling == true.
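That loop can be sketched like this; the class and field names are mine, and in the real app the data comes from the Khan Academy API rather than from in-memory objects:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the notification loop: for each student, for each exercise,
// test if struggling == true. Names are illustrative; the real data comes
// from /api/v1/user/exercises/<exercise_name> on the Khan Academy API.
public class StrugglingCheck {

    static class ExerciseState {
        final String student;
        final String exercise;
        final boolean struggling;
        ExerciseState(String student, String exercise, boolean struggling) {
            this.student = student; this.exercise = exercise; this.struggling = struggling;
        }
    }

    // Return one timeline card text per struggling student/exercise pair.
    static List<String> cardsToSend(List<ExerciseState> states) {
        List<String> cards = new ArrayList<>();
        for (ExerciseState state : states) {
            if (state.struggling) {
                cards.add(state.student + " is struggling on " + state.exercise + " just now");
            }
        }
        return cards;
    }

    public static void main(String[] args) {
        List<ExerciseState> states = new ArrayList<>();
        states.add(new ExerciseState("alice", "addition_1", false));
        states.add(new ExerciseState("bob", "addition_1", true));
        for (String card : cardsToSend(states)) {
            System.out.println(card);  // bob is struggling on addition_1 just now
        }
    }
}
```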


Here are two screenshots of the result, we randomized the pairs of colors for the nickname and exercise name so teachers get distinct visual cues:



This app will help students who are shy and don’t want to raise their hand and admit they don’t know something while their friends are watching. With the notification, the teacher can go help the student without the student having to feel embarrassed to ask for help.

Also, this will help teachers not be distracted by the iPad or the computer.

And teachers that have an inclination for programming can take the source code and advance it for their own creative ideas.


This first alpha version of the app contains a shameful hard-coded OAuth flow to Khan Academy that we need to completely re-code. We also lack a notification system; we simply implemented a one-iteration loop that pulls from Khan Academy’s website. And there are rough edges to smooth.


Future versions of the app could show other types of events such as levels of success rather than levels of struggle.

We’d like to see this app survive beyond the hackathon. If you’re a developer and would like to contribute, please contact us. And if you work at Khan Academy, please contact us: we need new APIs from you (coach data, notifications, and the servers returned HTTP 500 “Internal Server Error […] the server is overloaded”).


That’s it! Please like, comment, and spread the goodness to your teachers and students.

Glass project hosted by Infor CloudSuite

I’m pleased to announce my Google Glass project is being hosted by Infor CloudSuite.

Project overview

I’m developing an application for Google Glass to have rich interactive picking lists from Infor M3 with:

  • list of items to pick with quantities and stock locations
  • a picture of the item from Document Archive so the user can get a visual cue of what to pick
  • floor plan of the warehouse with walking directions so the user can optimize the picking time
  • tap to confirm picking


It’s a proof of concept of wearable computing for M3 and a base for future experiments in Augmented Reality for M3. It also provides great visibility to showcase the integration capabilities of M3, and it’s a way to strengthen the collaboration between all the different actors (management, product development, consultants, colleagues, customers, partners).

I will make the resulting source code free software and open source on this blog and on my GitHub repository. I’m passionate about AR and need to uplift my skills, so I’m working independently on my own during evenings and weekends. My goal is to complete the first set of features before Google I/O 2014, two weeks from now. After that, my next goal will be to complete the second set of features for Inforum 2014 in New Orleans on September 15-18 this year, where I will do a demo with Peter.


Peter A Johansson is the manager of the Global Demo Environment (GDE) Demo Services team at Infor. Peter is a true visionary with the necessary pragmatism and focus to make ideas a reality. I was looking around for an M3 server with Infor Smart Office and Infor Process Automation to do the software development, so I pitched the idea to Peter in April, and at once he was attracted. He saw potential for a great demo at Inforum and suggested the idea to his manager. Peter and I have worked together in the past, and he knows my drive, so he said: “We all know that if you give Thibaud what he needs then cool-stuff happens :-)” And they approved and made available for this project a full stack of M3 13.2 demo image servers on Infor CloudSuite, deployed as virtual machines on Amazon Web Services (AWS).

Servers on Infor CloudSuite

The M3 stack for this project consists of: LifeCycle Manager (LCM), M3 Business Engine (M3 BE), Grid, Enterprise Search (IES), M3 Enterprise Collaborator (MEC), M3 BE BODs, Smart Office (ISO), H5 Client, Ming.le, ION Desk, Graphical Lot Tracker (GLT), Customer Lifecycle Management (CLM), Counter Sales for Distribution, Document Archive (DAF), MetaData Publisher (MDP), StreamServe for MOM, M3 Report Manager (MRM), Business Performance Warehouse (BPW), M3 Analytics, Event Hub, Event Analytics, Process Automation (IPA), Product Configuration Management (PCM), and more. For this project I only need M3 Business Engine, Event Hub, Event Analytics, Process Automation, and Document Archive.

The servers cost money per uptime, and I can only work on this three times a week, so I need to use the uptime carefully, on a schedule we set up together based on my preferences.

Here is a screenshot of the Infor CloudSuite overview page:

Here is a screenshot of the schedule I chose:

Here is a screenshot of the deployment selection (AWS in my case):

Demo @ Inforum

Peter and I will do a demo at the M3 Labs booth at Inforum in September. Come check it out. And if there is a feature you’d like to see at the demo let me know in the comments below.

After the demo at Inforum in September, I’ll re-assess the future of the project.


That’s it for the announcement! Special thanks to Peter A Johansson, his manager, the GDE Demo Services team, and Infor CloudSuite for believing in and sponsoring this project.

Augmented World Expo 2014

Last week I attended the Augmented World Expo (AWE) 2014 [1] in Santa Clara, one of the world conferences on Augmented Reality, Virtual Reality, Augmented Virtuality [2], and smart glasses [3]. There, I saw Steve Feiner, pioneer of Augmented Reality in the 1990s [4] [5], Professor of computer science and director of the Computer Graphics and User Interfaces Lab at Columbia University, and adviser for Space Glasses at Meta [6]. I also saw Mark Billinghurst, director of the HITLab in New Zealand [7], who created the AR Toolkit, which I later used (the JavaScript port) for my prototype M3 + Augmented Reality. I didn’t see Steve Mann, also adviser for Meta, and one of the pioneers of the Wearable Computing group in the Media Lab in the 1980s [8]; Thad Starner was in that group and later went on to design Google Glass for Sergey Brin [9]. I got inspiration from their work when I was younger, and I was excited to see them.

I went to the conference to learn more about the future. I’m currently working on a personal project to develop an app to display picking lists in Google Glass with data from Infor M3.

Here are some pictures of me at the conference, dreaming my vision of future picking lists 😉
