Saturday, April 12, 2014

The Iron Girl Project - HUD, Part 1

When I was looking into the Iron Girl project, the most intriguing - and scary - part by far was the HUD. The HUD on the Mark II is magnificent, and it only gets better with each suit in the movie series. If we skip the holographic and multi-layer effects (again, tech not financially feasible), we’re left with the concept alone:

A video feed from the outside world, as well as:

- Numerical and sensor data about the suit
- Visual references corresponding to the video feed (heading, bearing, horizon line)
So originally, I wanted to use a USB webcam feeding OpenCV (a computer vision library) and have the Pi do all the processing. It turns out both of the webcams I had lying around had issues with the Pi - one needed additional power, and the other gave me about 4 FPS tops (without any image processing). So I ordered the standalone Raspberry Pi camera board, hoping that would have a better framerate. It definitely did, but the catch there is that it injects video directly into the framebuffer - meaning that the operating system/Python/X has no way of knowing what the heck the camera is seeing. The camera feed just fills the entire screen right over whatever it was you were doing. Rude, but useful in some circumstances.
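If you want to sanity-check a webcam before giving up on it, a quick loop like this measures the raw capture rate (device index 0 is an assumption - swap in whichever one your camera shows up as):

    import time
    import cv2

    # Grab frames for five seconds and report the average framerate.
    cap = cv2.VideoCapture(0)   # device index 0 - change for your camera
    frames = 0
    start = time.time()
    while time.time() - start < 5.0:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
    cap.release()
    print("%.1f FPS" % (frames / (time.time() - start)))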

Using raspivid's command-line arguments, you can specify the x and y coordinates (and the size) of the preview window. In doing this, you can kind of fake an X window at specific coordinates, or use a non-resizable GUI to get the same effect. However, the mouse pointer will always disappear behind the preview, and for HUD purposes it didn't make sense to shrink the video into the center of the screen just to expose edge-HUD elements.
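For reference, the positioning looks something like this. I'm launching it from Python since that's where the rest of the HUD will eventually live, but it's the same as typing the command at a terminal, and the window geometry here is just an example:

    import subprocess

    # -t 0 keeps the preview running indefinitely; -p takes x,y,width,height.
    subprocess.Popen(["raspivid", "-t", "0", "-p", "80,60,640,360"])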

There is, however, a raspivid command-line option labeled -op, for "opacity". This gives you a 0-255 transparency range on the video itself (0 fully transparent, 255 fully opaque), so you can actually have highly visible elements BEHIND the video feed and they'll show up okay. The color mixing is a bit off, and it's not the prettiest solution, but short of delving deep into OpenGL and MMAL just for the sake of this one project, it was good enough for me.
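Same idea as before, but fullscreen and blended - the opacity value is whatever looks right on your screen:

    import subprocess

    # -f makes the preview fullscreen; -op sets how opaque it is (0-255).
    subprocess.Popen(["raspivid", "-t", "0", "-f", "-op", "110"])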

Booting into Wheezy, I pulled up a sample HUD image in the picture viewer, made it full-screen, and zoomed in until the edges touched the screen. I then started raspivid and played with the command-line options for a while. At an opacity level of about 110, I managed to get something reasonable:


So that worked well enough for my proof of concept. I ended up adjusting the settings a bit, since I would be working with a pair of video glasses on the composite output of the Pi rather than the HDMI port with a nice monitor. The glasses only have a resolution of 720x480. The specific ones I used were based off the Olympus Eye-Trek FMD-150W, but were actually removed from an old Enhanced Vision JORDY unit that had cracked on the side and lost both earpiece hinges:


I switched the output to composite on the RPi and had to adjust the brightness and contrast settings to make it look anywhere near decent, but I managed to get something workable with my test HUD image and the video feed overlay:


So far, so good.
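Those brightness and contrast adjustments, for the record, are just more raspivid flags. The values below are stand-ins for the kind of tweak I mean - yours will depend on your screen:

    import subprocess

    # -br is brightness (0-100, default 50); -co is contrast (-100 to 100).
    subprocess.Popen(["raspivid", "-t", "0", "-f", "-op", "110",
                      "-br", "55", "-co", "30"])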

For anybody looking for cheapish HMDs, the JORDY might not be an entirely bad option. Just make sure you look for a version 2 if you need widescreen. The version 1 has a black cord linking the controller unit and the glasses, and has a notch between the eyepieces (right above the nosepiece). The version 2 has a clear/braided cord and no notch above the nosepiece. The only real difference is the resolution of the internal screens - the version 1 is based off the FMD-150 and has a resolution of 640x480 (as best I can tell), while the V2 is the FMD-150W with the 720x480 resolution. I managed to find a V1 for about 60 bucks on eBay at the time of this writing. Adafruit.com also has 320x240 HMDs for $109, and you won't have to do any salvage.

For the rest of the overlay, I decided to use Pygame, pulling the sensor data off the I2C bus, but that's for another post.
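As a teaser, the skeleton will look something like this. The I2C address and register below are placeholders - the actual sensors get their own write-up:

    import pygame
    import smbus

    ADDR = 0x68   # placeholder I2C address - substitute your actual sensor
    REG = 0x00    # placeholder register

    bus = smbus.SMBus(1)   # bus 1 on rev 2 boards; bus 0 on the original Pi
    pygame.init()
    screen = pygame.display.set_mode((720, 480))
    font = pygame.font.Font(None, 36)
    clock = pygame.time.Clock()

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        # Read one byte off the bus and draw it as a HUD readout.
        value = bus.read_byte_data(ADDR, REG)
        screen.fill((0, 0, 0))
        label = font.render("SENSOR: %d" % value, True, (0, 255, 0))
        screen.blit(label, (20, 20))
        pygame.display.flip()
        clock.tick(10)   # ten refreshes a second is plenty for a readout
    pygame.quit()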

Cheers for now,

~Lexikitty
