Monday, May 26, 2014

The Iron Girl Project - HUD, Part 2

So here are a few updates on the HUD and AHRS (Attitude and Heading Reference System).

I recently got an Adafruit 10-DOF IMU to handle all the head tracking and movement data for the "horizon lock" effect seen in the IM Mark II display. Combining this with the sample code and a few modifications, I managed to get something along the lines of this:


Which doesn't look too shabby at all, at least for a proof of concept. After fiddling about with calibration settings and how to display data on-screen in Processing (I'm still somewhat new to it), I managed to get a slightly more informative display up and running. I also added a "horizon lock", since the real Mk II showed that on its maiden flight.
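In case anyone wants to poke at the horizon lock themselves, the gist is just to counter-rotate the drawing by whatever roll angle the IMU reports. Below is a stripped-down Processing sketch of that idea - it assumes the board is printing roll in degrees over serial, one number per line, which is not exactly what the Adafruit AHRS example spits out, so treat it as a skeleton rather than my actual sketch.

// Horizon-lock skeleton: counter-rotate the horizon line by the roll
// angle reported by the IMU so it stays level as the head tilts.
// Assumes one roll value in degrees per line over serial (an assumption,
// not what the stock Adafruit example sends).
import processing.serial.*;

Serial imu;
float roll = 0;  // latest roll from the IMU, in degrees

void setup() {
  size(640, 480);
  // Port index is machine-specific; print Serial.list() to find yours.
  imu = new Serial(this, Serial.list()[0], 115200);
  imu.bufferUntil('\n');
}

void serialEvent(Serial s) {
  String line = s.readStringUntil('\n');
  if (line == null) return;
  float v = float(trim(line));   // expects one number per line, e.g. "12.5"
  if (!Float.isNaN(v)) roll = v;
}

void draw() {
  background(0);
  pushMatrix();
  translate(width / 2, height / 2);
  rotate(radians(-roll));              // counter-rotate so the horizon stays level
  stroke(0, 200, 255);
  line(-width / 2, 0, width / 2, 0);   // the "locked" horizon line
  popMatrix();
  fill(0, 200, 255);
  text("roll: " + nf(roll, 1, 1), 10, 20);  // raw readout for sanity checking
}

Once there's more than a single line on screen, that same rotate() call is what keeps the rest of the HUD geometry level.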



It's still a long way from the original - but then, this is the work of just one girl (who has a full-time job and a girlfriend), and I'm still aiming to get as many of the systems seen in the IM movies (particularly the initial Mark II test sequence, below) as functional as realistically possible.



It remains to be seen how much of this will carry over to the RPi - if I can at least get the OBJ loader working, I may be able to simply black out everything else and "fake" keyframing with the opacity switch; there's a rough sketch of what I mean below. For now, though, I'm off to model a proper HUD, since that last one was a crappy alpha at best.
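The "fake keyframing" idea, in skeleton form: hard-code a few (time, opacity) pairs and lerp between them every frame while everything else stays blacked out. "hud.obj", the timings, and the colours here are all placeholders, not the real model.

// "Fake keyframing": fade a loaded OBJ in and out by interpolating its
// opacity between hard-coded keyframes instead of doing real animation.
PShape hud;

// keyframe times (ms) and target opacities (0-255) - placeholder values
float[] keyTimes = {0, 1000, 3000, 4000};
float[] keyAlpha = {0, 255, 255, 0};

void setup() {
  size(640, 480, P3D);
  hud = loadShape("hud.obj");  // placeholder file name
  hud.disableStyle();          // let the sketch control fill/stroke
}

float alphaAt(float t) {
  if (t <= keyTimes[0]) return keyAlpha[0];
  for (int i = 0; i < keyTimes.length - 1; i++) {
    if (t < keyTimes[i + 1]) {
      float u = (t - keyTimes[i]) / (keyTimes[i + 1] - keyTimes[i]);
      return lerp(keyAlpha[i], keyAlpha[i + 1], u);
    }
  }
  return keyAlpha[keyAlpha.length - 1];
}

void draw() {
  background(0);                       // black out everything else
  float a = alphaAt(millis() % 4000);  // loop the 4-second cycle
  translate(width / 2, height / 2);
  scale(100);                          // arbitrary; depends on the model's units
  fill(0, 200, 255, a);
  noStroke();
  shape(hud);
}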

Cheers,
~Lexikitty

3 comments:

  1. This is great! I'm glad I stumbled on your YouTube videos and blog.

    I'm working on a similar project. I've been building a Mk VII suit and have been tinkering around with a video system for my helmet. I've currently got a hardware overlay solution (MAX7456 chip controlled via Arduino Micro) that works ok, but it's not pretty. It basically overlays white block text (or other sprites) over composite video.

    What I'd love to do is program the HUD overlay in the RasPi, input two cameras (for stereoscopic vision to help with depth perception and peripheral vision), and output into a newer set of HDMI-compliant video glasses. Unfortunately I don't think the RasPi can handle the two cameras and the data processing/HUD overlay all in near-real-time with 30fps or better. I've got a long way to go with that, starting with figuring out how I'm going to code it all up. I just recently made the decision to attempt to move away from the MAX7456 to the RasPi. I've also got to figure out a way to hide bigger cameras. :) Pinhole CMOS apertures are FAR easier to hide on a helmet than what I expect an HD wide-screen webcam would be.

    I'm definitely going to be staying tuned to your progress and very interested in swapping notes if possible.

    Keep it up!
    Jay

    1. Well hey! And thanks for stopping by!

      I did see that composite overlay chip over on Sparkfun and considered it, but having played with some older low-vision systems that used the same method, I knew any sort of future animation or CV work wouldn't hold up. So I decided not to go with it.

      The RasPi definitely won't do two cameras - it barely does one over USB. What it does handle well is the official camera board, which pipes the stream straight into the GPU over its own connector. It also streams well, so even though it seems a bit ridiculous, two RasPi units with one camera board each (you can only attach one per Pi, sadly) can netcat their streams to a third RasPi, which would then apply the offset for stereo vision and draw your Processing or Python overlay on top - something along the lines of the sketch below. Elegant? No. But it's one method. I haven't tried this at home, but I would assume you could pull off almost the same feat with two IP security cameras - I'd have to play around with the ones I use for my security system, although they're a bit big.
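      To make the offset bit more concrete, here's a back-of-the-napkin Processing sketch - it assumes the two feeds have already landed in PImages somehow (which is honestly the hard part) and just nudges them apart by a disparity value before drawing the HUD once per eye. The names and numbers are made up, so don't copy-paste it expecting instant stereo vision.

      // Stereo compositor sketch: draw two camera frames side by side with a
      // horizontal disparity offset, then the HUD once per eye.
      PImage leftEye, rightEye;
      int disparity = 12;   // px shift between eyes; tune for comfortable depth

      void setup() {
        size(1280, 480);
        // Placeholders: swap in frames from whatever your capture pipeline delivers.
        leftEye  = createImage(640, 480, RGB);
        rightEye = createImage(640, 480, RGB);
      }

      void draw() {
        background(0);
        // Left eye on the left half, nudged inward by half the disparity.
        image(leftEye, -disparity / 2, 0);
        // Right eye on the right half, nudged the other way.
        image(rightEye, width / 2 + disparity / 2, 0);
        drawHud(0);            // same HUD drawn once per eye
        drawHud(width / 2);
      }

      void drawHud(int xOffset) {
        stroke(0, 200, 255);
        noFill();
        ellipse(xOffset + width / 4, height / 2, 120, 120);  // stand-in reticle
      }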

      Thanks again for commenting, and keep me posted on your suit!

      Cheers,
      ~LK

  2. Really enjoying the blog. Stumbled on it yesterday while looking for information for a similar idea I am working on. I started building an Iron Man helmet (Mk III) from pep, and when it was done I decided that, given the limitations of vision from the eye slit, wouldn't it be cool to have a stereo vision system? I'm not technically proficient in coding or electronics, so the learning curve has been large. After reading the above message about multiplexing the Raspberry Pis, I thought: what about using an optical solution to the problem, with mirrors and prisms to split the incoming camera image in two? It seems then that a single screen and two aspheric lenses, Oculus Rift style, could work to produce the stereo effect.

    Daniel
