3D on Android (Geekcon 2011)

I recently participated in Geekcon 2011. It is similar to Garage Geeks, except that people actually build stuff during the two and a half days they spend there.
My friends Ofer Gadish and Gil Shai from Acceloweb and I worked on displaying stereoscopic 3D images on an Android device. Those were three exciting days of sleeping very little, coding a lot, soldering, drinking beer and having a lot of fun.

When we initially discussed the idea, we thought about using 3D glasses controlled over Bluetooth, but we realized that in the short time we had we would probably not manage to study the control protocol, nor to figure out how to directly control the Bluetooth transmitter of the mobile device, if that is even possible from a user application at such a low level.

Instead, we chose to control the glasses through the audio jack output of the mobile phone. We found another pair of glasses controlled via a rather old VESA standard; they come with a 3-pin mini-DIN connector. The idea is very simple: a high voltage level means logical “1” and opens the left eye, while a low voltage level means logical “0” and opens the right eye.

To supply ground, +5V and an accurate square-wave synchronization signal to the glasses, we did some soldering and connected the mini-DIN to an Arduino, which in turn received the output from the phone’s audio jack.
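
The Arduino side needs only a few lines. The following is a minimal reconstruction of the idea rather than our exact sketch: the pin assignments are hypothetical, and it assumes the audio line has been biased into the ADC’s 0–5V range.

const int AUDIO_IN = A0;    // audio jack signal from the phone
const int SYNC_OUT = 2;     // mini-DIN sync line (high = left eye)
const int THRESHOLD = 512;  // midpoint of the 10-bit ADC range

void setup() {
  pinMode(SYNC_OUT, OUTPUT);
}

void loop() {
  // Square up the incoming audio waveform: above the threshold the
  // left eye opens, below it the right eye opens.
  digitalWrite(SYNC_OUT, analogRead(AUDIO_IN) > THRESHOLD ? HIGH : LOW);
}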

The Android software was a bit of a mess. Reaching a switching rate of 60 Hz was not simple given the slow drawing performance, and I am not sure what to attribute that to: the refresh rate of the display, or the technique we used to draw the images (although we accessed the canvas directly, bypassing the higher-level APIs for displaying a picture). By Saturday afternoon we had it running, with glitches every couple of seconds, but it gave some feeling of 3D depth. Or was that our exhausted imagination after not sleeping much during this crazy and awesome weekend?

Playlist generation for Audiogalaxy

I use Audiogalaxy and thought it would be nice to create M3U playlists for all my music folders. The result is this simple Python script:

import os

# Note: this is Python 2 code; os.path.walk was removed in Python 3.
AUDIOGALAXY_PLAYLIST_DIR = r"C:\Users\yan\Music\Audiogalaxy Playlists"
MUSIC_ROOT_FOLDER = "D:\\Music\\"
MUSIC_EXTENSIONS = [".mp3"]
PLAYLIST_EXT = ".m3u"

class Playlist:
    def __init__(self, dir):
        # Name the playlist after the directory's path relative to the
        # music root, e.g. "Rock_Queen" for D:\Music\Rock\Queen.
        self.__name = dir.split(MUSIC_ROOT_FOLDER, 1)[-1].replace(os.path.sep, "_")
        if not self.__name:
            self.__name = "Songs in root"
        self.__songs = []
        for fname in os.listdir(dir):
            fullname = os.path.join(dir, fname)
            if (os.path.splitext(fname)[1].lower() in MUSIC_EXTENSIONS
                    and not os.path.isdir(fullname)):
                self.__songs.append(fullname)

    def printme(self):
        print "%s - %d songs" % (self.__name, self.countSongs())
        #for song in self.__songs:
        #    print "\t%s" % (song,)

    def countSongs(self):
        return len(self.__songs)

    def save(self, dir):
        if 0 == self.countSongs():
            return
        f = open(os.path.join(dir, self.__name + PLAYLIST_EXT), "w")
        f.write("#EXTM3U\n")
        for song in self.__songs:
            f.write(song + "\n")
        f.close()

def handle_dir(target_dir, dirname, fnames):
    # Callback invoked by os.path.walk for every directory in the tree.
    p = Playlist(dirname)
    p.printme()
    p.save(target_dir)

def main():
    os.path.walk(MUSIC_ROOT_FOLDER, handle_dir, AUDIOGALAXY_PLAYLIST_DIR)

if __name__ == '__main__':
    main()

Just set MUSIC_ROOT_FOLDER to the location of your music files, set AUDIOGALAXY_PLAYLIST_DIR accordingly, and run the script.
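
For illustration, with hypothetical paths, a folder like D:\Music\Rock would yield a Rock.m3u file of the form:

#EXTM3U
D:\Music\Rock\Track01.mp3
D:\Music\Rock\Track02.mp3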

Pango parking Android application

I’ve published my first Android application! Well, no need to get too excited… It’s a simple proxy for the Pango cellular parking service. The application uses the SMS commands supported by Pango to activate and deactivate parking. For now, only the default parking city and area are supported.
I’m planning to add a combo box for choosing the city and parking area, and later perhaps add support for automatic location detection using GPS.

Enjoy.

https://market.android.com/details?id=com.pango.mobile

VoiceBrowsing Toolbar for IE

Long after writing the first version of the VoiceBrowsing Toolbar for Internet Explorer, it’s about time to mention it here. Surprised not to find a convenient browser plugin that enables navigation using voice commands, I decided to implement one. To keep things simple I relied on the Microsoft Speech API, accessible on Windows Vista and above through the .NET Framework.
I started by writing a .NET voice recognition engine that provides a registration and notification interface: given a dictionary of phrases to be recognized, it invokes a callback function supplied by the user. A small Internet Explorer toolbar makes this usable for navigating to websites or performing common operations such as going back, going forward or returning to the homepage. The plugin is configurable and lets the user specify keywords (trigger phrases) and the URL she would like to open once the speech recognition engine recognizes that keyword.
The plugin is currently available only for Internet Explorer, from version 6 onwards, on Windows Vista and above. In the future I plan to implement Firefox and Chrome extensions based on the same speech technology.

It’s available for download from www.voicebrowsing.net.
Comments and suggestions for further development would be highly appreciated.

JNI in a multi-process environment

Recently I was working on a project that made use of the Java Native Interface (JNI). JNI is usually used for calling native functions from Java code: for example, if you want to implement some platform-specific functionality or a particularly fast piece of code, you can use JNI to have the Java Virtual Machine call into your function. In that particular project we did the opposite, i.e. used JNI to call Java code from a C environment.
In order to execute Java code we first have to create a Java Virtual Machine to run it. This is done with the JNI_CreateJavaVM() function, which returns a pointer to the newly created Java VM and a pointer to the JNI environment through which we perform all our Java-related operations.
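
For reference, the creation sequence looks roughly like this minimal sketch (in C++ for brevity; the Java class and method names are hypothetical):

#include <jni.h>

int main() {
    JavaVM *jvm;
    JNIEnv *env;
    JavaVMOption options[1];
    options[0].optionString = (char *)"-Djava.class.path=.";
    JavaVMInitArgs vm_args;
    vm_args.version = JNI_VERSION_1_6;
    vm_args.nOptions = 1;
    vm_args.options = options;
    vm_args.ignoreUnrecognizedOptions = JNI_FALSE;

    // Creates the VM; env is the JNI environment used for all Java calls.
    if (JNI_CreateJavaVM(&jvm, (void **)&env, &vm_args) != JNI_OK)
        return 1;

    // Call a hypothetical static method: public static void Hello.greet()
    jclass cls = env->FindClass("Hello");
    jmethodID mid = cls ? env->GetStaticMethodID(cls, "greet", "()V") : NULL;
    if (mid)
        env->CallStaticVoidMethod(cls, mid);

    jvm->DestroyJavaVM();
    return 0;
}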

Once the code was written and everything should supposedly have worked, I ran into a problem that cost me a lot of debugging time. The Java code ran up to a certain point and then got stuck. I thought it was some problem within the Java code and went on debugging it, but everything seemed to be fine. It took us a while to get to the bottom of it, and here it goes:
The Java VM was created during the initialization of the C module. Later on the process fork()-ed, and our piece of Java code was actually invoked by the child process. I assumed that accessing the same virtual machine was fine as long as I did not reuse the pointer obtained from JNI_CreateJavaVM() in the parent process, but instead called JNI_GetCreatedJavaVMs() to retrieve a pointer to the existing virtual machine. The call did return a pointer successfully, but apparently this scenario of multiple processes sharing the same VM is not supported, and calling Java methods that way causes the program to hang. Once we created the Java VM in the context of the process that actually called the Java methods, the problem was solved.
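
In code, the arrangement that finally worked looks roughly like this sketch (C++ again, with the actual Java calls elided): fork() first, then create the VM only inside the process that calls into Java.

#include <jni.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void run_java_code() {
    JavaVM *jvm;
    JNIEnv *env;
    JavaVMInitArgs vm_args = {};
    vm_args.version = JNI_VERSION_1_6;

    // The child creates, uses and destroys its own VM.
    if (JNI_CreateJavaVM(&jvm, (void **)&env, &vm_args) != JNI_OK)
        return;
    // ... FindClass / GetStaticMethodID / Call* as usual ...
    jvm->DestroyJavaVM();
}

int main() {
    pid_t pid = fork();
    if (pid == 0) {           // child: the VM lives only here
        run_java_code();
        _exit(0);
    }
    waitpid(pid, NULL, 0);    // parent: makes no JNI calls at all
    return 0;
}
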
I hope you stumble upon this post shortly after you have encountered a similar problem, or better yet before, and that I have managed to save you some time.