Facilitating energy efficiency on mobile devices

I’ve recently read the Bartendr paper. It explores an algorithm for improving energy efficiency on mobile devices, based on predicting good conditions for transmitting data over the cellular network.

The Bartendr approach to communication scheduling is based on the fact that mobile devices consume more power when the signal is weak, to compensate for the low SNR of the transmission channel. Applications should therefore prefer to communicate when the signal is strong, which is indicative of a good transmission channel.
By anticipating good or bad channel conditions, it is possible to determine the periods of time that are favorable for energy-saving communication. Since signal strength is highly correlated with the user’s location relative to the cellular antennas, a user moving along a given route experiences predictable changes in signal strength. By learning the signal-strength profile as a function of location along a route, it is possible to anticipate the time windows that are favorable for energy-saving communication.
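The core idea can be sketched in a few lines (a toy illustration, not Bartendr’s actual algorithm): given a learned signal-strength profile along a route, pick the stretches where the signal exceeds some threshold.

```python
# Toy sketch of Bartendr-style window prediction (illustrative only):
# given a learned signal-strength profile along a route, pick the
# stretches where the signal is strong enough for cheap transmission.

def favorable_windows(profile, threshold):
    """profile: list of (time_sec, signal_dbm) samples along a route.
    Returns (start, end) time windows where signal >= threshold."""
    windows = []
    start = None
    for t, signal in profile:
        if signal >= threshold and start is None:
            start = t
        elif signal < threshold and start is not None:
            windows.append((start, t))
            start = None
    if start is not None:
        windows.append((start, profile[-1][0]))
    return windows

profile = [(0, -95), (10, -70), (20, -60), (30, -90), (40, -65), (50, -62)]
print(favorable_windows(profile, -75))  # -> [(10, 30), (40, 50)]
```

The real scheme has to account for prediction error and for the energy cost of deferring traffic, but the window extraction above captures the basic mechanism.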

While the algorithm was evaluated in simulation as well as on an actual device (by incorporating the scheduling decisions into the test application), there is currently no practical framework for implementing Bartendr or a similar scheme, since it requires changes to the surrounding software architecture. This post discusses several architectural changes that would be needed to accommodate such schemes.

While mobile applications can use timer mechanisms to run callbacks asynchronously at specific times, this requires each application to be aware of the right times to schedule its execution.
Imagine how much simpler things would be for the developer if she could just push a synchronization request into a queue from which tasks are popped according to a power-efficient schedule maintained by the operating system. This could be a nice feature for a mobile operating system.
It would also be useful to have something like the Strategy design pattern for scheduling, allowing a different scheduling algorithm to be plugged in according to the user’s choice.
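Such an OS-level facility might look roughly like this (a hypothetical API sketch in Python; all the names are invented for illustration): the application pushes sync requests into a queue, and a pluggable strategy decides when the OS drains it.

```python
# Hypothetical sketch of an OS-level sync queue with pluggable
# scheduling strategies (all names are invented for illustration).

class NaiveStrategy:
    def should_sync(self, signal_dbm):
        return True  # sync immediately, regardless of conditions

class SignalAwareStrategy:
    def __init__(self, threshold_dbm):
        self.threshold_dbm = threshold_dbm
    def should_sync(self, signal_dbm):
        return signal_dbm >= self.threshold_dbm  # wait for a strong signal

class SyncQueue:
    def __init__(self, strategy):
        self.strategy = strategy
        self.pending = []
    def push(self, request):
        self.pending.append(request)
    def tick(self, signal_dbm):
        """Called periodically by the OS; drains the queue when the
        current strategy deems conditions favorable."""
        if self.pending and self.strategy.should_sync(signal_dbm):
            done, self.pending = self.pending, []
            return done
        return []

queue = SyncQueue(SignalAwareStrategy(threshold_dbm=-75))
queue.push("upload photos")
print(queue.tick(signal_dbm=-90))  # weak signal: nothing synced -> []
print(queue.tick(signal_dbm=-60))  # strong signal -> ['upload photos']
```

Swapping in a different strategy object is all it takes to change the scheduling policy, which is exactly what the Strategy pattern buys us here.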


Dropbox Recovery Tools: When the Dropbox client goes crazy…

It all started a couple of days before a paper submission deadline, when I really needed my data accessible and my code available. I store all my academic work in my Dropbox folder, so it is backed up to the cloud. Lo and behold, Murphy entered the game: my Mac halted, and when I forced a reboot the Dropbox client apparently ended up in an inconsistent state. This drove Dropbox to delete tens of thousands of files from my account, both remotely and locally, right under the tips of my typing fingers. The only thing I had left was the previous remote revisions of the files.
I don’t have Dropbox Pro and therefore don’t have PackRat, which I’m not sure would have helped in this case anyway. Besides, I didn’t actually care about an indefinitely long history, but about quick and convenient recovery. I decided to take a look at the Dropbox API, and found it quite easy, using the supplied Python Dropbox SDK, to write several tools to help me search for and recover my deleted files.
The Dropbox Python SDK can either be downloaded and installed using setup.py, or installed with pip:

# pip install dropbox

In order to access your Dropbox you need to create a Dropbox application linked to your account. This can be done through the Dropbox App Console (https://www.dropbox.com/developers/apps), and it gets you an application key and an application secret, which the Python scripts later use to obtain a token that grants access to your Dropbox storage.
I store the application key and secret in a JSON configuration file, and wrote a tiny helper script to generate this JSON (make_config.sh):

echo "{
  \"app_key\" : \"$1\",
  \"app_secret\" : \"$2\"
}"
The code to obtain an access token, used for all further actions, looks like this:

import json
import dropbox

def get_token():
  config = open(CONFIG_FILE, 'r')
  config_data = json.load(config)
  app_key = config_data['app_key']
  app_secret = config_data['app_secret']

  # Have the user sign in and authorize this token
  flow = dropbox.client.DropboxOAuth2FlowNoRedirect(app_key, app_secret)
  authorize_url = flow.start()
  print '1. Go to: ' + authorize_url
  print '2. Click "Allow" (you might have to log in first)'
  print '3. Copy the authorization code.'
  code = raw_input("Enter the authorization code here: ").strip()

  # This will fail if the user enters an invalid authorization code
  access_token, user_id = flow.finish(code)
  return access_token

The access token is stored in a cookie file for later use:

def create_client():
  try:
    cookie_content = open(COOKIE_FILE, 'r').read()
    access_token = json.loads(cookie_content)
    client = dropbox.client.DropboxClient(access_token)
  except Exception:
    access_token = get_token()
    client = dropbox.client.DropboxClient(access_token)
    cookie_content = json.dumps(access_token)
    open(COOKIE_FILE, 'w').write(cookie_content)
  return client

Once we have initialized a client we have access to all kinds of methods, such as metadata, revisions, restore, file_delete, etc. Several utilities are provided:
find_deleted – finds deleted files under a given path (optionally filtered by date)
ls – lists files (optionally filtered by date)
restore_file – restores files; paths are provided through standard input
delete_files – deletes files; paths are provided through standard input

Using these utilities I was able to restore all my accidentally deleted files in a matter of hours. One small technical detail I had to handle was the rate limiting imposed by Dropbox: once in a while an operation would raise an exception if too many actions were attempted in a short amount of time. I had to catch this and retry the operation.
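The retry logic amounts to a small wrapper along these lines (a sketch; the exact exception type raised on rate limiting depends on the SDK version, so this catches broadly and backs off):

```python
# Simple retry-with-backoff helper (sketch; the exact rate-limit
# exception type depends on the Dropbox SDK version, so we catch
# broadly here and back off exponentially between attempts).
import time

def with_retries(operation, max_attempts=5, base_delay=1.0):
    """Run operation(), retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))

# e.g. wrap each API call:
#   with_retries(lambda: client.restore(path, rev))
```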

The code is available at https://github.com/ymcrcat/DropboxRecoveryTools.

LED T-Shirt

During the last couple of months I’ve been working on a fun side-project with my friend Shlomoh Oseary. For a long time I had wanted to make a T-shirt with an equalizer display that lights up in correspondence with surrounding sounds and music, and once I had a buddy excited about the idea too, we started working.

We decided to use dedicated e-textile components. The Arduino Lilypad, with its 8 MHz ATmega processor, seemed suitable for the task. Next we had to figure out how we would drive the LEDs. The naive approach of connecting each LED to ground and to one of the Lilypad’s outputs would severely limit the number of LEDs we could drive. After searching a bit we found that what we wanted was a LED matrix. The principle of a LED matrix is that all the LEDs in the same row or column are connected: in our case, all the minus legs of the LEDs in the same column are shorted together, and all the plus legs of the LEDs in the same row are shorted together. To light up a LED we feed positive voltage to the corresponding row and short the corresponding column to ground. To light up multiple LEDs, our LED-matrix driver code loops over all the rows and columns, lighting each LED that should be on for a fraction of a second at a time; repeated fast enough, this achieves the effect of those LEDs being constantly on.
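The multiplexing loop looks roughly like this (sketched in Python for readability; the real driver runs on the Lilypad in C, and set_row/set_col are stand-ins for digitalWrite on the row/column pins):

```python
# Illustration of LED-matrix multiplexing (the real driver runs on the
# Lilypad in C; set_row/set_col are stand-ins for digitalWrite calls).

def scan_frame(frame, set_row, set_col, rows=7, cols=8):
    """frame[r][c] is True if the LED at row r, column c should appear lit.
    One pass briefly lights each required LED; repeated fast enough,
    the eye sees them all as constantly on."""
    for r in range(rows):
        set_row(r, True)            # feed positive voltage to this row
        for c in range(cols):
            if frame[r][c]:
                set_col(c, True)    # short this column to ground...
                set_col(c, False)   # ...for a fraction of a second
        set_row(r, False)

# Dry run with dummy pin functions, counting how many LEDs get lit
lit = []
frame = [[(r + c) % 2 == 0 for c in range(8)] for r in range(7)]
scan_frame(frame,
           set_row=lambda r, on: None,
           set_col=lambda c, on: lit.append(c) if on else None)
print(len(lit))  # one scan pass lights each checkerboard LED once -> 28
```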

Testing the microphone and the FFT calculation

Each column of the LED matrix represents a frequency range, with lower frequencies on the right. The more energy is sensed in a certain bin, the more LEDs in that column are turned on. To find the energy in each frequency range we compute an FFT over a window of 128 samples. The sampling frequency was chosen to be 4000 Hz, which according to the Nyquist theorem covers tones up to 2000 Hz. A predefined threshold (which we need to calibrate) is subtracted from the calculated energy to filter out small fluctuations, and the outcome is mapped to the number of rows of the LED matrix to represent an energy level.
We used an existing FFT implementation for Arduino from http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1286718155.
There is still a final touch missing from the algorithm: applying a low-pass filter to remove frequencies higher than 2000 Hz from the recorded signal prior to the FFT calculation.
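The energy-per-column computation can be sketched in plain Python (the shirt uses the fixed-point Arduino FFT linked above; the naive DFT, threshold, and scaling constants below are illustrative stand-ins):

```python
# Sketch of mapping spectral energy to LED rows (pure-Python naive DFT
# for illustration; the shirt uses the fixed-point Arduino FFT).
import math

def bin_energies(samples, num_bins):
    """Naive DFT: total magnitude per frequency range over the window."""
    n = len(samples)
    half = n // 2  # usable bins up to the Nyquist frequency
    mags = []
    for k in range(half):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    per_bin = half // num_bins
    return [sum(mags[b * per_bin:(b + 1) * per_bin]) for b in range(num_bins)]

def to_led_rows(energy, threshold, num_rows, full_scale):
    """Subtract the noise threshold and map the rest to lit rows."""
    level = max(0.0, energy - threshold)
    return min(num_rows, int(num_rows * level / full_scale))

# A 500 Hz tone sampled at 4000 Hz over a 128-sample window:
# 500 Hz falls in DFT bin 16, i.e. column 2 of 8 (bins 16..23).
samples = [math.sin(2 * math.pi * 500 * t / 4000.0) for t in range(128)]
energies = bin_energies(samples, num_bins=8)
print(max(range(8), key=lambda b: energies[b]))  # -> 2
```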

Connecting the electret microphone and the power supply to the Lilypad.

LED T-shirt @ work

When beauty and electronics meet… (Julia Shteingart modeling)


The project’s code (except for the FFT implementation, which can be downloaded using the link above, and the TimerOne library, which can be downloaded from the Arduino site) is available through SVN under



Thanks to Shlomoh’s mom for sewing.

Playlist generation for AudioGalaxy

I use AudioGalaxy and thought it would be nice to create M3U playlists for all my music folders. The result is this simple Python script:

import os
import string

MUSIC_ROOT_FOLDER = r"C:\Users\yan\Music"
AUDIOGALAXY_PLAYLIST_DIR = r"C:\Users\yan\Music\Audiogalaxy Playlists"
MUSIC_EXTENSIONS = [".mp3", ".wma", ".m4a", ".ogg"]  # adjust as needed
PLAYLIST_EXT = ".m3u"

class Playlist:
    def __init__(self, dir):
        # Name the playlist after the path relative to the music root
        self.__name = string.replace(string.split(dir, MUSIC_ROOT_FOLDER, -1)[1],
                                     os.path.sep, "_")
        if not self.__name:
            self.__name = "Songs in root"
        self.__songs = []
        for file in [file for file in os.listdir(dir) if file not in [".", ".."]
                     and os.path.splitext(file)[1] in MUSIC_EXTENSIONS]:
            fullname = os.path.join(dir, file)
            if not os.path.isdir(fullname):
                self.__songs.append(fullname)

    def printme(self):
        print "%s - %d songs" % (self.__name, self.countSongs())
        #for song in self.__songs:
        #    print "\t%s" % (song,)

    def countSongs(self):
        return len(self.__songs)

    def save(self, dir):
        if 0 == self.countSongs():
            return
        file = open(os.path.join(dir, self.__name + PLAYLIST_EXT), "w")
        for song in self.__songs:
            file.write(song + "\n")
        file.close()

def handle_dir(target_dir, dirname, fnames):
    p = Playlist(dirname)
    p.printme()
    p.save(target_dir)

def main():
    os.path.walk(MUSIC_ROOT_FOLDER, handle_dir, AUDIOGALAXY_PLAYLIST_DIR)

if __name__ == '__main__':
    main()

Just set MUSIC_ROOT_FOLDER to the location of your music files and AUDIOGALAXY_PLAYLIST_DIR accordingly, and run the script.

VoiceBrowsing Toolbar for IE

Long after writing the first version of the VoiceBrowsing Toolbar for Internet Explorer, it’s about time to mention it here. Surprised at not finding a convenient browser plugin that enables navigation using voice commands, I decided to implement one. To keep things simple I relied on the Microsoft Speech API, accessible on Windows Vista and above through the .NET Framework.
I started by writing a .NET voice-recognition engine that provides a registration and notification interface: given a dictionary of phrases to be recognized, it invokes a callback function provided by the user. A small Internet Explorer toolbar makes it usable for navigating to websites or performing common operations such as browsing back, browsing forward, or going to the homepage. The plugin is configurable and lets the user specify keywords (trigger phrases) and the URL she would like to go to once a keyword is recognized by the speech-recognition engine.
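The registration/notification interface is conceptually simple; here it is sketched in Python (the real engine is .NET on top of the Microsoft Speech API, so all names below are illustrative):

```python
# Conceptual sketch of the engine's registration/notification interface
# (the real implementation is .NET on the Microsoft Speech API; the
# class and method names here are illustrative).

class PhraseRecognizer:
    def __init__(self):
        self.handlers = {}

    def register(self, phrase, callback):
        """Register a trigger phrase and the callback to invoke for it."""
        self.handlers[phrase.lower()] = callback

    def on_recognized(self, phrase):
        """Called by the speech engine whenever a phrase is recognized."""
        callback = self.handlers.get(phrase.lower())
        if callback:
            callback(phrase)

# e.g. the toolbar maps a keyword to a navigation action:
visited = []
engine = PhraseRecognizer()
engine.register("home", lambda p: visited.append("http://www.example.com"))
engine.on_recognized("Home")  # matching is case-insensitive
print(visited)  # -> ['http://www.example.com']
```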
The plugin is currently available only for Internet Explorer, starting with version 6, on Windows Vista and above. In the future I plan to implement Firefox and Chrome extensions based on the same speech technology.

It’s available for download from www.voicebrowsing.net.
Comments and suggestions for further development would be highly appreciated.

JNI in a multi-process environment

Recently I was working on a project that made use of the Java Native Interface (JNI). JNI is usually used for calling native functions from Java code: for example, if you want to implement some platform-specific functionality or a particularly fast piece of code, you can use JNI to interact with the Java Virtual Machine calling your function. In that particular project we did the opposite, i.e. used JNI to call Java code from a C environment.
In order to execute Java code we first have to create a Java Virtual Machine to run it. This is done using the JNI_CreateJavaVM() function, which returns a pointer to the newly created Java VM and a pointer to the JNI environment through which we perform all our Java-related operations.

Once the code was written and everything was supposed to work, I ran into a problem that cost me a lot of debugging time. When the Java code was called it ran up to a certain point and then got stuck. I thought it was a problem within the Java code and went on debugging it, but everything seemed to be OK. It took us a while to get to the bottom of it, and here it goes:
The Java VM was created during the initialization of the C module. Later on, the process fork()-ed, and our piece of Java code was actually invoked by the forked child. I assumed that accessing the same virtual machine was fine as long as I didn’t reuse the pointer obtained from JNI_CreateJavaVM in the parent process, but instead called JNI_GetCreatedJavaVMs to retrieve a pointer to the existing virtual machine. Indeed, the call successfully returned a pointer, but apparently this scenario of multiple processes sharing the same VM is not supported, and calling Java methods that way causes the program to hang. Once we created the Java VM in the context of the process that actually called the Java methods, the problem was solved.
I hope you stumbled upon this post shortly after encountering a similar problem, or better yet before, and that I managed to save you some time.