Windows Touch vs PyMT: Why multi-touch programming on Windows is too complicated!

Now, I am obviously a bit partial on this topic, having worked on PyMT for quite a while.  But in doing so, I think we’ve made some valuable observations about what is involved when you program for multi-touch input.  If you want to code multi-touch interactions or applications, context becomes very important.

In the rest of this blog post, I’ll show you what I mean by context, and why e.g. Windows Touch makes life difficult if you want to program multi-touch.  I’ll show you how to rewrite a Windows Touch example project (5 C# source files and over 400 lines of code) in Python using PyMT (1 source file with 12 lines of code).  Yes, 12 lines, you read correctly (and then there is the whole thing about it also just running on Linux or OS X…but we’ll leave that for another blog post).

The Problem

Here is the problem:  When you start programming with multiple simultaneous input cursors (e.g. multi-touch), things very quickly turn very funky and very different from programming for, say, a single mouse.  There is a lot of added complexity just in handling the state for multiple inputs, without even beginning to talk about interpreting them in relation to each other.  To make things easier, I think we need APIs and SDKs that provide a context in which multi-touch input makes sense.  Instead, Windows (and other APIs) try to simplify things by giving you a more abstract view by default.  That can make things simpler, but these approaches also tremendously limit the kinds of things you can do with multi-touch!
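To make that state problem concrete, here is a minimal sketch (plain Python, all names are mine, not any real API) of the bookkeeping that multiple simultaneous cursors force on you.  A mouse handler gets one (x, y); a multi-touch handler gets an id with every event, and all state has to be keyed by it:

```python
# Hypothetical sketch: per-cursor state once more than one
# pointer can be down at a time.  Every piece of state must be
# keyed by the touch id that the event carries.

positions = {}  # touch id -> latest (x, y)

def on_touch_down(touch_id, x, y):
    positions[touch_id] = (x, y)

def on_touch_move(touch_id, x, y):
    positions[touch_id] = (x, y)

def on_touch_up(touch_id):
    del positions[touch_id]

on_touch_down(1, 10, 10)
on_touch_down(2, 50, 50)   # a second finger, while the first is still down
on_touch_move(1, 12, 12)   # only the first finger moves
```

And this is just raw positions; anything you actually want to do (strokes, gestures, hit-testing) multiplies this bookkeeping.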

Windows 7 for example doesn’t give you the raw multi-touch information by default.  Touch or stylus input generates mouse events, and your application can choose to handle some gesture events that windows has pre-defined (like ‘pan’ or ‘zoom’).  Now I think that’s great if that’s all you want.  Especially for legacy support (which being Windows is obviously a priority).  It makes sense to do this, so that old applications have some way of working with the new input devices. 

But let’s say you actually want to write an application that takes full advantage of multi-touch input. There are some videos over at channel9 that talk about the Windows Touch API in Windows 7, and what you have to do if you want to provide the full multi-touch experience.  There is also a nice article about it here.  But if you opt for true multi-touch, it literally gives you raw information.  Even if you are pretty familiar with the Windows APIs and know how to handle the messaging system and all that jazz, multi-touch programming isn’t going to be much fun with this API.

Let me show you what I’m talking about:

Windows Touch ScratchPad Example App

The Windows 7 SDK comes with an example application called MTScratchPad.  You can read about it here or download it here to take a closer look, if you don’t have the SDK installed.  The application draws a line for every touch on the display while the finger is down.  Here is a screenshot (from the MSDN documentation, I hope they don’t mind me using their picture):

Screenshot from the ScratchPad example in the Windows SDK

If you take a look at the documentation or code, you will see that you, as the developer, have to do a lot of things to make the application do what you want.  You have to override the WndProc method of your window to catch WM_TOUCH messages, decode the touch structures for each message, define your event handlers, call them yourself based on which WM_TOUCH message you got, and keep track of all the strokes yourself in a collection.  The example includes a class to represent a stroke and draw those lines (over 170 lines of code).  The main source file, “WMTouchForm”, has over 250 lines of code (that’s with comments and blank lines removed, mind you), and of course Visual Studio generates a whole bunch of code and some other files.  In PyMT I can write that entire application in 12 lines of code.
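To give a feel for the shape of that hand-rolled plumbing, here is a rough sketch in Python (this is illustrative, not the example’s actual C# code; the message names only mirror the WM_TOUCH flags): decode each raw message, route it to the right logic yourself, and maintain the stroke collection yourself:

```python
# Illustrative sketch of the dispatch the Windows example hand-rolls.
# The real code decodes TOUCHINPUT structures in an overridden
# WndProc and raises its own events; here that all collapses into
# one routing method so the bookkeeping is visible.

TOUCH_DOWN, TOUCH_MOVE, TOUCH_UP = range(3)

class ScratchPad:
    def __init__(self):
        self.strokes = {}    # in-progress strokes, keyed by touch id
        self.finished = []   # completed strokes, redrawn every frame

    def wnd_proc(self, msg, touch_id, x, y):
        if msg == TOUCH_DOWN:
            self.strokes[touch_id] = [(x, y)]
        elif msg == TOUCH_MOVE:
            self.strokes[touch_id].append((x, y))
        elif msg == TOUCH_UP:
            self.finished.append(self.strokes.pop(touch_id))

pad = ScratchPad()
pad.wnd_proc(TOUCH_DOWN, 1, 0, 0)
pad.wnd_proc(TOUCH_MOVE, 1, 3, 4)
pad.wnd_proc(TOUCH_UP, 1, 3, 4)
```

Even this toy version needs two collections and a manual routing step; the real thing also has to unpack C structs from window messages, which is where most of those 400+ lines go.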

PyMT ScratchPad

Here is the PyMT version, along with a video explaining the code and showing a slightly more interesting example:

PyMT Video Tutorial 1 from Thomas Hansen on Vimeo.


from pymt import *

class Tracer(MTWidget):
    def on_touch_down(self, touch):
        touch.userdata['line'] = list(touch.pos)
    def on_touch_move(self, touch):
        touch.userdata['line'] += list(touch.pos)
    def draw(self):
        for touch in getAvailableTouches():
            drawLine(touch.userdata['line'])

w = MTWindow()
w.add_widget(Tracer(size=w.size))
runTouchApp()

Other SDKs and multi-touch programming challenges

PyMT tries to give you a context in which multi-touch programming makes sense.  This involves making it easy to handle events and learn more about them, providing functionality to interpret and deal with those events, and offering a collection of classes that encapsulate useful interactions and event processing.  Now, if I’m honest, it’s unfair to compare it to the Windows Touch API, given that Windows Touch has to be part of and fit in with the overall Windows API.  But the point I’m trying to make is that we need to rethink the way we program for multi-touch and other novel input devices (just like we changed programming paradigms when we went from text to GUI interfaces).  There are some other multi-touch SDKs out there that are probably more akin to what PyMT tries to do.  Some are proprietary (e.g. Snowflake), and others open source (e.g. MultiTouchVista).  I haven’t used or explored any of them, but I would welcome any feedback on how they approach some of the things I’m talking about here.
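One concrete example of what “interpreting events in relation to each other” means: deriving a zoom factor from how far apart two fingers have moved.  This is the kind of computation a multi-touch context should hand you; here is a minimal standalone sketch of the idea (plain Python, the function name is mine):

```python
import math

def pinch_scale(p1_old, p2_old, p1_new, p2_new):
    """Zoom factor implied by two fingers moving: the ratio of
    the new distance between them to the old distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_new, p2_new) / dist(p1_old, p2_old)

# Two fingers moving from 10 units apart to 20 units apart
# imply a 2x zoom.
scale = pinch_scale((0, 0), (10, 0), (0, 0), (20, 0))
```

The computation itself is trivial; the hard part, which is exactly what a raw-message API leaves to you, is reliably knowing which two touches belong to the gesture and what their previous positions were.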

I have to say a little bit about the Surface SDK.  It looks much better than the other multi-touch APIs I’ve gotten to look at. But it only works on the $15,000 Surface, and it’s proprietary, so I can’t really use or test it.  It does seem to have a great collection of surface/multi-touch controls.  I haven’t had a chance to check out how it stacks up if you want to experiment with your own interaction ideas, or more radical interface ideas.  From the video it seems very much like they are providing ready-made controls for developers in an attempt to unify the end-user experience.  PyMT has a whole bunch of controls (widgets) too, but they are aimed more at giving the developer something to work with than at unifying the experience.

I do think unifying the interaction is important at some point, but I think it’s a little early for that.  I wish we would focus a little bit more on providing developers with tools and APIs to let them be as creative as possible, instead of giving them tools to build the same kinds of applications.  Let’s see what we can come up with first, before deciding what the controls need to be and how they work.  I don’t think we are ready to unify either the SDK/API approach or the interactions/controls for multi-touch and surface computing.  The interaction paradigm is so revolutionary that I think we need to adapt our development tools to it and explore the interaction space.  Instead, I think people are jumping the gun by trying to standardize the interface while using the development paradigms we used for the GUI.

This entry was posted in HCI, Uncategorized, multi-touch, programing, python, windows.


  1. Posted October 22, 2009 at 12:13 pm | Permalink

    This is a great writeup/observation. Thanks Thomas.

  2. Posted November 22, 2009 at 3:36 am | Permalink

I tried the code which you demonstrated in the video. I am getting the following error. I am a newbie so I don’t know much.
    File “C:\Python26\”, line 23, in
    File “C:\Python26\lib\site-packages\pymt\”, line 265, in runTouchApp
    File “C:\Python26\lib\site-packages\pyglet\app\”, line 63, in run
    self._timer_func(0, 0, timer, 0)
    File “C:\Python26\lib\site-packages\pyglet\app\”, line 84, in _timer_func
    sleep_time = self.idle()
    File “C:\Python26\lib\site-packages\pymt\”, line 153, in idle
    File “C:\Python26\lib\site-packages\pymt\”, line 143, in dispatch_input
    self.post_dispatch_input(type=type, touch=touch)
    File “C:\Python26\lib\site-packages\pymt\”, line 90, in post_dispatch_input
    listener.dispatch_event(‘on_touch_down’, touch)
    File “C:\Python26\lib\site-packages\pyglet\window\”, line 1217, in dispatch_event
    EventDispatcher.dispatch_event(self, *args)
    File “C:\Python26\lib\site-packages\pyglet\”, line 349, in dispatch_event
    getattr(self, event_type)(*args)
    File “C:\Python26\lib\site-packages\pymt\ui\”, line 321, in on_touch_down
    if w.dispatch_event(‘on_touch_down’, touch):
    File “C:\Python26\lib\site-packages\pymt\ui\widgets\”, line 339, in dispatch_event
    if func(*args):
    File “C:\Python26\”, line 15, in on_touch_down
    KeyError: ‘line’

    Your help will be of great importance to me.


  3. Necrocyber
    Posted November 24, 2009 at 4:29 am | Permalink

    Very nice man … this´s really cool ….i gonna try that now \o/

  4. mohy
    Posted November 25, 2009 at 2:09 am | Permalink


    i loved ur pymt frame work, and tried to contact u on but i donno maybe my comment didnt sent

    anyways I tested it on w7 with multi touch IR display panel

    but pymt couldn’t understand the data packet coming from my display, because it has a special API, while i am a flash designer and non-python guy, please help me to implement and use ur pymt with my display

    i am looking forward to learn more about pymt and python

  5. Posted January 4, 2010 at 6:53 am | Permalink

    Are there any .ocx’s that help do multi-touch?
