I mainly started playing with Smart Mobile Studio because I wanted to write some stuff for my iPad. OK, there’s a small issue of possibly producing client applications for my job and maybe for some freeware/shareware stuff I’m thinking about, but this is not on the horizon yet. Therefore, iPad. (And I don’t have an iPhone and I don’t care about writing stuff for my wife’s Android, so – iPad.) And writing for iPad means supporting touch gestures.
[Yes, I know I could use Objective-C but – bleuch, really? – and I could use FireMonkey, but I don’t have a Mac, don’t want to buy one just for playing around, and can’t make the Platform Assistant run in a Hackintosh. Therefore, Smart.]
Touch is natively supported in Smart. Gestures are not (but that may change before the release). What’s the difference, you ask? The touch subsystem gives you information about separate fingers – finger has touched – another finger has touched – first finger was moved – both fingers were moved – while the gesture subsystem gives you a simple “zoom gesture in progress”. But hey, let’s start with touch. Lots and lots of stuff can be written with just the information about the fingers touching the screen.
And so I dug into demos and into RTL and found out that each control in Smart supports three touch events – OnTouchBegins, OnTouchMoves and OnTouchEnds. It was not hard to guess when each of them is called. I continued my search and found unit w3touch.pas, which defines event handler signatures for those events.
TTouchBeginsEvent = procedure (sender: TObject; Info: TW3TouchData);
TTouchMovesEvent = procedure (sender: TObject; Info: TW3TouchData);
TTouchEndsEvent = procedure (sender: TObject; Info: TW3TouchData);
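Smart compiles to JavaScript, so these three events presumably sit on top of the standard DOM touchstart/touchmove/touchend events. A minimal raw-JS sketch of such a mapping – the dispatcher and handler names here are my own illustration, not Smart RTL names:

```javascript
// Route the three DOM touch event types to begins/moves/ends handlers,
// roughly mimicking OnTouchBegins/OnTouchMoves/OnTouchEnds.
function makeTouchDispatcher(handlers) {
  const map = {
    touchstart: handlers.onTouchBegins,
    touchmove:  handlers.onTouchMoves,
    touchend:   handlers.onTouchEnds,
  };
  return function (ev) {
    const h = map[ev.type];
    if (h) h(ev);          // pass the raw event through to the handler
  };
}

// In a browser you would register it for all three types, e.g.:
//   ['touchstart', 'touchmove', 'touchend'].forEach(t =>
//     element.addEventListener(t, dispatch));
// Here we just feed it synthetic event objects.
const log = [];
const dispatch = makeTouchDispatcher({
  onTouchBegins: ev => log.push('begins'),
  onTouchMoves:  ev => log.push('moves'),
  onTouchEnds:   ev => log.push('ends'),
});
dispatch({ type: 'touchstart' });
dispatch({ type: 'touchend' });
```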
The Sender parameter represents the control being touched, while Info contains two lists, Touches and ChangedTouches, both of which are lists of TW3Touch objects (I removed some unimportant stuff in the snippet below).
TW3Touch = class(TObject)
  property Identifier: Integer read FIdent write FIdent;
  property ScreenX: Integer read FScreenX write FScreenX;
  property ScreenY: Integer read FScreenY write FScreenY;
  property ClientX: Integer read FClientX write FClientX;
  property ClientY: Integer read FClientY write FClientY;
  property PageX: Integer read FPageX write FPageX;
  property PageY: Integer read FPageY write FPageY;
end;
At this point I was deeply confused. Why three pairs of coordinates? And why two lists? In a moment of crisis I even turned to the official Apple documentation!
Jokes aside, Apple documentation is actually quite good but I still didn’t understand every nuance of the touch system and so I wrote a simple program that just logged all that information. You can test it at www.gabrijelcic.org/TouchLog/ if you have a touch-enabled device. [I don’t know why, but it correctly supports multitouch on iOS while only one touch is reported on Android; more than one simultaneous touch starts resizing the browser window.]
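For the record, the three coordinate pairs do have standard DOM meanings: ScreenX/Y is relative to the physical screen, ClientX/Y to the browser viewport, and PageX/Y to the whole document (so it includes any scroll offset). A tiny sketch of that relationship, assuming standard browser behaviour – the function name is mine, not part of any API:

```javascript
// page coordinates = client (viewport) coordinates + scroll offset.
// Screen coordinates additionally depend on where the browser window
// sits on the physical screen, so they are not derivable from these two.
function pageFromClient(clientX, clientY, scrollX, scrollY) {
  return { pageX: clientX + scrollX, pageY: clientY + scrollY };
}

// Example: with the page scrolled down 100px, a touch at client (50, 20)
// has page coordinates (50, 120).
const p = pageFromClient(50, 20, 0, 100);
```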
Each event is logged in one line. The first character represents the type of the event (OnTouchBegins, OnTouchMoves, OnTouchEnds). Then both lists are printed: Touches is prefixed by T: and ChangedTouches by M: (as in modified; I don’t know why I didn’t use C: – maybe my DOS roots stopped me :)). Each touch in the Touches list is displayed in the form Identifier:ClientX,ClientY, and each touch in the ChangedTouches list in the form Identifier:ScreenX,ScreenY/ClientX,ClientY/PageX,PageY. Below is a sample screenshot from this application with some interesting points marked up.
- When a finger touches, both lists contain the same touch event. Somehow, all three coordinate pairs are the same although I don’t believe they should be. I’ve reported this as a bug.
- When a finger moves, both lists also contain the same touch event.
- Then I touched down with a second finger and got one touch event in the ChangedTouches list but two touch events in the Touches list. It looked like the ChangedTouches list reports only fingers that have changed state since the last touch event, while the Touches list contains all active (pressed) fingers. Further observations confirmed this guess.
- When I released a finger (at the time I had three fingers on the tablet), the released finger was listed only in the ChangedTouches list, which again makes sense as it was no longer an active finger.
- With some trial and error I found out that it is possible to get two (and probably more) new finger presses in one Begins event and it is also possible to remove two (and probably more) fingers in one Ends event.
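The observations above boil down to a simple bookkeeping rule: ChangedTouches tells you what changed, Touches tells you what is currently down. As a sanity check, you can rebuild the active set yourself from ChangedTouches alone; a sketch in the JavaScript that Smart ultimately compiles to, using synthetic event objects rather than real DOM events:

```javascript
// Track active fingers by identifier, using only the changed touches.
// After each event, 'active' should match what the Touches list reports.
function applyTouchEvent(active, ev) {
  for (const t of ev.changedTouches) {
    if (ev.type === 'touchend') {
      active.delete(t.identifier);   // finger lifted: no longer active
    } else {
      active.set(t.identifier, t);   // new or moved finger
    }
  }
  return active;
}

const active = new Map();
applyTouchEvent(active, { type: 'touchstart', changedTouches: [{ identifier: 0 }] });
applyTouchEvent(active, { type: 'touchstart', changedTouches: [{ identifier: 1 }] });
applyTouchEvent(active, { type: 'touchend',   changedTouches: [{ identifier: 0 }] });
// active now contains only identifier 1
```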
Armed with that knowledge, I put together a simple paint program called MultiPaint. It supports mouse, single touch (Android) and multiple touches (iOS). And then my kid and I had a field day. :)
The code redirects all three touch events to the same handler. After all, I only wanted to know about the active touches so I could paint them, and this information is always available in the Touches list.
procedure TPaintBoxForm.HandleTouch(Sender: TObject; info: TW3TouchData);
begin
  for var i := 0 to info.Touches.Count - 1 do
    PaintFinger(info.Touches[i].Identifier, info.Touches[i].ClientX, info.Touches[i].ClientY);
end;

procedure TPaintBoxForm.PaintFinger(id, X, Y: integer);
begin
  FCanvas.Canvas.StrokeStyle := ColorToWebStr(FColors[id mod CMaxFingers]);
  FCanvas.Canvas.FillStyle := FCanvas.Canvas.StrokeStyle;
  FCanvas.Canvas.ArcF(X, Y, 5, 0, PI*2, false);
end;
FColors is just an array of random colors and ColorToWebStr is an RTL function which creates a string like '#808080'.
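The color helper itself is trivial to emulate; a JavaScript equivalent of what such a function presumably does (my own sketch, not the RTL code):

```javascript
// Convert a 24-bit RGB integer into a web color string like '#808080'.
function colorToWebStr(color) {
  return '#' + color.toString(16).padStart(6, '0');
}

const c = colorToWebStr(0x808080);  // '#808080'
```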
At the end I have to give a warning to all alpha testers. If you’ll be playing with touch, replace TW3TouchData.Update (in the w3touch unit) with this fixed version (only the relevant guards are shown).
if assigned(FTouches) then
  ... // update the Touches list
if assigned(FChanged) then
  ... // update the ChangedTouches list
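Translated to the JavaScript level, the gist of that fix is simply to tolerate events where one of the lists is missing; a hedged sketch (the function name is mine), mirroring the assigned() guards:

```javascript
// Defensively extract both touch lists from an event-like object,
// substituting an empty list when one is absent.
function readTouchLists(ev) {
  return {
    touches:        ev.touches        ? Array.from(ev.touches)        : [],
    changedTouches: ev.changedTouches ? Array.from(ev.changedTouches) : [],
  };
}

// An event with no changedTouches no longer blows up:
const r = readTouchLists({ touches: [{ identifier: 0 }] });
```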