Wednesday, December 30, 2009

A gift to all multithreaded Delphi programmers

A (very) prerelease version 1.05, available via SVN or as a ZIP archive.

I’ve managed to produce two interesting data structures:

  • TOmniQueue (existing class TOmniQueue was renamed to TOmniBoundedQueue) is a dynamically allocated, O(1) enqueue and dequeue, threadsafe, microlocking queue. The emphasis is on dynamically allocated. In other words – it grows and shrinks! (A usage sketch follows after this list.)
  • TOmniBlockingCollection is a partial clone (with some enhancements) of .NET’s BlockingCollection.
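
Here is a minimal sketch of how I expect the new TOmniQueue to be used. The unit and method names (OtlContainers, Enqueue/TryDequeue working with TOmniValue) are my assumptions – check the 1.05 sources if they differ.

uses
  OtlCommon,      // TOmniValue
  OtlContainers;  // assumed home of the dynamic TOmniQueue

var
  queue: TOmniQueue;
  value: TOmniValue;
begin
  queue := TOmniQueue.Create;
  try
    // producers (any number of threads) enqueue TOmniValue items ...
    queue.Enqueue(1);
    queue.Enqueue(2);
    // ... while consumers (any number of threads) drain the queue
    while queue.TryDequeue(value) do
      Writeln(value.AsInteger);
  finally
    queue.Free;
  end;
end;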

Have fun and happy new year to all Delphi programmers!

Friday, December 18, 2009

OmniThreadLibrary 1.04b – It’s all Embarcadero’s fault

Delphi 2010 Update 2/3 broke OmniThreadLibrary, but as this update was revoked, I didn’t look into the problem at all.

Now that Update 4/5 is out and OTL is still broken, I had no choice but to fix it. Luckily for me, ahwux did most of the work in detecting the problem and providing an (at least partial) fix.

OTL is written without resorting to ugly hacks (at least whenever possible). So what could they do to break my code?

OTL uses RTTI information to implement a ‘call by name’ mechanism. And that’s not the basic RTTI, implemented in the TypInfo unit, but the extended class RTTI from ObjAuto. [In case you want to take a peek at the code – the relevant bits can be found in the method TOmniTaskExecutor.GetMethodAddrAndSignature inside the OtlTaskControl unit.] The code checks the method signature (number of parameters, their types and the way they are passed to the method) to see if it matches one of three supported signatures.

For example, the first parameter must be the Self object, and the code checked this by testing (params^.Flags = []) and (paramType^.Kind = tkClass). This worked in Delphi 2007, 2009, and 2010 – but only in the original release and Update 1. Starting with Update 2, params^.Flags equals [pfAddress] in this case.

Similarly, constant parameters had flags [pfVar] up to D2010 Update 1, while this changed to [pfConst, pfReference] in D2010 Update 2.

I’m not against those changes. After all, the RTTI parameter description is now much more accurate. But why do they have to make this change in an update!? [Yes, I’m screaming.]

The problem here is that I can’t detect during compilation whether Update 4 has been installed. I can easily check for Delphi 2010, but that’s all – there’s no way (that I’m aware of) of detecting which update is installed. So now my code looks like this:

function VerifyObjectFlags(flags, requiredFlags: TParamFlags): boolean;
begin
  Result := ((flags * requiredFlags) = requiredFlags);
  if not Result then
    Exit;
  flags := flags - requiredFlags;
  {$IF CompilerVersion < 21}
  Result := (flags = []);
  {$ELSEIF CompilerVersion = 21}
  // Delphi 2010 original and Update 1: []
  // Delphi 2010 with Update 2 and 4: [pfAddress]
  Result := (flags = []) or (flags = [pfAddress]);
  {$ELSE} // best guess
  Result := (flags = [pfAddress]);
  {$IFEND}
end; { VerifyObjectFlags }

function VerifyConstFlags(flags: TParamFlags): boolean;
begin
  {$IF CompilerVersion < 21}
  Result := (flags = [pfVar]);
  {$ELSEIF CompilerVersion = 21}
  // Delphi 2010 original and Update 1: [pfVar]
  // Delphi 2010 Update 2 and 4: [pfConst, pfReference]
  Result := (flags = [pfVar]) or (flags = [pfConst, pfReference]);
  {$ELSE} // best guess
  Result := (flags = [pfConst, pfReference]);
  {$IFEND}
end; { VerifyConstFlags }


Ugly!



If anybody from Embarcadero is reading this: could you please refrain from making such changes in IDE updates? Thanks in advance.



Oh, I almost forgot – OTL 1.04b is available on Google Code.

Sunday, December 13, 2009

DsiWin31 1.53a

This release fixes a nasty bug (introduced in release 1.51) which caused various TDSiRegistry functions (and other DSi code using those functions) to fail on Delphi 2009/2010.

Other changes:

  • Implemented DSiDeleteRegistryValue.
  • Added parameter 'access' to the DSiKillRegistry.
  • [Mitja] Fixed allocation in DSiGetUserName.
  • [Mitja] Also catch 'error' output in DSiExecuteAndCapture.
  • DSiAddApplicationToFirewallExceptionList renamed to DSiAddApplicationToFirewallExceptionListXP.
  • Added DSiAddApplicationToFirewallExceptionListAdvanced which uses Advanced Firewall interface, available on Vista+.
  • DSiAddApplicationToFirewallExceptionList now calls either DSiAddApplicationToFirewallExceptionListXP or DSiAddApplicationToFirewallExceptionListAdvanced, depending on OS version.
  • Implemented functions to remove application from the firewall exception list: DSiRemoveApplicationFromFirewallExceptionList, DSiRemoveApplicationFromFirewallExceptionListAdvanced, DSiRemoveApplicationFromFirewallExceptionListXP.

OmniThreadLibrary 1.04a

This minor release was made mostly because of exception handling problems when the thread pool was used in version 1.04. If you’re using the thread pool feature and have OTL 1.04 installed, I strongly urge you to upgrade.

Besides the code fix I sneaked in a small API upgrade. The IOmniTask interface now defines methods RegisterWaitObject/UnregisterWaitObject, which the task can use to wait on any waitable object when using the TOmniWorker approach (no main thread loop). There’s also a new demo application, 31_WaitableObjects, which demonstrates the use of this feature.

Monday, November 30, 2009

OmniThreadLibrary patterns – Task controller needs an owner

Pop quiz. What’s wrong with this code?

CreateTask(MyWorker).Run;

It looks fine, but it doesn’t work. In most cases, running this code fragment will cause an immediate access violation.

This is a common problem amongst new OTL users. Heck, even I have fallen into this trap!

The problem here is that CreateTask returns an IOmniTaskControl interface, or task controller. This interface must be stored in some persistent location, or the task controller will be destroyed immediately after Run is called (because the reference count falls to 0).

A common solution is to just store the interface in some field.

FTaskControl := CreateTask(MyWorker).Run;

When you don’t need the background worker anymore, you should terminate the task and free the task controller.

FTaskControl.Terminate;

FTaskControl := nil;

This works for background workers with a long life span – for example, if there’s a background thread running all the time the program itself is running. But what if you are starting a short-term background task? In that case you should monitor it with TOmniEventMonitor and clean up the task controller reference in the OnTerminate event handler.

FTaskControl := CreateTask(MyWorker).MonitorWith(eventMonitor).Run;

In eventMonitor.OnTerminate:

FTaskControl := nil;

As it turns out, the event monitor stores the task controller interface in its own list, which also keeps the task controller alive. That’s why the following code works as well.

CreateTask(MyWorker).MonitorWith(eventMonitor).Run;

Since OTL v1.04 you have another possibility – write a method that frees the task controller and pass it to OnTerminated.

FTaskControl := CreateTask(MyWorker).OnTerminated(FreeTaskControl).Run;

procedure FreeTaskControl(const task: IOmniTaskControl);
begin
  FTaskControl := nil;
end;

If you’re using Delphi 2009 or 2010, you can put the cleanup code in an anonymous method.

FTaskControl := CreateTask(MyWorker).OnTerminated(
  procedure(const task: IOmniTaskControl)
  begin
    FTaskControl := nil;
  end).Run;

OnTerminated does its magic by hooking the task controller into an internal event monitor. Therefore, you can get really tricky and just write a “null” OnTerminated handler.

CreateTask(MyWorker).OnTerminated(DoNothing).Run;

procedure DoNothing(const task: IOmniTaskControl);
begin
end;

As that looks quite ugly, I added the Unobserved method just a few days before version 1.04 was released. It does essentially the same as the “null” OnTerminated approach, except that the code looks nicer and the programmer’s intentions are expressed more clearly.

CreateTask(MyWorker).Unobserved.Run;

Monday, November 23, 2009

OmniThreadLibrary 1.04

Stable release is out! Get it while it’s still hot!

Click to download!

New since 1.04 alpha:

  • Bugfixes in the thread pool code.
  • Implemented IOmniTaskControl.Unobserved behaviour modifier.
  • D2010 designtime package fixed.
  • D2009 packages and test project group updated (thanks to mghie).

New since 1.03: read full list.

Tuesday, November 17, 2009

OmniThreadLibrary 1.04 now in beta

I’ve released OTL 1.04 beta, which is functionally the same as the alpha release but contains some bug fixes. You can download it from Google Code.

1.04 final will be released on 2009-11-23, i.e. next Monday.

Friday, November 13, 2009

OmniThreadLibrary 1.04 alpha

Not yet beta as I still have to fix a few TODOs …

Get it here.

COMPATIBILITY ISSUES

  • Changed semantics in comm event notifications! When you get the 'new message' event, read all messages from the queue in a loop (see the sketch after this list)!
  • Message is passed to the TOmniEventMonitor.OnTaskMessage handler. There's no need to read from Comm queue in the handler.
  • Exceptions in tasks are now visible by default. To hide them, use IOmniTaskControl.SilentExceptions. Test 13_Exceptions was improved to demonstrate this behaviour.
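
To illustrate the first point, this is the shape of the receiving code when you handle the notification yourself (the Receive(var msg): boolean call on the Comm endpoint is my assumption; when you use the event monitor, the second point applies instead):

var
  msg: TOmniMessage;
begin
  // one 'new message' notification may cover several queued messages,
  // so drain the queue completely
  while FTaskControl.Comm.Receive(msg) do
    ProcessTaskMessage(msg); // hypothetical application-side handler
end;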

Other changes

  • Works with Delphi 2010.
  • Default communication queue size reduced to 1000 messages.
  • Support for 'wait and send' in IOmniCommunicationEndpoint.SendWait.
  • Communication subsystem implements observer pattern.
  • WideStrings can be sent over the communication channel.
  • New event TOmniEventMonitor.OnTaskUndeliveredMessage is called after the task is terminated for all messages still waiting in the message queue.
  • Implemented automatic event monitor with methods IOmniTaskControl.OnMessage and OnTerminated. Both support 'procedure of object' and 'reference to procedure' parameters.
  • New unit OtlSync contains the (old) TOmniCS and IOmniCriticalSection together with the (new) OmniMREW – a very simple and extremely fast multi-reader-exclusive-writer – and atomic CompareAndSwap functions. (A small OmniMREW sketch follows after this list.)
  • New unit OtlHooks contains API that can be used by external libraries to hook into OTL thread creation/destruction process and into exception chain.
  • All known bugs fixed.
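
And the promised OmniMREW sketch – I’m assuming the record is named TOmniMREW and uses the Enter/Exit*Lock naming, so verify against OtlSync.pas:

uses
  OtlSync;

var
  lock  : TOmniMREW; // assumed type name
  shared: integer;

procedure WriteShared(newValue: integer);
begin
  lock.EnterWriteLock;  // writers get exclusive access
  try
    shared := newValue;
  finally lock.ExitWriteLock; end;
end;

function ReadShared: integer;
begin
  lock.EnterReadLock;   // many readers can hold the lock at the same time
  try
    Result := shared;
  finally lock.ExitReadLock; end;
end;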

New demos

  • 25_WaitableComm: Demo for ReceiveWait and SendWait.
  • 26_MultiEventMonitor: How to run multiple event monitors in parallel.
  • 27_RecursiveTree: Parallel tree processing.
  • 28_Hooks: Demo for the new hook system.
  • 29_ImplicitEventMonitor: Demo for OnMessage and OnTerminated, named method approach.
  • 30_AnonymousEventMonitor: Demo for OnMessage and OnTerminated, anonymous method approach.

A teaser from demo 30

procedure TfrmAnonymousEventMonitorDemo.btnHelloClick(Sender: TObject);
begin
  btnHello.Enabled := false;
  FAnonTask := CreateTask(
    procedure (task: IOmniTask)
    begin
      task.Comm.Send(0, Format('Hello, world! Reporting from thread %d',
        [GetCurrentThreadID]));
    end,
    'HelloWorld')
    .OnMessage(
      procedure(const task: IOmniTaskControl; const msg: TOmniMessage)
      begin
        lbLog.ItemIndex := lbLog.Items.Add(Format('%d:[%d/%s] %d|%s',
          [GetCurrentThreadID, task.UniqueID, task.Name, msg.msgID,
           msg.msgData.AsString]));
      end)
    .OnTerminated(
      procedure(const task: IOmniTaskControl)
      begin
        lbLog.ItemIndex := lbLog.Items.Add(Format('[%d/%s] Terminated',
          [task.UniqueID, task.Name]));
        btnHello.Enabled := true;
        FAnonTask := nil;
      end)
    .Run;
end;

Friday, November 06, 2009

Do we need DelphiOverflow.com?

Today I was interviewed for the greatest Delphi podcast of them all and Jim asked me a question I didn’t know how to answer: “Do you think there should be a Delphi equivalent of StackOverflow.com?” I’m afraid my answer was somewhere along the lines of: “Hmph. Yes. Very good question. Very good. Let’s talk about something else.”

And now I can’t get it out of my head. Should there be a delphioverflow.com? What could we get out of it? I would be the first to admit that the StackOverflow model is the greatest thing since Belgian waffles and that having Delphi questions and answers in such a form would be very useful.

But wait – there already are Delphi questions on StackOverflow! Not as many as C# questions, but still enough that Delphi is seen on the front page and that other users can read about it and see that it is alive and well. Even more – there are enough knowledgeable Delphi programmers on SO and most questions get great answers in less than five minutes.

What other positive result could such a site bring? Maybe Embarcadero people would be more eager to participate and answer questions on their own server? Maybe, but I’m not sure. The Delphi R&D team is very busy and sometimes they can’t even find time to answer newsgroup questions. And I’m pretty sure that – whatever such a change would bring – newsgroups wouldn’t go away.

Let’s take a look from another perspective. What would be the negative consequences? Fewer Delphi questions on StackOverflow. And that’s a Bad Thing because it lowers Delphi’s discoverability. We want to talk about Delphi in public places, not on some secluded server!

Now I know how to answer. No, I don’t think we need DelphiOverflow. We need more Delphi R&D people answering questions on StackOverflow.

(Your comments on the topic are very much welcome, as always!)

Wednesday, November 04, 2009

GpStuff 1.19 & GpLists 1.43

I’ll finish my short overview of changes in various Gp units with new GpStuff and GpLists.

Let’s deal with the latter first. There were only two changes. Firstly, the Slice, Walk and WalkKV enumerators got a step parameter. Now Delphi is really as powerful as Basic!

Secondly, I’ve added the method FreeObjects to the TStringList helper. It walks the string list and frees all associated objects – something that is not done automatically in the TStringList destructor. A very useful helper, if I may say so.

procedure TGpStringListHelper.FreeObjects;
var
  iObject: integer;
begin
  for iObject := 0 to Count - 1 do begin
    Objects[iObject].Free;
    Objects[iObject] := nil;
  end;
end; { TGpStringListHelper.FreeObjects }

Changes in GpStuff were more significant.

There are new enumerator factories. EnumStrings allows you to do stuff like this:

for s in EnumStrings(['one', 'two', 'three']) do
// ...

EnumValues will do the same for integer arrays. EnumPairs is similar to EnumStrings but returns (key, value) pairs:

var
  kv: TGpStringPair;

for kv in EnumPairs(['1', 'one', '2', 'two']) do
  // kv.key = '1', kv.value = 'one'
  // kv.key = '2', kv.value = 'two'

There is also EnumList, which enumerates lists of items (where the whole list itself is a string):

for s in EnumList('one,two,"one,two,three"', ',', '"') do
  // s = 'one'
  // s = 'two'
  // s = 'one,two,three'

There were some changes in the TGp4AlignedInt internals – all values are now integer, not cardinal (because the underlying Windows implementation works with integers). There is also a new “compare and swap” (CAS) function in TGp4AlignedInt and TGp8AlignedInt64 (which was previously called TGp8AlignedInt).
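
To show what CAS buys you, here is a hedged sketch of a lock-free “increment, but don’t exceed a limit” helper. I’m assuming the signature CAS(oldValue, newValue: integer): boolean and a Value property that reads the current content – check GpStuff.pas if they differ.

uses
  GpStuff;

var
  counter: TGp4AlignedInt;

function IncrementBelow(limit: integer): boolean;
var
  old: integer;
begin
  Result := false;
  repeat
    old := counter.Value;          // read the current value
    if old >= limit then
      Exit;                        // give up without modifying the counter
  until counter.CAS(old, old + 1); // retry if another thread changed it meanwhile
  Result := true;
end;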

Finally, there are a new interface and class – IGpTraceable and TGpTraceable.

type
  IGpTraceable = interface(IInterface)
    function GetTraceReferences: boolean; stdcall;
    procedure SetTraceReferences(const value: boolean); stdcall;
    function _AddRef: integer; stdcall;
    function _Release: integer; stdcall;
    function GetRefCount: integer; stdcall;
    property TraceReferences: boolean read GetTraceReferences write SetTraceReferences;
  end; { IGpTraceable }

  TGpTraceable = class(TInterfacedObject, IGpTraceable)
  private
    gtTraceRef: boolean;
  public
    destructor Destroy; override;
    function _AddRef: integer; stdcall;
    function _Release: integer; stdcall;
    function GetRefCount: integer; stdcall;
    function GetTraceReferences: boolean; stdcall;
    procedure SetTraceReferences(const value: boolean); stdcall;
    property TraceReferences: boolean read GetTraceReferences write SetTraceReferences;
  end; { TGpTraceable }

The TGpTraceable class helps me debug interface problems. It exposes a GetRefCount function which returns the reference count, and it can trigger a debugger interrupt on each reference count change if the TraceReferences property is set.

function TGpTraceable._AddRef: integer;
begin
  Result := inherited _AddRef;
  if gtTraceRef then
    asm int 3; end;
end; { TGpTraceable._AddRef }

function TGpTraceable._Release: integer;
begin
  if gtTraceRef then
    asm int 3; end;
  Result := inherited _Release;
end; { TGpTraceable._Release }

Monday, November 02, 2009

Read prefetch in GpHugeFile

There is only one big change in the latest GpHugeFile – read prefetch. Most people won’t need it at all and other will only need it occasionally, but for some people, sometimes, it will be a life saver.

The prefetch option is only useful when you read a file mostly sequentially from relatively slow media. Useless? You never did that before? Have you ever played a video file from a network server or from YouTube? Well, there you are!

Playing video files (especially HD) over a network is not a trivial task. On some occasions (namely, slow networks or high-bitrate files) the network speed is only slightly above the minimum required for seamless video playback. Even more – the network speed is not constant, because you share it with other users, and at times it may not be high enough to play the video without stuttering.

To solve this problem, video players use prefetch (or read-ahead) – they read more data than required and use this buffer when the network slows down. Better said – the video always plays from this buffer, but the buffer size varies depending on the current network speed.

So how’s this typically done? One way is with a background thread that sequentially reads through the file and buffers the data; another is with asynchronous read operations. This very powerful approach is part of the standard ReadFileEx Win32 API and is relatively easy to use – you just start the read operation and some time later the system notifies you that the data is available. There are some problems, though, the biggest of them being the requirement that your reading thread must be in a special alertable sleep state for this notification to occur.
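
To make the “alertable sleep” requirement concrete, here is a minimal Win32 sketch (plain API calls, not GpHugeFile code): the completion routine only runs while the thread that issued ReadFileEx sits in an alertable wait such as SleepEx(…, True). The explicit import just keeps the sketch independent of how a particular Delphi version declares ReadFileEx in Windows.pas.

uses
  Windows, SysUtils;

type
  // Win32 I/O completion callback (LPOVERLAPPED_COMPLETION_ROUTINE)
  TIoCompletionRoutine = procedure(dwErrorCode, dwNumberOfBytesTransfered: DWORD;
    lpOverlapped: POverlapped); stdcall;

function ReadFileEx(hFile: THandle; lpBuffer: Pointer; nNumberOfBytesToRead: DWORD;
  lpOverlapped: POverlapped; lpCompletionRoutine: TIoCompletionRoutine): BOOL; stdcall;
  external 'kernel32.dll' name 'ReadFileEx';

var
  GBuffer: array [0..65535] of byte;

procedure ReadCompleted(dwErrorCode, dwNumberOfBytesTransfered: DWORD;
  lpOverlapped: POverlapped); stdcall;
begin
  // runs only while the issuing thread is in an alertable wait;
  // GBuffer[0 .. dwNumberOfBytesTransfered - 1] now holds the data
end;

procedure ReadAheadOnce(hFile: THandle);
var
  ovl: TOverlapped;
begin
  FillChar(ovl, SizeOf(ovl), 0); // read from file offset 0
  if not ReadFileEx(hFile, @GBuffer, SizeOf(GBuffer), @ovl, ReadCompleted) then
    RaiseLastOSError;
  SleepEx(INFINITE, True); // alertable sleep; the completion routine is delivered here
end;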

The third option is not to use threads or async file ops, but to pass the hfoPrefetch and hfoBuffered flags to ResetEx. In the same call you can also set the number of prefetched buffers. As for the buffer size – it is also settable with a ResetEx parameter and will be rounded up to the next multiple of the system page size (an async file I/O requirement), or it will be set to 64 KB if you leave the parameter at 0.

When you set hfoPrefetch, TGpHugeFile creates a background thread and this thread issues asynchronous file I/O calls. Prefetched data is stored in a cache which is shared between the worker thread and the owner. Unfortunately for some, this option is only available in Delphi 2007 and newer because the worker object is implemented using OmniThreadLibrary.

Maybe you’re wondering why the thread is not issuing normal synchronous reads? For two reasons – I didn’t want the thread to block reading data when the owner executes a Seek (file repositioning will immediately tell the prefetcher that it should start reading from a different file offset) and I wanted to issue multiple read commands at the same time (namely 2).

Enough talk – if you want to learn more, look at the code. I’ll only give you the simplest possible demo:

program Project12;

{$APPTYPE CONSOLE}

uses
  SysUtils,
  GpHugeF;

var
  hf        : TGpHugeFile;
  buf       : array [1..65536] of byte;
  bytesRead : cardinal;
  bytesTotal: int64;

begin
  hf := TGpHugeFile.Create(ParamStr(1));
  try
    if hf.ResetEx(1, 0, 0, 0, [hfoBuffered, hfoPrefetch]) <> hfOK then
      Writeln('Fail!')
    else begin
      bytesTotal := 0;
      repeat
        hf.BlockRead(buf, SizeOf(buf), bytesRead);
        Inc(bytesTotal, bytesRead);
      until bytesRead = 0;
      Writeln('Total bytes read: ', bytesTotal);
    end;
  finally FreeAndNil(hf); end;
  Readln;
end.

Wednesday, October 28, 2009

GpHugeFile v6 and other updates

Today I finally published all my updated open source units. Some were recently modified and for others the update was long overdue. Some were only slightly modified, in others the changes were bigger. Read through the list below and you’ll see.

In future posts I intend to write some more about changes in GpHugeFile, GpStuff, and GpLists. If you want to know more about changes in other units, drop a comment.

GpHugeFile 6.01

  • Implemented read prefetch, activated by setting hfoPrefetch option flag.
  • Number of buffers to prefetch can be set with a ResetEx parameter.

Unfortunately for some, this option is only available in Delphi 2007 and newer because the worker object is implemented using OmniThreadLibrary.

GpVersion 2.04

  • Updated for Delphi 2009.
  • Extended IVersion interface with IsNotHigherThan, IsNotLowerThan and IsEqualTo.

GpTextFile 4.02

  • Compatible with Delphi 2009.

GpSync 1.22

  • Implemented TGpSWMR.AttachToThread.
  • Added internal check to ensure that TGpSWMR.WaitToRead/WaitToWrite/Done are called from one thread only.

GpStuff 1.19

  • Added EnumPairs string array enumerator.
  • Added EnumList string enumerator.
  • Added EnumStrings enumerator.
  • InterlockedIncrement/InterlockedDecrement deal with integers, therefore TGp4AlignedInt.Increment/Decrement must return integers. All other functions in TGp4AlignedInt also changed to work with integers.
  • Implemented function CAS (compare and swap) in TGp4AlignedInt and TGp8AlignedInt64 records.
  • TGp8AlignedInt renamed to TGp8AlignedInt64.
  • TGp8AlignedInt.Addr must be PInt64, not PCardinal.
  • Implemented IGpTraceable interface.

GpStructuredStorage 2.0a

  • [Erik Berry] Definition of fmCreate in Delphi 2010 has changed and code had to be adjusted.

GpStreams 1.25b

  • Safer TGpFixedMemoryStream.Read.
  • Added setter for TGpFixedMemoryStream.Position so that invalid positions raise exception.

GpSharedMemory 4.12

  • Compatible with Delphi 2009.

GpLists 1.43

  • Added parameter 'step' to various Slice(), Walk() and WalkKV() enumerators.
  • Added method FreeObjects to the TStringList helper.

Tuesday, October 27, 2009

And the award goes to …

Daniel R. Wolf of the Delphi PRAXiS.

Not the official Spirit of Delphi award, but the Delphi Legends Community Award, organized by Wings of Wind (who are those people???).

Anyway, somehow I got nominated and even scraped together fifth place, which is definitely more than I deserve, but still – thanks for the recognition, folks!

Such things make one work harder and publish more, so you can definitely expect plenty of articles on this blog in the near future …

Sunday, October 25, 2009

TDM Rerun #14: A Portable XML

The code unit, OmniXML.pas, which contains the XML representation interfaces, parser and writer, was written by a single programmer, Miha Remec (he is also the guy behind the www.omnixml.com website). He started writing it in 2000, because he was missing a native Delphi DOM parser, one that would represent the DOM the same way as it was designed. The best Delphi parser around at that time was OpenXML, but it used classes to represent XML elements, not interfaces. OmniXML uses interfaces, derived from the IXMLNode (as specified by the DOM). That also makes it almost completely compatible with the MSXML parser, which uses the same approach.

- A Portable XML, The Delphi Magazine 105, May 2004

In the May 2004 issue I wrote about OmniXML, a native Delphi XML parser. I described the OmniXML approach and wrote a few short pieces of code that demonstrated its use.

Today, five years later, OmniXML is still going strong and I’m still using it, as you can see in my Fluent XML series.

Links: article (PDF, 45 KB), source code (ZIP, 795 KB).

Thursday, October 22, 2009

DSiWin32 1.51

It’s been some time since I last updated my open source units… For example, DSiWin32, a collection of Win32 API helpers, was last updated in August 2008! Bad me!

Time to do some housecleaning, then. Let’s see what’s new in DSiWin32 1.51.

There are new dynamic forwarders: DSiWow64DisableWow64FsRedirection, DSiWow64RevertWow64FsRedirection, DSiGetTickCount64 and DSiGlobalMemoryStatusEx. They call the appropriate API functions if they are available and return an error on older Windows systems. [Click on the function name to see the API specification on MSDN.]

The function DSiGetGlobalMemoryStatus is not much more than a wrapper around the GlobalMemoryStatusEx API and returns information about the global memory status (paging, virtual memory etc.). Information is returned in a TMemoryStatusEx record, which is also defined in the DSiWin32 unit.
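
A quick usage sketch – I’m assuming the wrapper takes a var TMemoryStatusEx and returns a boolean; check DSiWin32.pas for the exact declaration (the record fields mirror the Win32 MEMORYSTATUSEX structure):

uses
  DSiWin32;

var
  memStatus: TMemoryStatusEx;
begin
  // assumed signature: function DSiGetGlobalMemoryStatus(var status: TMemoryStatusEx): boolean
  if DSiGetGlobalMemoryStatus(memStatus) then begin
    Writeln('Memory load: ', memStatus.dwMemoryLoad, '%');
    Writeln('Total physical memory: ', memStatus.ullTotalPhys div (1024 * 1024), ' MB');
  end;
end;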

We implemented six new functions in the files section. DSiGetNetworkResource converts a drive letter (created by mapping a network location) back to the network path. DSiDisconnectFromNetworkResource disconnects a drive letter from a network resource. [DSiConnectToNetworkResource was included in the previous public version 1.41.] DSiGetSubstDrive maps a drive letter (created with the Subst command) to the associated folder and DSiGetSubstPath does the same for a file path that starts with a subst’ed letter. DSiDisableWow64FsRedirection disables file system redirection (mapping from \Windows\System32 to \Windows\SysWOW64 for 32-bit applications on 64-bit systems) for the current thread and DSiRevertWow64FsRedirection reverts this change.

There are also two new install functions. DSiAddApplicationToFirewallExceptionList adds an application to the firewall exception list and DSiAddPortToFirewallExceptionList does the same for a TCP/IP port.

DSiGetCurrentThreadHandle and DSiGetCurrentProcessHandle return (true) handles for the current thread and process. [In contrast to the GetCurrentThread and GetCurrentProcess APIs, which return a pseudo handle that cannot be used outside of the current thread/process context.]

DSiGetWindowsVersion was extended to detect Windows Server 2008, Windows 7 and Windows Server 2008 R2. DSiGetTrueWindowsVersion was also upgraded to return “Windows Server 2008 or Vista SP1” and “Windows 7 or Server 2008 R2”. It looks like it is not possible to discriminate between those operating systems at the API level :( The record TOSVersionInfoEx was also defined, as it is used in DSiGetWindowsVersion.

An 'access' parameter was added to the DSiWriteRegistry methods so that the user can request writing to the non-virtualized key when running on a 64-bit system (KEY_WOW64_64KEY).

DSiExecuteAndCapture got a much-deserved overhaul. The caller can now be informed of each line output by the child process.

Delphi 2009/2010 compatibility was fixed for DSiGetFolderLocation, DSiGetNetworkResource, DSiGetComputerName, DSiGetWindowsFolder and DSiExecuteAndCapture, and bugs were fixed in DSiGetTempFileName and DSiGetUserName.

All in all, lots of things were changed and improved. If you’re already using DSiWin32 then this is a good time to upgrade. [And if not, start using it!]

Saturday, October 17, 2009

Open source Computer Vision library

Memo to self: When you play with computer vision next time, check the OpenCV library.

From the OpenCV www:

“OpenCV (Open Source Computer Vision) is a library of programming functions for real time computer vision. Example applications of the OpenCV library are Human-Computer Interaction (HCI); Object Identification, Segmentation and Recognition; Face Recognition; Gesture Recognition; Camera and Motion Tracking, Ego Motion, Motion Understanding; Structure From Motion (SFM); Stereo and Multi-Camera Calibration and Depth Computation; Mobile Robotics.”

Found via IPhone Sudoku Grab via Julian M Bucknall.

Apparently there is no Delphi interface (yet), but as the DLL has a simple C interface, such an interface could easily (or “easily”?) be implemented.
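
For illustration only, this is the general shape such a binding would take in Delphi – the function and DLL names below are placeholders, not the real OpenCV exports:

type
  PCvImage = Pointer; // opaque handle; the real structure comes from the OpenCV headers

// placeholder declaration – substitute a real export and DLL name from the OpenCV C API
function cv_LoadImagePlaceholder(fileName: PAnsiChar; isColor: integer): PCvImage; cdecl;
  external 'opencv_placeholder.dll';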

Tuesday, October 06, 2009

Fluent XML [3]

A few days ago I was answering an OmniXML-related question on the Slovenian Delphi forum and I tried using GpFluentXML to provide the answer. The emphasis is on “tried” – my unit was just not powerful enough to be useful in this case.

So what else could I do besides improving GpFluentXML?

The modifications were very small. I added two overloads (AddChild and AddSibling) which allow setting the node value. The implementation is as trivial as everything else inside the fluent XML unit.

function TGpFluentXmlBuilder.AddChild(const name: XmlString;
  value: Variant): IGpFluentXmlBuilder;
begin
  Result := AddChild(name);
  SetTextChild(fxbActiveNode, XMLVariantToStr(value));
end; { TGpFluentXmlBuilder.AddChild }

(And similar for AddSibling.)

Now I (and you and everybody else) can write such code:

var
  data      : TData;
  xmlBuilder: IGpFluentXmlBuilder;
begin
  xmlBuilder := CreateFluentXml
    .AddProcessingInstruction('xml', 'version="1.0" encoding="UTF-8"')
    .AddChild('DataTransfer')
      ['xsi:noNamespaceSchemaLocation', 'http://www.tempuri/schema.xsd']
      ['xmlns:xsi', 'http://www.w3.org/2001/XMLSchema-instance']
    .AddChild('Dokument');
  for data in GetData do begin
    xmlBuilder
      .Mark
        .AddChild('ID', data.ID)
        .AddSibling('DocumentInfo')
          .AddChild('GeneratedBy', data.Originator)
          .AddSibling('Title', data.Title)
      .Return;
  end;
end;

A non-obvious trick – .Mark and .Return are used inside the loop to store/restore the proper level at which the child nodes must be inserted.

Full GpFluentXML source is available at http://17slon.com/blogs/gabr/files/GpFluentXml.pas.

Tuesday, September 29, 2009

Unhibernating – with a T-shirt

Dear reader,

I’m fully aware that I was silent for a long long time.

That’s how life goes. Sometimes you have time to think and write, and sometimes software has bugs, children are growing, the house is telling you that it shouldn’t be in “fixer-upper” mode anymore and the body starts warning you that the warranty has expired. Plus the days suddenly have only 24 hours and not 28 as was customary for the last ten years.

In short – I had no time. Life was going on. And around. And over me. Especially over me.

Luckily, everything is turning out fine and I will again write about my Delphi adventures.

Today I visited the RAD Studio 2010 presentation organized by Embarcadero and Slovenian Delphi dealer, Marand. Mark Barrington and Pawel Glowacki were showing RAD Studio 2010, Embarcadero’s database tools and All-Access. Sadly, I only had time to attend the Delphi session which was very good and informative. (Thumbs up, Pawel!)

There were also T-shirts and I was lucky to get one.


Nice T-shirt but it’s even better if you look closer at the code.


A parallel Eratosthenes sieve! That’s definitely code I can appreciate!

[For the OTL fans – OTL is alive and well and I will publish next version sometime in October. And then I’ll write the documentation. Promise.]

Thursday, April 30, 2009

TDM Rerun #13: Shared Events, Part 2: Redesign

Now we can already guess where the general sluggishness of the shared event system comes from. The trouble lies in the constant XML loading and saving. Most of the shared event system tables are quite static, but that doesn’t hold for the Event Queue table, where new entries are inserted, modified and deleted all the time.

- Shared Events, Part 2: Redesign, The Delphi Magazine 102, February 2004

My second article on the shared events architecture first addressed speed issues (the original code was quite slow), then discussed the internals of shared counters, shared linked lists and shared tables (all of which are used in the shared events system) and at the end returned to fine-tuning by fixing some remaining speed issues. As you can expect, the basis for the tuning was hard data from the profiler, not some wave-of-the-hand ideas about where the problem might lie.

Links: article (PDF, 99 KB), source code (ZIP, 1,9 MB), current version.

TDM Rerun #12: Shared Events

Shared event system, as I nicknamed this approach, is implemented as a shared set of in-memory tables, which are accessed and manipulated by producers and listeners. The important part is that there is no dedicated server: housekeeping is distributed between the producers and listeners.

- Shared Events, The Delphi Magazine 97, September 2003

The shared events mechanism was definitely the most complicated shared-memory-based architecture I ever put together. The system allowed multiple programs (running on the same computer) to cooperate using an event-based system. The first program would publish an event (or more events) and others would subscribe to those events. The first program would then broadcast the event, which would trigger notifications in all subscribed programs. The best trick was that there was no privileged part – no server, service or manager. Publishers and consumers shared all the work – tables were created as needed, housekeeping was done on both sides and so on.

The underlying architecture was largely redesigned after this article was published. The original source files are included only for historical reference.

Links: article (PDF, 496 KB), source code (ZIP, 1,9 MB), current version.

Thursday, April 02, 2009

Fluent XML [2]

Yesterday I described my approach to more fluent XML writing. Today I’ll describe the GpFluentXML unit where the ‘fluid’ implementation is stored. If you skipped yesterday’s post, you’re strongly encouraged to read it now.

Let’s start with the current version of the fluent XML builder interface, which is not completely identical to yesterday’s version.

Interface

uses
  OmniXML_Types,
  OmniXML;

type
  IGpFluentXmlBuilder = interface ['{91F596A3-F5E3-451C-A6B9-C5FF3F23ECCC}']
    function GetXml: IXmlDocument;
    //
    function AddChild(const name: XmlString): IGpFluentXmlBuilder;
    function AddComment(const comment: XmlString): IGpFluentXmlBuilder;
    function AddProcessingInstruction(const target, data: XmlString): IGpFluentXmlBuilder;
    function AddSibling(const name: XmlString): IGpFluentXmlBuilder;
    function Anchor(var node: IXMLNode): IGpFluentXmlBuilder;
    function Mark: IGpFluentXmlBuilder;
    function Return: IGpFluentXmlBuilder;
    function SetAttrib(const name, value: XmlString): IGpFluentXmlBuilder;
    function Up: IGpFluentXmlBuilder;
    property Attrib[const name, value: XmlString]: IGpFluentXmlBuilder
      read SetAttrib; default;
    property Xml: IXmlDocument read GetXml;
  end; { IGpFluentXmlBuilder }

function CreateFluentXml: IGpFluentXmlBuilder;

function CreateFluentXml: IGpFluentXmlBuilder;

The fluent XML builder is designed around the concept of the active node, which represents the point where changes are made. When you call the factory function CreateFluentXml, it creates a new IXMLDocument interface and sets the active node to this interface (IXMLDocument is an IXMLNode, so that is not a problem). When you call other functions, the active node may or may not change, depending on the function.

AddProcessingInstruction and AddComment just create a processing instruction (the <?xml … ?> line at the beginning of the XML document) or a comment and don’t affect the active node.

AddChild creates a new XML node and makes it a child of the current active node.

Up sets the active node to the parent of the active node. Unless, of course, the active node is already at the topmost level, in which case it raises an exception. In yesterday’s post this method was called Parent.

AddSibling creates a new XML node and makes it a child of the current active node’s parent. In other words, AddSibling is a shorter version of Up followed by AddChild.

SetAttrib, or its shorthand, the default property Attrib, sets the value of an attribute.

Mark and Return are always used in pairs. Mark pushes the active node onto the top of the builder’s internal stack. Return pops a node from the top of the stack and sets it as the active node. Yesterday this pair was named Here/Back.

Anchor copies the active node into its parameter. That allows you to generate the template code with the fluent XML and store a few nodes for later use. Then you can use those nodes to insert programmatically generated XML at those points.

At the end, there’s the Xml property, which returns the internal IXMLDocument interface, the one that was created in CreateFluentXml.

And now let’s move to the implementation.

Implementation

The class TGpFluentXmlBuilder implements the IGpFluentXmlBuilder interface. In addition to the methods from this interface, it declares the function ActiveNode and three fields – fxbActiveNode stores the active node, fxbMarkedNodes is a stack of nodes stored with the Mark method and fxbXmlDoc is the XML document.

type
  TGpFluentXmlBuilder = class(TInterfacedObject, IGpFluentXmlBuilder)
  strict private
    fxbActiveNode : IXMLNode;
    fxbMarkedNodes: IInterfaceList;
    fxbXmlDoc     : IXMLDocument;
  strict protected
    function ActiveNode: IXMLNode;
  protected
    function GetXml: IXmlDocument;
  public
    constructor Create;
    destructor Destroy; override;
    function AddChild(const name: XmlString): IGpFluentXmlBuilder;
    function AddComment(const comment: XmlString): IGpFluentXmlBuilder;
    function AddProcessingInstruction(const target, data: XmlString): IGpFluentXmlBuilder;
    function AddSibling(const name: XmlString): IGpFluentXmlBuilder;
    function Anchor(var node: IXMLNode): IGpFluentXmlBuilder;
    function Mark: IGpFluentXmlBuilder;
    function Return: IGpFluentXmlBuilder;
    function SetAttrib(const name, value: XmlString): IGpFluentXmlBuilder;
    function Up: IGpFluentXmlBuilder;
  end; { TGpFluentXmlBuilder }

Some functions are pretty much trivial – one line to execute the action and another to return Self, so that another fluent XML action can be chained onto the result of the function. Of course, some of those functions are simple because they use wrappers from the OmniXMLUtils unit, not the MS-compatible OmniXML.pas. [By the way, you can download OmniXML at www.omnixml.com.]

function TGpFluentXmlBuilder.AddChild(const name: XmlString): IGpFluentXmlBuilder;
begin
  fxbActiveNode := AppendNode(ActiveNode, name);
  Result := Self;
end; { TGpFluentXmlBuilder.AddChild }

function TGpFluentXmlBuilder.AddComment(const comment: XmlString): IGpFluentXmlBuilder;
begin
  ActiveNode.AppendChild(fxbXmlDoc.CreateComment(comment));
  Result := Self;
end; { TGpFluentXmlBuilder.AddComment }

function TGpFluentXmlBuilder.AddProcessingInstruction(const target, data: XmlString):
  IGpFluentXmlBuilder;
begin
  ActiveNode.AppendChild(fxbXmlDoc.CreateProcessingInstruction(target, data));
  Result := Self;
end; { TGpFluentXmlBuilder.AddProcessingInstruction }

function TGpFluentXmlBuilder.AddSibling(const name: XmlString): IGpFluentXmlBuilder;
begin
  Result := Up;
  fxbActiveNode := AppendNode(ActiveNode, name);
end; { TGpFluentXmlBuilder.AddSibling }

function TGpFluentXmlBuilder.GetXml: IXmlDocument;
begin
  Result := fxbXmlDoc;
end; { TGpFluentXmlBuilder.GetXml }

function TGpFluentXmlBuilder.Mark: IGpFluentXmlBuilder;
begin
  fxbMarkedNodes.Add(ActiveNode);
  Result := Self;
end; { TGpFluentXmlBuilder.Mark }

function TGpFluentXmlBuilder.Return: IGpFluentXmlBuilder;
begin
  fxbActiveNode := fxbMarkedNodes.Last as IXMLNode;
  fxbMarkedNodes.Delete(fxbMarkedNodes.Count - 1);
  Result := Self;
end; { TGpFluentXmlBuilder.Return }

function TGpFluentXmlBuilder.SetAttrib(const name, value: XmlString): IGpFluentXmlBuilder;
begin
  SetNodeAttrStr(ActiveNode, name, value);
  Result := Self;
end; { TGpFluentXmlBuilder.SetAttrib }

OK, Return has three lines, not two. That makes it medium complicated :)

In fact, Up is also very simple, except that it checks the validity of the active node before returning its parent.

function TGpFluentXmlBuilder.Up: IGpFluentXmlBuilder;
begin
  if not assigned(fxbActiveNode) then
    raise Exception.Create('Cannot access a parent at the root level')
  else if fxbActiveNode = DocumentElement(fxbXmlDoc) then
    raise Exception.Create('Cannot create a parent at the document element level')
  else
    fxbActiveNode := ActiveNode.ParentNode;
  Result := Self;
end; { TGpFluentXmlBuilder.Up }

A little more trickery is hidden inside the ActiveNode helper function. It returns the active node when it is set; if not, it returns the XML document’s document element, or the XML doc itself if the document element is not set. I don’t think the second option (document element) can ever occur. That part is just there to future-proof the code.

function TGpFluentXmlBuilder.ActiveNode: IXMLNode;
begin
  if assigned(fxbActiveNode) then
    Result := fxbActiveNode
  else begin
    Result := DocumentElement(fxbXmlDoc);
    if not assigned(Result) then
      Result := fxbXmlDoc;
  end;
end; { TGpFluentXmlBuilder.ActiveNode }

Believe it or not, that’s all. The whole GpFluentXml unit with comments and everything is only 177 lines long.

Full GpFluentXML source is available at http://17slon.com/blogs/gabr/files/GpFluentXml.pas.

Wednesday, April 01, 2009

Fluent XML [1]

A few days ago I was writing a very boring piece of code that had to generate some XML document. It was full of function calls that created nodes in the XML document and set attributes. Boooooring stuff. But even worse than that – the structure of the XML document was totally lost in the code. It was hard to tell which node was a child of which and how it was all structured.

Then I did what every programmer does when he/she should write some boring code – I wrote a tool to simplify the process. [That process usually takes more time than the original approach but at least it is interesting ;) .]

I started by writing the end code. In other words, I started thinking about how I wanted to create this XML document at all. I quickly decided on the fluent interface approach. I used it in the OmniThreadLibrary, where it proved to be quite useful.

That’s how the first draft looked (actually, it was much longer, but this is the important part):

xmlWsdl := CreateFluentXml
  .AddProcessingInstruction('xml', 'version="1.0" encoding="UTF-8"')
  .AddChild('definitions')
  .SetAttr('xmlns', 'http://schemas.xmlsoap.org/wsdl/')
  .SetAttr('xmlns:xs', 'http://www.w3.org/2001/XMLSchema')
  .SetAttr('xmlns:soap', 'http://schemas.xmlsoap.org/wsdl/soap/')
  .SetAttr('xmlns:soapenc', 'http://schemas.xmlsoap.org/soap/encoding/')
  .SetAttr('xmlns:mime', 'http://schemas.xmlsoap.org/wsdl/mime/');

This short fragment looks quite nice, but in the full version (about 50 lines) all those SetAttr calls visually merged with the AddChild calls and the result was still unreadable (although shorter than the original code with explicit calls to the XML interface).

My first idea was to merge at least some SetAttr calls into AddChild by introducing two versions – one which takes only a node name and another which takes a node name, an attribute name and an attribute value – but that didn’t help the code at all. Even worse – it was hard to see which AddChild calls were setting attributes and which were not :(

That got me started in a new direction. If the main problem is visual clutter, I had to do something to make setting attributes stand out. Briefly I considered a complicated scheme which would use smart records and operator overloading, but I couldn’t imagine XML-creating code which would use operators and be more readable than this, so I rejected that approach. [It may still be a valid approach – it’s just that I cannot make it work in my head.]

Then I thought about arrays. In “classical” code I could easily add array-like support to attributes so that I could write xmlNode[attrName] := ‘some value’, but how can I make this conform to my fluent architecture?

To get or not to get

In order to be able to chain anything after the [], the indexed property hiding behind it must return Self, i.e. the same interface it lives in. And because I want to use attribute name/value pairs, this property has to have two indices.

property Attrib[const name, value: XmlString]: IGpFluentXmlBuilder 
read GetAttrib; default;

That would allow me to write such code:

.AddSibling('service')['name', serviceName]
  .AddChild('port')
    ['name', portName]
    ['binding', 'fs:' + bindingName]
    .AddChild('soap:address')['location', serviceLocation];

As you can see, attributes can be chained and I can write attribute assignment in the same line as node creation and it is still obvious which is which and who is who.

But … assignment? In a getter? Why not! You can do anything in a property getter. To make this more obvious, my code calls this ‘getter’ SetAttrib. As a nice side effect, SetAttrib is completely the same as it was defined in the first draft and can even be used instead of the [] approach.

I’ll end today’s instalment with the complete 'fluent xml builder’ interface and with sample code that uses this interface to build an XML document. Tomorrow I’ll wrap things up by describing the interface and its implementation in all boring detail.

type
  IGpFluentXmlBuilder = interface ['{91F596A3-F5E3-451C-A6B9-C5FF3F23ECCC}']
    function GetXml: IXmlDocument;
    //
    function Anchor(var node: IXMLNode): IGpFluentXmlBuilder;
    function AddChild(const name: XmlString): IGpFluentXmlBuilder;
    function AddComment(const comment: XmlString): IGpFluentXmlBuilder;
    function AddSibling(const name: XmlString): IGpFluentXmlBuilder;
    function AddProcessingInstruction(const target, data: XmlString): IGpFluentXmlBuilder;
    function Back: IGpFluentXmlBuilder;
    function Here: IGpFluentXmlBuilder;
    function Parent: IGpFluentXmlBuilder;
    function SetAttrib(const name, value: XmlString): IGpFluentXmlBuilder;
    property Attrib[const name, value: XmlString]: IGpFluentXmlBuilder
      read SetAttrib; default;
    property Xml: IXmlDocument read GetXml;
  end; { IGpFluentXmlBuilder }
 
  xmlWsdl := CreateFluentXml
    .AddProcessingInstruction('xml', 'version="1.0" encoding="UTF-8"')
    .AddChild('definitions')
      ['xmlns', 'http://schemas.xmlsoap.org/wsdl/']
      ['xmlns:xs', 'http://www.w3.org/2001/XMLSchema']
      ['xmlns:soap', 'http://schemas.xmlsoap.org/wsdl/soap/']
      ['xmlns:soapenc', 'http://schemas.xmlsoap.org/soap/encoding/']
      ['xmlns:mime', 'http://schemas.xmlsoap.org/wsdl/mime/']
      ['name', serviceName]
      ['xmlns:ns1', 'urn:' + intfName]
      ['xmlns:fs', 'http://online.com/soap/']
      ['targetNamespace', 'http://online.com/soap/']
    .AddChild('message')['name', 'fs:' + baseName + 'Request'].Anchor(nodeRequest)
    .AddSibling('message')['name', 'fs:' + baseName + 'Response'].Anchor(nodeResponse)
    .AddSibling('portType')['name', baseName]
    .Here
      .AddChild('operation')['name', baseName]
        .AddChild('input')['message', 'fs:' + baseName + 'Request']
        .AddSibling('output')['message', 'fs:' + baseName + 'Response']
    .Back
    .AddSibling('binding')
    .Here
      ['name', bindingName]
      ['type', 'fs:' + intfName]
      .AddChild('soap:binding')
        ['style', 'rpc']
        ['transport', 'http://schemas.xmlsoap.org/soap/http']
        .AddChild('operation')['name', baseName]
          .AddChild('soap:operation')
            ['soapAction', 'urn:' + baseName]
            ['style', 'rpc']
          .AddSibling('input')
            .AddChild('soap:body')
              ['use', 'encoded']
              ['encodingStyle', 'http://schemas.xmlsoap.org/soap/encoding/']
              ['namespace', 'urn:' + intfName + '-' + baseName]
          .Parent
          .AddSibling('output')
            .AddChild('soap:body')
              ['use', 'encoded']
              ['encodingStyle', 'http://schemas.xmlsoap.org/soap/encoding/']
              ['namespace', 'urn:' + intfName + '-' + baseName]
    .Back
    .AddSibling('service')['name', serviceName]
      .AddChild('port')
        ['name', portName]
        ['binding', 'fs:' + bindingName]
        .AddChild('soap:address')['location', serviceLocation];

What do you think? Does my approach make any sense?

Tuesday, February 10, 2009

OmniThreadLibrary 1.03

OmniThreadLibrary 1.03 was silently released two days ago. Even without the announcement, it has been downloaded 133 times so far. Awesome!

The main new feature is per-thread initialized data in the thread pool. That allows you to create a connection pool with OTL. There’s a simple demo included in the distribution (24_ConnectionPool). I wrote a few words about it yesterday.

No bugs were fixed, so you don’t have to upgrade if you don’t need the new thread pool functionality.

As usual, you can get it via SVN or as a ZIP archive.


Monday, February 09, 2009

Building a connection pool

Recently, an OTL user asked me in the forum how to build a connection pool with the OTL. The answer, at the time, was – not possible. There was a crucial component missing.

It turned out that implementing thread-global data was not really hard to do, so here it is – a tutorial on how to build a connection pool with the OTL (also included in the latest release as demo 24_ConnectionPool). To run this code you’ll need OTL 1.03.

Let’s say we want to build a pool of some entities that take some time to initialize (database connections, for example). In a traditional sense, one would build a list of objects managing those entities and would then allocate them to the threads running the code. In practice, we can run into a big problem if such entities expect to always run from the thread in which they were created. (I had such a problem once with TDBIB and Firebird Embedded.) To solve this, we would have to associate entities with threads and we would also have to monitor the thread lifecycle (to deallocate entities when a thread is terminated).

With OTL, the logic is reversed. Threads are managed by the thread pool and there is no need for us to create/destroy them. We just create a task and submit it to the thread pool. The thread pool initializes the pool entity (a database connection), associates it with a thread and passes it to all tasks that run in that thread so that they can use it.

Furthermore, this solution allows you to use all the functionality of the OTL thread pool. You can set the maximum number of concurrent tasks, the idle thread timeout, the maximum time a task will wait for execution, and more.

So let’s see how we can code this in the OTL. All code was extracted from the demo 24_ConnectionPool.

Connection pool demo

In the OnCreate event the code creates a thread pool, assigns it a name and a thread data factory. The latter is a function that will create and initialize a new connection for each new thread. In the OnClose event the code terminates all waiting tasks (if any), allowing the application to shut down gracefully. FConnectionPool is an interface and its lifetime is managed automatically, so we don’t have to do anything explicit with it.

procedure TfrmConnectionPoolDemo.FormCreate(Sender: TObject);
begin
  FConnectionPool := CreateThreadPool('Connection pool');
  FConnectionPool.ThreadDataFactory := CreateThreadData;
end;

procedure TfrmConnectionPoolDemo.FormClose(Sender: TObject; var Action: TCloseAction);
begin
  FConnectionPool.CancelAll;
end;

The magic CreateThreadData factory just creates a connection object (which would, in a real program, establish a database connection, for example).

function CreateThreadData: IInterface;
begin
  Result := TConnectionPoolData.Create;
end;

There’s no black magic behind this connection object. It is an object which implements an interface. Any interface. This interface will be used only in your code. In this demo, TConnectionPoolData contains only one field – a unique ID, which will help us follow the program execution.

type
  IConnectionPoolData = interface ['{F604640D-6D4E-48B4-9A8C-483CA9635C71}']
    function ConnectionID: integer;
  end;

  TConnectionPoolData = class(TInterfacedObject, IConnectionPoolData)
  strict private
    cpID: integer;
  public
    constructor Create;
    destructor Destroy; override;
    function ConnectionID: integer;
  end; { TConnectionPoolData }

As this is not code from a real-world application, I didn’t bother connecting it to any specific database. The TConnectionPoolData constructor just notifies the main form that it has begun its job, generates a new ID and sleeps for 5 seconds (to emulate establishing a slow connection). The destructor is even simpler; it just sends a notification to the main form.

constructor TConnectionPoolData.Create;
begin
  PostToForm(WM_USER, MSG_CREATING_CONNECTION, integer(GetCurrentThreadID));
  cpID := GConnPoolID.Increment;
  Sleep(5000);
  PostToForm(WM_USER, MSG_CREATED_CONNECTION, cpID);
end;

destructor TConnectionPoolData.Destroy;
begin
  PostToForm(WM_USER, MSG_DESTROY_CONNECTION, cpID);
end;

Creating and running a task is really simple with the OTL:

procedure TfrmConnectionPoolDemo.btnScheduleClick(Sender: TObject);
begin
  Log('Creating task');
  CreateTask(TaskProc).MonitorWith(OTLMonitor).Schedule(FConnectionPool);
end;

We are monitoring the task with the TOmniEventMonitor component because a) we want to know when the task terminates and b) otherwise we would have to keep a reference to the IOmniTaskControl interface returned from CreateTask.

The task worker procedure TaskProc is again really simple. First it pulls the connection data from the task interface (task.ThreadData as IConnectionPoolData), retrieves the connection ID and sends the task and connection IDs to the main form (for logging purposes), and then it sleeps for three seconds, indicating some heavy database activity.

procedure TaskProc(const task: IOmniTask);
begin
  PostToForm(WM_USER + 1, task.UniqueID,
    (task.ThreadData as IConnectionPoolData).ConnectionID);
  Sleep(3000);
end;

Then … but wait! There’s no more! Believe it or not, that’s all. OK, there is some infrastructure code that is used only for logging, but that you can look up by yourself.

There is also code assigned to the second button (“Schedule and wait”), but it only demonstrates how you can schedule a task and wait for its execution. Useful if you’re running the task from a background thread (for example, an Indy thread, as specified by the author of the original question).
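
For reference, a minimal sketch of the “schedule and wait” idea; WaitFor (with a timeout in milliseconds) is my assumption here, so check the demo source for the canonical call:

var
  task: IOmniTaskControl;
begin
  task := CreateTask(TaskProc).Schedule(FConnectionPool);
  // block the calling (non-GUI) thread until the task completes;
  // assumed signature: WaitFor(maxWait_ms: cardinal): boolean
  if not task.WaitFor(30000) then
    Log('Task did not complete in 30 seconds');
end;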

Running the demo

Let’s run the demo and click the Schedule button.


What happened here?

  • Task was created.
  • Immediately, it was scheduled for execution and the thread pool called our thread data factory.
  • The thread data factory waited for five seconds and returned.
  • The thread pool immediately started executing the task.
  • Task waited for three seconds and exited.

OK, nothing special. Let’s click the Schedule button again.


Now a new task was created (with ID 4), it was scheduled for execution in the same thread as the previous task and it reused the connection that was created when the first task was scheduled. There is no 5-second wait, just the 3-second wait implemented in the task worker procedure.

If you now leave the program running for 10 seconds, a Destroying connection 1 message will appear. The reason for this is that the default thread idle timeout in the OTL thread pool is 10 seconds. In other words, if a thread does nothing for 10 seconds, it is stopped. You are, of course, free to set this value to any number, or even to 0, which disables the idle thread termination mechanism.

If you now click the Schedule button again, a new thread will be created in the thread pool and a new connection will be created in our factory function (spending 5 seconds doing nothing).


Let’s try something else. I was running the demo on my laptop with a dual-core CPU, which caused the OTL thread pool to limit the maximum number of concurrently executing threads to two. By default, the OTL thread pool uses as many threads as there are cores in the system, but again you can override the value. At the moment, you are limited to a maximum of 60 concurrent threads, which should not cause any problems in the next few years, I hope. (The 60-thread limit is not an arbitrary number but is caused by the Windows limitation of allowing only up to 64 handles in the WaitForMultipleObjects function.) Yes, you are allowed to set this limit to a value higher than the number of CPU cores in the system, but still, running 60 active concurrent threads is really not recommended.
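
For reference, both knobs mentioned above are exposed on the pool interface; the property names below are how I remember them from OtlThreadPool, so treat them as assumptions and verify against the unit:

// assumed property names – verify against OtlThreadPool.pas
FConnectionPool.MaxExecuting := 4;                 // allow four concurrently executing tasks
FConnectionPool.IdleWorkerThreadTimeout_sec := 30; // keep idle threads (and their connections) for 30 s
// setting the timeout to 0 disables idle-thread termination, as described above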

To recap – when running the demo, the OTL thread pool was limited to two concurrent threads. When I clicked the Schedule button two times in quick succession, the first task was scheduled and the first connection started being established (translation: entered the Sleep function). Then the second task was created (as the connection is being established from the worker thread, the GUI is not blocked) and the second connection started being established in the second thread. Five seconds later, the connections were created and the tasks started running (and waited three seconds, and exited).


Then I clicked the Schedule button two more times. Two tasks were scheduled and they immediately started executing in the two worker threads.


For the third demo, I restarted the app and clicked the Schedule button three times. Only two worker threads were created, two connections established and two tasks started executing. The third task entered the thread pool queue and waited for the first task to terminate, after which it was immediately scheduled.


So here you have it – a very simple way to build a connection pool. Have fun!

Friday, February 06, 2009

Adding connection pool mechanism to OmniThreadLibrary

I have a problem.

I have this thread pool, which needs to be enhanced a little. And I don’t know how to do it.

Well, actually I know. I have at least three approaches. I just can’t tell which one is the best :(

I’m talking about the OmniThreadLibrary – I know you guessed that already. The problem is that the thread pool in OTL doesn’t allow for per-thread resource initialization, and that’s something you need when you’re implementing a connection pool (more background info here). I started adding this functionality but soon found out that I didn’t have a good idea of how to implement it. Oh, I have a few ideas, they are just not very good :(

Current

At the moment, the OTL thread pool functionality is exposed through an interface. This interface is pretty high-level and doesn’t allow the programmer to mess with the underlying thread management. All thread information is hidden in the implementation section. There’s a notification event that’s triggered when a pool thread is created or destroyed, but it is only a notification and it is triggered asynchronously (and possibly with a delay).

In short:

type
  TOTPWorkerThread = class(TThread)
  end;

  TOmniThreadPool = class(TInterfacedObject, IOmniThreadPool)
  end;

Event handlers

The first idea was to add OnThreadInitialization/OnThreadCleanup events to the IOmniThreadPool. [Actually, something similar already exists – OnWorkerThreadCreated_Asy and OnWorkerThreadDestroyed_Asy – but those events are part of the previous implementation and will be removed very soon.] Those two events would receive a TThread parameter and you would do the proper initialization there.

There are some big problems, though. Let’s say you’re implementing a database connection pool. You’d have to open a database connection in OnThreadInitialization. Where would you store that information? In an external structure, indexed by the TThread? Ugly! Even worse – how would you access the database info from the task that will be executing in the thread pool? By accessing that same structure? Eugh!
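
Just to make the ugliness concrete, here is a hypothetical sketch of what that external structure could look like (none of this is OTL code; CreateDBConnection merely stands for “open a database connection”):

uses
  Windows, SyncObjs, Generics.Collections;

var
  GConnectionMap: TDictionary<cardinal, TDBConnectionInfo>; // keyed by thread ID, shared by everybody
  GConnectionMapLock: TCriticalSection;                     // and the map itself needs a lock, too

procedure HandleThreadInitialization(thread: TThread);
begin
  GConnectionMapLock.Acquire;
  try
    GConnectionMap.Add(thread.ThreadID, CreateDBConnection); // hypothetical helper
  finally GConnectionMapLock.Release; end;
end;

// ... and every task would have to repeat the same locked lookup,
// GConnectionMap[GetCurrentThreadID], just to find its own connection.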

Rejected.

Thread subclassing

A better idea is to implement a subclassed thread class in your own code and then tell the thread pool to use this class when creating new thread objects. You’d then manage the database connection in the overridden Initialize/Cleanup methods.

type
  TDBConnectionPoolThread = class(TOTPWorkerThread)
  strict private
    FDBConnection: TDBConnectionInfo;
  protected
    function Initialize: boolean; override;
    procedure Cleanup; override;
  end;

GlobalOmniThreadPool.ThreadClass := TDBConnectionPoolThread;

This looks much better, but there’s again a problem – I’d have to expose the TOTPWorkerThread class in the interface section, and that’s just plain ugly. The worker thread mechanism should stay hidden; only a few people would ever be interested in it.

Thread data subclassing

An even better idea is to add an empty

  TOTPWorkerThreadData = class
  end;

definition to the interface section of the thread pool unit. IOmniThreadPool would contain a property ThreadDataClass which would point to this definition. And each worker thread would create/destroy an instance of this class in its Execute method.

You’d add database management as

type
  TDBConnectionPoolThreadData = class(TOTPWorkerThreadData)
  strict private
    FDBConnection: TDBConnectionInfo;
  protected
    constructor Create;
    destructor Destroy; override;
  end;

GlobalOmniThreadPool.ThreadDataClass := TDBConnectionPoolThreadData;

[Maybe the constructor has to be virtual here? I never know until I try.]
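
For what it’s worth: if the worker thread creates the instance through a class-reference type – which is the obvious way to implement ThreadDataClass – the constructor does have to be virtual, otherwise the descendant’s constructor would never be called. A hypothetical sketch of how the empty base class above would look in that case (not actual OTL code):

type
  TOTPWorkerThreadData = class
  public
    constructor Create; virtual; // virtual, so that dataClass.Create calls the descendant's constructor
  end;

  TOTPWorkerThreadDataClass = class of TOTPWorkerThreadData;

constructor TOTPWorkerThreadData.Create;
begin
  inherited Create;
end;

// Inside the (hypothetical) worker thread's Execute:
//   owtThreadData := owtThreadDataClass.Create; // creates TDBConnectionPoolThreadData, not the base class
//   ...
//   FreeAndNil(owtThreadData);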

There’s still the question of accessing this information from the task (and the same goes for the previous approach – I just skipped the issue there). I’d have to extend the IOmniTask interface with a method to access per-thread data.

Thread data with interfaces

While writing this mind dump, a new idea crossed my mind – what if thread data were implemented as an interface, without the need for subclassing the thread or the thread data? In a way it is the first idea, just reimplemented to remove all its problems.

Task interface would be extended with thread data access definitions, approximately like this:

type
  IOtlThreadData = interface
  end;

  IOtlTask = interface
    property ThreadData: IOtlThreadData;
  end;

The thread pool would get a property containing a factory method.

type
  TCreateThreadDataProc = function: IOtlThreadData;

  IOmniThreadPool = interface
    property ThreadDataFactory: TCreateThreadDataProc;
  end;

This factory method would be called when a thread is created, to initialize the thread data. Each task would get that same interface assigned to its ThreadData property just before it starts executing in the selected thread. The task would then read the ThreadData property to retrieve this information.

In the database connection pool scenario, you’d have to write a connection interface, object and factory.

type
  IDBConnectionPoolThreadData = interface(IOtlThreadData)
    function GetConnectionInfo: TDBConnectionInfo;
    property ConnectionInfo: TDBConnectionInfo read GetConnectionInfo;
  end;

  TDBConnectionPoolThreadData = class(TInterfacedObject, IDBConnectionPoolThreadData)
  strict private
    FDBConnection: TDBConnectionInfo;
  protected
    function GetConnectionInfo: TDBConnectionInfo;
    constructor Create;
    destructor Destroy; override;
  end;

function CreateConnectionPoolThreadData: IDBConnectionPoolThreadData;
begin
  Result := TDBConnectionPoolThreadData.Create;
end;

GlobalThreadPool.ThreadDataFactory := CreateConnectionPoolThreadData;
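
Inside the task, retrieving the connection would then be a simple interface cast – a hypothetical sketch only, since the ThreadData property is nothing more than a proposal at this point:

procedure ProcessQuery(const task: IOtlTask);
var
  connData: IDBConnectionPoolThreadData;
begin
  // ThreadData holds the per-thread interface created by the factory above
  connData := task.ThreadData as IDBConnectionPoolThreadData;
  // ... use connData.ConnectionInfo to talk to the database ...
end;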

This approach requires slightly more work from the programmer, but I like it the most, as it somehow seems the cleanest of them all (plus it is implemented with interfaces, which is pretty much the approach used in all OTL code).

So, dear reader, what do you think? If you have a better idea, or see a big problem with any of those implementations that I didn’t think of, please do tell in the comments!

Tuesday, February 03, 2009

Hassle-free critical section

While writing multithreaded code I sometimes need a fine-grained critical section that will synchronize access to some small (very small) piece of code. In the OmniThreadLibrary, for example, there’s a class TOmniTaskExecutor which has some of its internals (for example a set of Option flags) exposed to both the task controller and the task itself (and those two by definition live in two different threads). Access to those internal fields is serialized with a critical section.

Usually, I need two or more such critical sections. And because I’m lazy and I don’t want to write creation/destruction code every time I need a fine-grained lock, I usually create only one critical section and use it for all such accesses. In other words, when thread 1 is accessing field 1 (protected with that one critical section), thread 2 will be blocked from accessing field 2 (because it is protected with the same critical section). I can live with that, because the frequency of such accesses is very low (or I would not be reusing the same critical section).

Still, I was not happy with this status quo but I didn’t know what to do (except creating more critical sections, of course). Then, while developing the new OTL thread pool, I got a great idea – records! Records need no explicit .Create. Let’s make this new critical section a record!

Let’s start with the use scenario. I want to be able to declare the critical section object …

  TOmniTaskExecutor = class
  strict private
    oteInternalLock: TOmniCS;
    //...
  end;

… and then use it without any initialization.

  oteInternalLock.Acquire;
  try
    if not assigned(oteCommList) then
      oteCommList := TInterfaceList.Create;
    oteCommList.Add(comm);
    SetEvent(oteCommRebuildHandles);
  finally oteInternalLock.Release; end;

There are only two problems to be solved: I had to make sure that the critical section is created when the record is first used, and destroyed when the owning object is destroyed. It turned out that this is quite a big “only”.

Destruction

Let’s start with the simpler problem – destruction. The solution to automatic record cleanup is well-documented (at least if you follow Delphi blogs where they talk about such things …). In general, the Delphi compiler doesn’t guarantee the initial state of record fields, but there are two exceptions to this rule – all strings are initialized to an empty string and all interfaces to nil (which in both cases means that the fields holding strings/interfaces are initialized to 0). In addition to that, the compiler will free the memory allocated for string fields and destroy interfaces (well, decrease their reference count) when the record goes out of scope. If the record is declared inside a method, this happens when the method exits; if it is declared as a class field, the cleanup occurs when the object is destroyed. In any case, you can be sure that the compiler will take care of strings and interfaces.
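
A tiny illustration of that guarantee (nothing OTL-specific here):

type
  TManagedFieldsDemo = record
    Caption: string;   // compiler-managed: starts as '' and is finalized automatically
    Lock: IInterface;  // compiler-managed: starts as nil and is released automatically
    Value: integer;    // NOT managed: in a local variable this field contains garbage
  end;

procedure Demo;
var
  d: TManagedFieldsDemo;
begin
  Assert(d.Caption = '');       // guaranteed
  Assert(not assigned(d.Lock)); // guaranteed
  // d.Value is undefined at this point
  d.Caption := 'example';
end; // Caption and Lock are cleaned up here, when d goes out of scope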

So we already know something – the TOmniCS record will contain an interface field, and an instance of an object implementing this interface will do the actual critical section allocation and access.

  IOmniCriticalSection = interface ['{AA92906B-B92E-4C54-922C-7B87C23DABA9}']
    procedure Acquire;
    procedure Release;
    function GetSyncObj: TSynchroObject;
  end; { IOmniCriticalSection }

  TOmniCS = record
  private
    ocsSync: IOmniCriticalSection;
    function GetSyncObj: TSynchroObject;
  public
    procedure Initialize;
    procedure Acquire; inline;
    procedure Release; inline;
    property SyncObj: TSynchroObject read GetSyncObj;
  end; { TOmniCS }

The implementation of the IOmniCriticalSection interface is trivial.

  TOmniCriticalSection = class(TInterfacedObject, IOmniCriticalSection)
  strict private
    ocsCritSect: TSynchroObject;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Acquire; inline;
    function GetSyncObj: TSynchroObject;
    procedure Release; inline;
  end; { TOmniCriticalSection }

constructor TOmniCriticalSection.Create;
begin
  ocsCritSect := TCriticalSection.Create;
end; { TOmniCriticalSection.Create }

destructor TOmniCriticalSection.Destroy;
begin
  FreeAndNil(ocsCritSect);
end; { TOmniCriticalSection.Destroy }

procedure TOmniCriticalSection.Acquire;
begin
  ocsCritSect.Acquire;
end; { TOmniCriticalSection.Acquire }

function TOmniCriticalSection.GetSyncObj: TSynchroObject;
begin
  Result := ocsCritSect;
end; { TOmniCriticalSection.GetSyncObj }

procedure TOmniCriticalSection.Release;
begin
  ocsCritSect.Release;
end; { TOmniCriticalSection.Release }

function CreateOmniCriticalSection: IOmniCriticalSection;
begin
  Result := TOmniCriticalSection.Create;
end; { CreateOmniCriticalSection }

Creation

The destruction part was trivial (once you know the trick, of course), but the creation is not. Delphi only guarantees that the ocsSync interface will be initialized to nil (or 0, if you prefer), nothing more than that.

The TOmniCS record offloads all the hard work to the Initialize method. It is called from Acquire and from GetSyncObj (a method that returns the underlying critical section), but not from Release – and that’s for a reason. If you call Release before first calling Acquire, it is clearly a programming error and the program should crash – and it will, because ocsSync will be nil.

procedure TOmniCS.Acquire;
begin
  Initialize;
  ocsSync.Acquire;
end; { TOmniCS.Acquire }

function TOmniCS.GetSyncObj: TSynchroObject;
begin
  Initialize;
  Result := ocsSync.GetSyncObj;
end; { TOmniCS.GetSyncObj }

procedure TOmniCS.Release;
begin
  ocsSync.Release;
end; { TOmniCS.Release }

Let’s finally tackle the hard part. Before the critical section can be used, the ocsSync interface must be initialized. In a single-threaded world we would just create a TOmniCriticalSection object and store it in the ocsSync field. In the multi-threaded world this is not possible.
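
To see why, consider the naive version – a sketch of what Initialize would look like if threads were not an issue:

// Naive initialization – correct only in a single-threaded world.
// Two threads can both see ocsSync = nil and both create a critical section.
procedure TOmniCS.Initialize;
begin
  if not assigned(ocsSync) then
    ocsSync := CreateOmniCriticalSection;
end;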

Let’s think about what can happen if two Acquire calls are made at the same time from two threads. Thread 1 checks whether ocsSync is initialized, finds that it’s not, and loses its CPU slice. Thread 2 checks whether ocsSync is initialized, finds that it’s not, initializes it, calls Acquire and loses its CPU slice. Thread 1 then creates another TOmniCriticalSection object, stores it in the ocsSync field (overwriting the previous value, whose reference count gets decremented, which destroys the implementing object) and calls Acquire. Because this Acquire uses a different critical section than the Acquire in thread 2, it succeeds, and both threads get access to the protected data. Bad!

The trick is to store the TOmniCriticalSection interface in the ocsSync field with an atomic operation that will succeed if and only if ocsSync is empty (nil, zero). And that’s a job for InterlockedCompareExchange (ICE for short).

ICE takes three parameters – the first is the address of the memory area we are trying to modify, the second is the new value, and the third is the value we expect to find in that memory area. The function returns the current value of the affected memory area. If this value is not equal to the third parameter, ICE does nothing.
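
Quite a mouthful, I know. In code, ICE is roughly equivalent to this function – except that the hardware executes it as one indivisible (atomic) operation:

// Non-atomic illustration of InterlockedCompareExchange semantics.
function SimulatedICE(var destination: integer; newValue, comparand: integer): integer;
begin
  Result := destination;            // always return the old value
  if destination = comparand then   // only swap when the expected value is found
    destination := newValue;
end;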

And here’s how it is used in practice:

procedure TOmniCS.Initialize;
var
  syncIntf: IOmniCriticalSection;
begin
  Assert(cardinal(@ocsSync) mod 4 = 0, 'TOmniCS.Initialize: ocsSync is not 4-aligned!');
  while not assigned(ocsSync) do begin
    syncIntf := CreateOmniCriticalSection;
    if InterlockedCompareExchange(PInteger(@ocsSync)^, integer(syncIntf), 0) = 0 then begin
      pointer(syncIntf) := nil;
      Exit;
    end;
    DSiYield;
  end;
end; { TOmniCS.Initialize }

Initialize checks whether ocsSync is allocated. If not, it creates a new instance of the IOmniCriticalSection interface and stores it in a local variable. Then it tries to store it in the ocsSync field with a call to ICE. The third parameter tells ICE that we expect ocsSync to contain all zeroes. If this is so, the interface is stored in ocsSync and ICE returns 0 (otherwise, it returns the current value of the ocsSync field). If ICE succeeded, we have to clear the local variable without decrementing the interface reference count, and we can exit. If ICE failed, we give the other thread a time slice (after all, the other thread just created the critical section, therefore we can assume it will Acquire it, therefore the current thread would not be able to Acquire it and can sleep a little) and retry.

And that’s how you get a hassle-free critical section. Ugly, I know, but it works.

Just a word of warning – don’t pass a TOmniCS record around. Eventually you’ll do an assignment somewhere (newCS := oldCS) and that would screw things up. Just pass the underlying critical section (TOmniCS.SyncObj) and all will be fine.