Sunday, July 13, 2025

Delphi and AI [8]: Write me an article about ComputCore

This was supposed to be an article comparing three major AI players in their ability to write technical articles. In preparation, however, I made some mistakes ... :(

When I prepared for the initial article, I asked them to fetch the data from https://github.com/gabr42/ComputeCore/blob/main/ComputeCore.pas and write about it. (See the end of this article for the initial query and results.) At that moment, however, that link was not yet public. All three AIs ignored that and happily wrote about code they knew nothing about. Only Claude complained about the broken link, but I missed that while checking the results.

So I decided to rerun the experiment on all platforms. I started by asking the engines the following question: 

Can you access https://github.com/gabr42/ComputeCore/blob/main/ComputeCore.pas and provide an overview of the code in one paragraph?

This worked with ChatGPT, but both Claude and Gemini complained that they could not access the link. I had to rerun the query with a link to the raw file for the latter two to work:

Can you access https://raw.githubusercontent.com/gabr42/ComputeCore/refs/heads/main/ComputeCore.pas and provide an overview of the code in one paragraph?

Analyzing the answers shows that they were not based purely on the linked code. Both Claude and Gemini included details that they picked up from my previous article on ComputeCore.

In the answers below, I marked hallucinated misinformation in red and information that was obviously retrieved from other sources in blue.

[ChatGPT]

The file ComputeCore.pas implements a lightweight, unit-based parallel computation framework for Delphi. It defines an abstraction over task-based parallelism using the IComputeTask interface, which encapsulates a single unit of computation with a Run method. The main orchestrator, TComputeCore, manages a pool of worker threads, balancing execution via TComputeQueue, a thread-safe task queue. Tasks are submitted through TComputeCore.Run, which enqueues them and ensures efficient thread utilization without over-subscription. The unit also includes support for cooperative cancellation and shutdown through TComputeCancellationToken. Designed to be simple and efficient, it offers a foundation for building parallel computation workflows without relying on external libraries like OmniThreadLibrary or the Parallel Programming Library (PPL).

My comment: Nice overview but it looks like it was generated from my previous article on ComputeCore and not from the code. Contains a lot of misinformation.

[Claude Sonnet 4]

ComputeCore.pas is a lightweight parallel processing framework for Delphi that implements a thread pool pattern for executing CPU-intensive tasks across multiple worker threads. The unit defines a simple two-method interface (IComputeCore) with Run and WaitFor methods, along with a static TTask class for convenient access to a global compute core instance. The core implementation (TComputeCore) manages an array of worker threads (TCCThread) that continuously loop waiting for tasks, a thread-safe task queue using Spring.Collections, and an inactive thread stack for efficient thread reuse. Tasks are wrapped in TCCTask objects that implement exception handling and completion signaling via TEvent objects. The framework prevents deadlocks in nested task scenarios by implementing a "helping" strategy where threads calling WaitFor will execute other pending tasks from the queue while waiting for their specific task to complete. Thread safety is achieved through Delphi's TMonitor for protecting shared data structures, and the global instance uses optimistic lazy initialization with atomic operations to handle concurrent access during startup. The design prioritizes simplicity by limiting tasks to parameterless procedures (TProc) with no return values, making it suitable for fire-and-forget computational workloads that can effectively utilize multiple CPU cores without the complexity of cancellation tokens or result passing.

My comment: Nice overview, but quite a lot of it was definitely extracted from my previous article. No hallucinations.

[Gemini 2.5 Pro]

Based on the information I found, ComputeCore is a lightweight parallel processing framework for Delphi. It is designed to simplify the execution of CPU-intensive tasks by distributing them across multiple threads. The main goals of the framework are to run tasks concurrently, allow tasks to create child tasks, and provide a simple interface for developers. It is intended to be a general-purpose framework that can be used in various projects. It will eventually be integrated into the Spring4D project as Spring.Threading.

My comment: Short and definitely extracted from my previous article and not from the code.

Original prompt

When I was writing the original article, I asked all AIs the question below. Where available, I used the "research" mode of the AI:

You are an experienced technical writer that knows Delphi programming language and the concepts of designing multithreaded applications.

You are writing a blog article about the ComputeCore unit (https://github.com/gabr42/ComputeCore/blob/main/ComputeCore.pas). It should be written in the tone of voice that is generally used at the https://thedelphigeek.com blog and should encompass all of the following talking points:

Goals
Be able to run (computationally intensive) tasks
Tasks can spawn other tasks
Simple interface

Implementation choices
No return value
No cancellation

Top-level view
One singleton GlobalComputeCore: IComputeCore
By default it is automatically created on first use with CPUCount-1 worker threads
To override, create it in advance: GlobalComputeCore := TComputeCore.Create(numThreads)
IComputeCore is thread-safe and can be (and is) called from any worker thread
Simple public interface
function Run(const taskProc: TProc): ITask; overload;
procedure WaitFor(const task: ITask);
For compatibility with the PPL, there are ITask and TTask
They mostly just call into GlobalComputeCore
TTask.Run creates GlobalComputeCore on the fly first (if not already created)
Optimistic initialization (http://www.omnithreadlibrary.com/book/chap08.html#synch-initialization-optimistic for more info)
You could create multiple compute cores but then you would not be able to use TTask.Run and TTask.WaitFor
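The optimistic initialization mentioned above could be sketched roughly like this. This is a hypothetical illustration only: the helper name EnsureGlobalComputeCore is made up, and the actual code in the unit may differ.

```
// Requires System.SyncObjs (TInterlocked). If two threads race here,
// both create a TComputeCore, but only one instance is stored in the
// global variable; the loser's instance is released when `core` goes
// out of scope.
function EnsureGlobalComputeCore: IComputeCore;
var
  core: IComputeCore;
begin
  if not assigned(GlobalComputeCore) then begin
    core := TComputeCore.Create(CPUCount - 1);
    if TInterlocked.CompareExchange(
         pointer(GlobalComputeCore), pointer(core), nil) = nil
    then
      pointer(core) := nil; // ownership transferred to GlobalComputeCore
  end;
  Result := GlobalComputeCore;
end;
```

The point of the optimistic approach is that no lock is needed on the fast path; the rare double creation is cheap and harmless.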

Data structures in IComputeCore
One array (preallocated to num worker threads) to own worker TThread objects (FThreads)
One array (preallocated to num worker threads) to store inactive threads, used as a stack (FInactiveThreads)
A queue of tasks to be executed (FTasks)

Access to shared data
When dealing with multithreaded programs it is important to protect data that can be accessed from multiple threads at the same time
Protected with a Monitor (MonitorEnter(Self)/MonitorExit(Self))
As little shared access as possible
Visual separation between safe methods (that acquire exclusive access internally) and unsafe methods (that expect exclusive access to be already established); the latter end in “_U”
Protected access is needed to access task queue and inactive threads array
FThreads is only accessed from the constructor and destructor and doesn’t require protected access
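To make the safe/unsafe convention concrete, here is a hedged sketch of how such a pair might look. The method names and the ICCTask type are invented for illustration; only the MonitorEnter(Self)/MonitorExit(Self) pattern and the "_U" suffix come from the description above.

```
// Safe wrapper: acquires the monitor, then delegates to the unsafe method.
procedure TComputeCore.EnqueueTask(const task: ICCTask);
begin
  MonitorEnter(Self);
  try
    EnqueueTask_U(task);
  finally
    MonitorExit(Self);
  end;
end;

// Unsafe method: expects the caller to already hold the monitor.
procedure TComputeCore.EnqueueTask_U(const task: ICCTask);
begin
  FTasks.Enqueue(task);
end;
```

The visual split makes it easy to audit locking: any call to a "_U" method outside a MonitorEnter/MonitorExit pair is immediately suspicious.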

Running a task
Task is put into FTasks queue
If an inactive thread can be found, it is removed from the FInactiveThreads array and signaled to begin work
Otherwise, nothing special happens. Task will be executed when one of the worker threads finishes the previous task
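A sketch of what Run could look like under the scheme above. The helper GetInactiveThread_U, the Signal method, and the TCCTask/ICCTask names are assumptions for illustration, not necessarily the actual API.

```
function TComputeCore.Run(const taskProc: TProc): ITask;
var
  task: ICCTask;
  thread: TCCThread;
begin
  task := TCCTask.Create(taskProc);
  MonitorEnter(Self);
  try
    FTasks.Enqueue(task);           // the task is always queued first
    thread := GetInactiveThread_U;  // nil if all worker threads are busy
  finally
    MonitorExit(Self);
  end;
  if assigned(thread) then
    thread.Signal; // wake an idle worker; otherwise a busy one will pick the task up
  Result := task;
end;
```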

Worker threads
Each worker thread runs a simple loop

```
while (not Terminated) and (FSignal.WaitFor <> wrTimeout) do begin
  while (not Terminated) and FOwner_ref.AllocateTask(Self, task) do
    task.Execute;
end;
```
If there is no work to be done it waits for the FSignal to be triggered
When there is work to be done, it processes tasks; after that, it goes back to the previous state

Race conditions
When dealing with multithreaded programs it is also important to think about critical parts of the code being executed in parallel. In this case, critical parts are related to accessing shared data structures.
IOW, we want to think about what happens when a task is being scheduled at the same time as another task finishes execution.
If we are not careful, it could happen that a task would be put into the internal task queue, but none of the threads would start working on it.
That is why AllocateTask does more than one job inside the locked area: it tries to get a task to process (GetTask_U) and then marks the worker thread active or inactive, depending on the result (MarkThreadActive_U).
If we split this operation into two separate critical sections, we would create a possible race condition.
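In code, AllocateTask might look something like this (an assumed shape that matches the description above; the exact signature may differ):

```
// Both steps run inside one critical section, so a task cannot be
// enqueued in the gap between "no task found" and "thread marked inactive".
function TComputeCore.AllocateTask(thread: TCCThread;
  var task: ICCTask): boolean;
begin
  MonitorEnter(Self);
  try
    GetTask_U(task);
    MarkThreadActive_U(thread, assigned(task));
  finally
    MonitorExit(Self);
  end;
  Result := assigned(task);
end;
```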

Preventing resource exhaustion
With this approach it is easy to exhaust all worker threads and deadlock the code: if a task executes a nested task which does the same, and that repeats recursively for more than CPUCount levels, all worker threads end up waiting on tasks to complete and no threads are available to process the remaining child tasks
This is something that easily happens in QuickSort, which is exactly the scenario this compute pool was developed for
Solution: While waiting for a task to complete, grab tasks from the task queue and execute them. This happens in `TComputeCore.WaitFor`
```
while task.WorkDone.WaitFor(0) = wrTimeout do begin
  __Acquire;
  GetTask_U(newTask);
  __Release;
  if not assigned(newTask) then
    TThread.Yield
  else begin
    newTask.Execute;
    if assigned(newTask.ExceptObj) then
      raise Exception(newTask.ExceptObj);
  end;
end;
```
```

Unit testing
Let’s talk about unit testing in a separate article

Conclusion
It is not that hard to write your own parallel processing framework if you follow these rules:
Keep it simple!
Minimize access to shared data structures!
Think about race conditions!
Do lots of testing!

Please write this article.

Answers

OpenAI ChatGPT


Anthropic Claude Sonnet 4


Gemini 2.5 Pro

Observations

As you can see from the prompt, I was trying to treat the AI as a helpful research assistant. I specified the main guidelines for the article but let the AI pull in all the missing pieces. This approach, however, failed, as the AIs were not able to access the code at all.

I have to repeat the experiment, but with a different codebase.

2 comments:

  1. The link to OpenAI ChatGPT gives me a 404. The other two provide the answers ok.

    It seems to me that you didn't put sufficient constraints on the original prompt. You must tell the LLM that it must access the original code and base its response on that code. All you said was to cover certain talking points, with no restrictions on where that discussion came from.

    ReplyDelete
  2. Anonymous 12:44

    The URL for the ChatGPT article is https://github.com/gabr42/DelphiAndAI/blob/main/08%20Write%20me%20an%20application%20about%20ComputeCore/computecore-chatgpt.pdf

    ReplyDelete