Some time ago, I tried using AI to write a blog post, but things didn’t go quite as planned. For my second attempt, I aimed to do better. I focused on a single engine—OpenAI—and used it in a more granular way. Instead of simply saying, “Write me an article about...,” I started with basic introductory questions, then asked it to write the article, and finally requested improvements where I felt the draft was lacking. The end result was quite satisfying.
This article is my analysis of that process. If you’re only interested in reading the transcript, you can find it here.
I began by putting OpenAI into “research” mode and asked a simple question:
Hello. Please help me write a blog article. For starters - can you access https://github.com/gabr42/Chatterbox project?
The reason for this was that, in my previous attempt, LLMs couldn’t access the files I wanted them to write about. OpenAI, however, promptly responded that it could access the project and asked for more details about the kind of article I needed. So I followed up with a more specific request:
While you are entirely correct, I would like to get an article about using parts of the Chatterbox that access LLMs in your own applications. There is a demo in the demos folder that shows how that is done (EngineDemo).
In response, I received an “interesting” article that was ... pure fiction. The AI clearly hadn’t looked at the EngineDemo application at all. Nice try, little AI, but I need facts, not fantasy. So I tried again:
Most of that is completely incorrect. Did you look into https://raw.githubusercontent.com/gabr42/Chatterbox/refs/heads/main/demos/EngineDemo/engineDemoMain.pas, which is the main unit for this demo?
To which the little AI replied:
I wasn't able to load the raw engineDemoMain.pas file directly (a security restriction kicked in). Could you paste the key portions—such as the initialization, engine creation, and chat logic—from that unit here? With those snippets, I'll be able to dive into exactly how the EngineDemo implements LLM integration and help you craft an accurate blog article around it.
A good reminder: be cautious when working with LLMs. Even when they claim they can access certain resources, that may not actually be true!
At that point, I pasted in the full source of the main demo unit—just 154 lines long. After taking some time to process it, OpenAI responded with a relatively good analysis. It was still missing some important points (see page 11 of the transcript), but it was good enough to kickstart my own writing brain.
With a rough draft in hand, I could easily identify weak spots, think about what was missing, and decide how to structure the article. This is something I always do when writing, but starting from a blank page usually takes longer. So even if I had stopped there and written the article myself, the experiment would have been a clear win.
But I didn’t stop. I explained to the AI how I wanted the article to be structured and what it should include:
Better, thanks. That's how I think the information in the article should be structured:
- Introduction of the topic, of course
- Make a point that Chatterbox LLM units don't actually handle network traffic and that you had to do this yourself (and later in the article show how this is done in this demo)
- Another important talking point is that each LLM provider is implemented in a separate unit (for example, CB.AI.Client.OpenAI handles OpenAI) and all such units start with 'CB.AI.Client.'. Each unit registers itself in the application-global registry (GSerializers). Your project will only support the provider engines whose units are actually used somewhere in the code. In this project, the relevant units were added directly to the project and you can see them in the .dpr file (which is listed here in full for your convenience):
(entire EngineDemo.dpr, skipped here for brevity)
- Next the article should focus on running a prompt
- After that describe how the app uses ChatterBox units to open the page where API keys can be created and how it retrieves the list of supported models
- At the end describe how the network layer is implemented
These are all points I, as a developer, know well—but they’re not easy to extract from the demo. It makes complete sense that I had to point these implementation details out to the AI myself.
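To make the registration point concrete, here is a minimal sketch of the pattern. This is not the actual EngineDemo.dpr; the program and form names are placeholders, and only the CB.AI.Client.OpenAI unit name and the GSerializers registry are real Chatterbox identifiers:

    program EngineDemoSketch; // illustrative fragment, not the real EngineDemo.dpr

    uses
      Vcl.Forms,
      // Listing a CB.AI.Client.* unit is all it takes: when the program
      // starts, the unit registers its serializer in the application-global
      // GSerializers registry (the classic Delphi initialization-section
      // self-registration pattern).
      CB.AI.Client.OpenAI,
      engineDemoMain in 'engineDemoMain.pas' {frmEngineDemo};

    begin
      Application.Initialize;
      Application.CreateForm(TfrmEngineDemo, frmEngineDemo);
      Application.Run;
    end.

This is also why only “used” providers are supported: a unit’s initialization section runs exactly once at startup, and only if the unit is actually compiled into the project.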
OpenAI responded with updated sections and, at my request, regenerated the full article (see page 17 of the transcript).
While reviewing the updated article, I found a few more details that needed to be included:
Additional information to be included in the 'Implementing the HTTP Layer' section:
- GNetworkHeaderProvider is also a global registry; it is updated from the CB.AI.Client units, just like GSerializers
- It contains the headers that have to be included in an HTTP request
- CAuthorizationKeyPlaceholder constant is a placeholder for the actual API key
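In rough strokes, those pieces combine into an HTTP layer like the one sketched below, built here on the stock THTTPClient from System.Net.HttpClient. The GetHeaders call is a simplified placeholder, not the exact Chatterbox API; only GNetworkHeaderProvider and CAuthorizationKeyPlaceholder are real Chatterbox identifiers:

    // inside the implementation section of your own unit
    uses
      System.SysUtils, System.Classes,
      System.Net.URLClient, System.Net.HttpClient;

    function SendPrompt(const url, apiKey, jsonBody: string): string;
    var
      client: THTTPClient;
      headers: TNetHeaders;
      body: TStringStream;
      i: Integer;
    begin
      client := THTTPClient.Create;
      body := TStringStream.Create(jsonBody, TEncoding.UTF8);
      try
        // Fetch the headers the provider unit registered for this engine
        // (simplified; the real registry API differs in detail).
        headers := GNetworkHeaderProvider.GetHeaders('OpenAI');
        // Swap the placeholder for the actual API key before sending.
        for i := Low(headers) to High(headers) do
          headers[i].Value := StringReplace(headers[i].Value,
            CAuthorizationKeyPlaceholder, apiKey, [rfReplaceAll]);
        Result := client.Post(url, body, nil, headers).ContentAsString;
      finally
        body.Free;
        client.Free;
      end;
    end;

The point of this design is that Chatterbox hands you the headers and the serialized request, but the network call itself stays in your own code, so you can plug in any HTTP stack you like.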
After another update and full regeneration (page 19), I was mostly happy with the result. Even the AI seemed to think so—it asked:
Let me know if you'd like this exported in Markdown, HTML, or ready for WordPress.
I replied:
HTML, please, I have to import it into Blogger. And please add syntax highlighting to Delphi code.
And got back (page 23):
Sure! Below is the full article in HTML format with syntax highlighting for Delphi code using <pre><code class="language-delphi"> blocks for compatibility with most Blogger syntax highlighters like Prism.js or Highlight.js.
(entire article in HTML format)
I just copied and pasted the HTML into Blogger, and the article was ready! I then spent another hour integrating Highlight.js into my Blogger template—but that’s another story...
Overall, I consider this guided writing experiment a success. The resulting article was solid—I published it without any changes—and the process was smooth. The biggest issue is that the tone still feels a bit “AI-ish.” You can tell at a glance that it was written by an LLM. Maybe next time I’ll ask the AI to match my tone of voice more closely.