Wishing you and your loved ones a Happy and Safe Christmas and a wonderful New Year!!!
Have a good one!
Image by Richard Galbraith https://www.dustydog.com.au
Comments on projects I'm working on. All postings are my personal opinion only.
As the year comes to an end, I wanted to share a quick update on a side project we’ve been working on.
Regular readers will know that we have spent a considerable amount of time developing Preemptive AI for Domino, an end-to-end AI solution for Domino versions 12, 14, and 14.5. Along the way, we even added audio-to-text transcription. If you missed that series, you can find the details here.
One of the biggest practical challenges with LLMs is that to enable a model to understand your data, you must provide it securely, in a useful format, and in a way that scales.
There are several ways to do that, and one of the most common is Retrieval-Augmented Generation (RAG).
So to learn more, we built a prototype.
The goal was to build a system where we could query a knowledge store (Domino) and have an LLM respond using the most relevant source material from that store. We used three months of my email for this experiment, which made it easy to validate the results.
At a high level, the app worked like this:
1. Extract text from emails, clean it, and store it in corresponding JSON files.
2. Generate embeddings using the nomic-embed-text model via a local Ollama server.
3. Run queries against the vector store and return the top X matches.
4. Augment the user prompt with those retrieved results, then send the expanded prompt to the LLM for final processing.
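Steps 1 and 2 can be sketched in a few lines of Python. This is a minimal illustration, not the production code: the `clean_text` heuristic and the `store_message` helper are hypothetical, while the endpoint and payload follow Ollama's standard `/api/embeddings` REST API with the nomic-embed-text model.

```python
import json
import re
import urllib.request

# Assumed defaults: a local Ollama server on its standard port, with the
# nomic-embed-text model already pulled. Adjust to match your setup.
OLLAMA_URL = "http://localhost:11434/api/embeddings"
MODEL = "nomic-embed-text"

def clean_text(raw: str) -> str:
    """Illustrative cleanup: drop quoted-reply lines, collapse whitespace."""
    kept = [line for line in raw.splitlines() if not line.lstrip().startswith(">")]
    return re.sub(r"\s+", " ", " ".join(kept)).strip()

def embed(text: str) -> list[float]:
    """Request an embedding vector from the local Ollama server."""
    payload = json.dumps({"model": MODEL, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def store_message(msg_id: str, raw_body: str, path: str) -> None:
    """Write the cleaned text plus its embedding to a JSON file per message."""
    text = clean_text(raw_body)
    with open(path, "w") as f:
        json.dump({"id": msg_id, "text": text, "embedding": embed(text)}, f)
```

In the prototype, a job like this runs once per mailbox to build the vector store; queries later reuse the same `embed` call on the user's question.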
Performance and results:
Note: A Mac Mini M4 Pro handled all the computational tasks. No efforts were made to optimise processing.
Sample set: 11,208 email messages (1.48GB)
Text extraction: 192 messages/sec, 6,488,707 words
Embedding generation + storage: 11 messages/sec
Resulting vector database size: 95.7 MB
Queries: A typical vector query takes less than one second—it’s unbelievably fast. The hit rate is excellent for the kinds of messages you’d hope it would find.
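The query step (steps 3 and 4 above) reduces to a similarity ranking plus prompt assembly. Here is a minimal sketch using plain cosine similarity over an in-memory list of `(text, vector)` pairs; the function names and the prompt wording are illustrative, not the prototype's actual code.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float], store: list[tuple[str, list[float]]],
          k: int = 5) -> list[str]:
    """Return the k stored texts most similar to the query vector."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

def augment_prompt(question: str, passages: list[str]) -> str:
    """Prepend the retrieved passages to the user's question for the LLM."""
    context = "\n---\n".join(passages)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}")
```

A brute-force linear scan like this is plenty fast at the scale described above (about 11,000 vectors); dedicated vector indexes only become necessary at much larger corpus sizes.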
Conclusion: The results were fantastic. Embedded retrieval is extremely fast, and when it functions effectively, it feels a bit magical.
So what’s next?
We learned a lot building this prototype. However, we understand that HCL has RAG support planned, and since we have no idea what that is going to look like, for now, we’ll wait to see what is included with Domino 2026. Once that is clearer, we will decide whether it is worth investing more time into this concept.
I think that's it for 2025. All the best over the holiday break—and we'll catch up in 2026.
Preemptive AI for Domino ships with built‑in proofreading instructions and enhancements to the Notes mail template that make email proofreading a breeze.
Simply select your text and click the AI – Proofread button. What could be easier?
Example 1:
Result:
PS: You don't need Domino IQ to do this.
Last week we launched Audio Intelligence for Domino; the details are here.
In this post we take a closer look at the new “AI-Audio to Text Summary” Task.
Here is what happens:
1. A user sends an email with an audio attachment to the AI-Requests database.
2. The email is automatically converted into an AI Request.
3. The AI-Audio to Text Summary Task runs in the AI-Proxy database to process the request.
It performs the following tasks as defined in its configuration document:
We’re excited to introduce powerful new audio capabilities in Preemptive AI for Domino:
Audio-to-text Transcription, Audio Translation, and Audio Summarisation.
Key benefits:
Unlock faster insights from calls, meetings, and voice notes directly from your mailbox.
This solution works with Domino 12 and above, and runs on Windows and Linux. It requires access to an LLM server and is compatible with either the OpenAI API or Domino IQ, so you can choose between cloud and local services.
The audio translation service engine runs on a local server. This solution is compatible with Windows, Linux, and macOS.
You can read more about this solution at the links below, or you can register for a free trial on our website:
I previously blogged about how the Notes Mac Client does not work on macOS 26.
As soon as I hit the issue, I created a support ticket with HCL. They could reproduce the problem but were never able to resolve it. I assume it was a particular macOS bug they could not work around.
I think it's pretty poor that a showstopper bug like this could never be resolved.
The Notes Mac client really needs a revamp and needs to work natively on Apple silicon machines. Time is running out to make that happen as the Rosetta emulator in the OS is on the way out.
I’ve written before that the HCL Mac Notes client (14.5) does not work correctly on macOS 26.
Now it is apparent that the problem is actually with macOS 26 itself, and I am told that HCL is trying to work around it.
In the meantime, I can confirm that if you update macOS 26 to 26.1 beta 3, everything works again.
SUPER IMPORTANT - Make sure you perform a clean shutdown of the Mac before attempting the upgrade to macOS 26.1, and DO NOT start the Notes client (after the clean shutdown) before the update is complete. I have received reports that not following these steps can completely break the upgrade process and may require an internet-based reinstall. Proceed with caution.
Good luck.