Friday, December 19, 2025

Season's Greetings from Downunder!


Wishing you and your loved ones a Happy and Safe Christmas and a wonderful New Year!!! 

Have a good one !


Image by Richard Galbraith https://www.dustydog.com.au 


RAG and Domino: A prototype we built while we wait for Domino 2026

As the year comes to an end, I wanted to share a quick update on a side project we’ve been working on.

Regular readers will know that we have spent a considerable amount of time developing Preemptive AI for Domino, which is an end-to-end AI solution for Domino versions 12, 14, and 14.5. Along the way, we even added audio-to-text transcription. If you missed that series, you can find some of the details here.

One of the biggest practical challenges with LLMs is getting a model to understand your data: you must provide that data securely, in a useful format, and in a way that scales.

There are several ways to do that, and one of the most common is Retrieval-Augmented Generation (RAG).

So to learn more, we built a prototype.

The goal was to build a system where we could query a knowledge store (Domino) and have an LLM respond using the most relevant source material from that store.  We used three months of my email for this experiment, which made it easy to validate the results.

At a high level, the app worked like this:

1. Extract text from emails, clean it, and store it in corresponding JSON files.

  • The text is split into chunks suitable for embedding (about 1,500 characters each).
  • Words are not split across chunk boundaries.
  • Chunks include a 5-word overlap to preserve context (a rough sketch of this chunking follows this list).
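As an illustration of what we mean by chunking, here is a minimal Python sketch using the parameters above (roughly 1,500 characters per chunk, words kept whole, a 5-word overlap). It is not the code from the prototype, just the general shape of it:

```python
def chunk_text(text: str, max_chars: int = 1500, overlap_words: int = 5) -> list[str]:
    """Split text into ~max_chars chunks without breaking words,
    carrying a small word overlap from one chunk to the next."""
    words = text.split()
    chunks, current, current_len = [], [], 0
    for word in words:
        # +1 accounts for the joining space
        if current and current_len + len(word) + 1 > max_chars:
            chunks.append(" ".join(current))
            current = current[-overlap_words:]            # 5-word overlap
            current_len = sum(len(w) + 1 for w in current)
        current.append(word)
        current_len += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```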

2. Generate embeddings using the nomic-embed-text model via a local Ollama server.

  • Each chunk was embedded as a 768-dimension vector. Embeddings capture meaning rather than exact keywords, and the maths behind this is a big part of the magic that makes the whole thing work (see the sketch below).
  • Embeddings are stored in a local vector-enabled database (not Domino).
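For the curious, the embedding call itself is a single POST to the local Ollama server (it listens on port 11434 by default). A rough sketch, reusing the chunks from the previous step, with a plain Python list standing in for the vector database we actually used:

```python
import requests

OLLAMA_EMBED_URL = "http://localhost:11434/api/embeddings"

def embed_chunk(chunk: str, model: str = "nomic-embed-text") -> list[float]:
    """Ask the local Ollama server for the embedding of one chunk of text."""
    resp = requests.post(OLLAMA_EMBED_URL, json={"model": model, "prompt": chunk})
    resp.raise_for_status()
    return resp.json()["embedding"]   # 768 floats for nomic-embed-text

def embed_chunks(chunks: list[str]) -> list[tuple[str, list[float]]]:
    """Embed every chunk. The prototype wrote these vectors to a local
    vector-enabled database; a list of (chunk, vector) pairs shows the idea."""
    return [(chunk, embed_chunk(chunk)) for chunk in chunks]
```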
When it is time to answer a question, we follow these steps:

3. Run queries against the vector store and return the top X matches.

4. Augment the user prompt with those retrieved results, then send the expanded prompt to the LLM for final processing.
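To make steps 3 and 4 concrete, here is a hedged sketch of the retrieval and augmentation logic, reusing embed_chunk and the (chunk, vector) store from the sketches above. The similarity measure (cosine), the top-5 cut-off, and the llama3 model name are illustrative choices, not necessarily what the prototype uses:

```python
import numpy as np
import requests

def top_matches(question: str, store: list[tuple[str, list[float]]], k: int = 5) -> list[str]:
    """Return the k chunks whose embeddings are closest (by cosine similarity) to the question."""
    q = np.array(embed_chunk(question))
    scored = []
    for chunk, vec in store:
        v = np.array(vec)
        score = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((score, chunk))
    scored.sort(reverse=True)
    return [chunk for _, chunk in scored[:k]]

def answer(question: str, store: list[tuple[str, list[float]]], model: str = "llama3") -> str:
    """Augment the user prompt with the retrieved chunks, then send it to the LLM."""
    context = "\n\n".join(top_matches(question, store))
    prompt = ("Answer the question using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    resp = requests.post("http://localhost:11434/api/generate",
                         json={"model": model, "prompt": prompt, "stream": False})
    resp.raise_for_status()
    return resp.json()["response"]
```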


Performance and results:

Note: A Mac Mini M4 Pro handled all the computational tasks. No efforts were made to optimise processing. 

Sample set: 11,208 email messages (1.48GB)

Text extraction: 192 messages/sec, 6,488,707 words

Embedding generation + storage:  11 messages/sec

Resulting vector database size: 95.7 MB

Queries: A typical vector query takes less than one second—it’s unbelievably fast. The hit rate is excellent for the kinds of messages you’d hope it would find.

Conclusion: The results were fantastic. Embedding-based retrieval is extremely fast, and when it works well it feels a bit magical.

So what’s next?

We learned a lot building this prototype. However, we understand that HCL has RAG support planned, and since we have no idea what that is going to look like, for now, we’ll wait to see what is included with Domino 2026. Once that is clearer, we will decide whether it is worth investing more time into this concept.

I think that's it for 2025. All the best over the holiday break, and we'll catch up in 2026.

Wednesday, November 26, 2025

Adding email proofreading to Domino (IQ)

Preemptive AI for Domino ships with built‑in proofreading instructions and enhancements to the Notes mail template that make email proofreading a breeze.

Simply select your text and click the AI – Proofread button. What could be easier?

Example 1:


Result:

Want to know more?

PS: You don't need Domino IQ to do this. 
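For anyone wondering what sits behind a button like this, the request is conceptually just a chat completion with a proofreading instruction. Here is a minimal sketch against any OpenAI-compatible endpoint; the URL, key, and model name below are placeholders, and this is not the mail template's actual code:

```python
import requests

API_URL = "http://localhost:8000/v1/chat/completions"   # placeholder OpenAI-compatible endpoint
API_KEY = "YOUR_API_KEY"                                 # placeholder key

def proofread(text: str, model: str = "gpt-4o-mini") -> str:
    """Send the selected text with a proofreading instruction and return the corrected version."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Proofread the user's email text. Fix spelling, grammar and punctuation, "
                            "keep the meaning and tone, and return only the corrected text."},
                {"role": "user", "content": text},
            ],
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```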

Wednesday, November 19, 2025

Let’s take a closer look at our “AI-Audio to Text Summary” task for Domino

Last week we launched Audio Intelligence for Domino; the details are here.

In this post we take a closer look at the new “AI-Audio to Text Summary” Task.

Here is what happens:

1. User sends an email with an audio attachment to the AI-Requests database.

2. The email is automatically converted into an AI Request.


3. The AI-Audio to Text Summary Task runs in the AI-Proxy database to process the request.

It performs the following tasks as defined in its configuration document:

  • Audio is converted to text.
  • The raw transcript is 'cleaned' and formatted through a request to a language model.
  • A post-transcription summary email is created via another LLM request.
  • A response email is sent back to the requester.
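Under the hood that is essentially a chain of one transcription call and two LLM requests. Here is a rough, illustrative sketch of that kind of pipeline, assuming an open-source Whisper model for the audio step and an OpenAI-compatible chat endpoint for the LLM requests; the real task is driven by its configuration document, not by this code:

```python
import whisper                    # open-source speech-to-text; the product may use a different engine
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")   # placeholder local LLM server

def ask(instruction: str, text: str, model: str = "llama3") -> str:
    """One LLM request, used for both the clean-up and the summary steps."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return resp.choices[0].message.content

# 1. Audio is converted to text
raw_text = whisper.load_model("base").transcribe("meeting.mp3")["text"]

# 2. The raw transcript is cleaned and formatted via an LLM request
clean_text = ask("Clean up and format this transcript. Fix punctuation and obvious errors.", raw_text)

# 3. A summary email body is produced via another LLM request (the task then mails it back)
summary = ask("Summarise this transcript as a short email with key points and action items.", clean_text)
```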

Making the complex look simple - nice.

Would you like to know more? Click here 

Friday, November 14, 2025

Audio Intelligence Comes to Domino

We’re excited to introduce powerful new audio capabilities in Preemptive AI for Domino:

Audio-to-text Transcription, Audio Translation, and Audio Summarisation.


Here’s how it works:

  • Simply email an audio file (MP3 or WAV) to your Preemptive AI for Domino solution. 

  • The system will then automatically transcribe the audio into text. This includes automatic language detection across more than 99 supported languages, with translation to English by default (who knew there were that many!); a rough sketch of this step follows the list.

  • Then, based on your email, it will:

    • Provide a full transcription of the audio
    • Provide a concise summary of the transcript, or
    • Create a structured meeting summary if the audio is a meeting recording.

  • Finally, results are delivered straight back to your inbox.
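The language detection and translate-to-English behaviour described above maps closely onto what Whisper-style speech models provide (open-source Whisper itself supports about 99 languages). As an illustration only, not the product's code:

```python
import whisper

model = whisper.load_model("base")

# task="translate" detects the spoken language and produces an English transcript
result = model.transcribe("voice-note.mp3", task="translate")

print(result["language"])   # detected source language, e.g. "de"
print(result["text"])       # English text of the recording
```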

Key benefits:

  • Fantastic productivity gains
  • Fast, email-first workflow—no new tools or training required.
  • Enterprise-grade privacy: all processing can be configured to run entirely within your organization, ensuring your data never leaves your environment.
  • Cost-effective
  • Use AI to analyse the audio, spot opportunities, document outcomes, and generate action items.
  • Language translation at no cost

Unlock faster insights from calls, meetings, and voice notes directly from your mailbox.

This solution works with Domino 12 and above, runs on Windows and Linux, and requires access to an LLM server. The service is compatible with either the OpenAI API or Domino IQ, so you can choose between cloud and local services.

The audio translation service engine runs on a local server and is compatible with Windows, Linux, and macOS.

You can read more about this solution at the links below, or you can register for a free trial on our website:


Wednesday, November 5, 2025

The HCL Notes Mac Client once again works on macOS 26.1

I previously blogged about how the Notes Mac Client does not work on macOS 26.

As soon as I hit the issue I created a support ticket with HCL. They could reproduce the problem but were never able to resolve it. I assume it was a particular macOS bug they could not work around.

I think it's pretty poor that a showstopper bug like this could never be resolved. 

The Notes Mac client really needs a revamp and needs to work natively on Apple silicon machines. Time is running out to make that happen, as the Rosetta translation layer in the OS is on its way out.


 

Wednesday, October 15, 2025

How to get the HCL Mac Notes Client on macOS 26

I’ve written before that the HCL Mac Notes client (14.5) does not work correctly on macOS 26.

Now it is apparent that the problem actually lies with macOS 26 itself, and I am told that HCL is trying to work around it.

In the meantime, I can confirm that if you update macOS 26 to 26.1 beta 3, everything works again.

SUPER IMPORTANT - Make sure you perform a clean shutdown of the Mac before attempting the upgrade to macOS 26.1, and DO NOT start the Notes client (after the clean shutdown) before the update is complete. I have received reports that not following these steps can completely break the upgrade process and may require you to do an internet-based reinstall. Proceed with caution.

Good luck.