Preemptive AI provides the components to add AI functionality to any Domino application.
Here is a simple LotusScript example using the AI-Ask instruction. It will work on Domino 12, 14, or 14.5.
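A minimal sketch of what that can look like, assuming AI-Ask is issued by creating a request document that the Preemptive AI engine picks up and processes; the form name "AIRequest" and the item names "Instruction" and "Prompt" are my placeholders, not the product's confirmed API:

```
Sub Click(Source As Button)
	' Sketch only: the form and item names below are hypothetical placeholders.
	Dim session As New NotesSession
	Dim db As NotesDatabase
	Dim doc As NotesDocument

	Set db = session.CurrentDatabase
	Set doc = db.CreateDocument()

	doc.Form = "AIRequest"         ' hypothetical request form
	doc.Instruction = "AI-Ask"     ' the instruction being demonstrated
	doc.Prompt = "Summarise the key points of this customer complaint."
	Call doc.Save(True, False)

	MessageBox "AI-Ask request submitted.", , "Preemptive AI"
End Sub
```

However the request is actually submitted, the appeal is the same on Domino 12 through 14.5: plain LotusScript, no new language constructs required.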
The adage, “You cannot manage what you do not monitor,” is particularly true in the realm of AI usage.
One of the notable features of Preemptive AI is its ability to maintain an audit trail of AI requests.
All the data is there, protected by Readers fields and yet fully accessible to management.
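As a sketch of the general Domino pattern (not necessarily Preemptive AI's exact implementation), it is a Readers item on each audit document that provides that protection; the form name, item names, and the "[AIManagement]" role below are all hypothetical:

```
Dim session As New NotesSession
Dim db As NotesDatabase
Dim auditDoc As NotesDocument
Dim readers As NotesItem

Set db = session.CurrentDatabase
Set auditDoc = db.CreateDocument()
auditDoc.Form = "AIAudit"              ' hypothetical audit form
auditDoc.Prompt = "What was asked"     ' hypothetical logged field

' A Readers item hides the document from everyone not listed in it.
Set readers = auditDoc.ReplaceItemValue("DocReaders", "[AIManagement]")
readers.IsReaders = True
Call auditDoc.Save(True, False)
```

Anyone holding the management role sees every request; everyone else, regardless of their ACL level, sees nothing at all.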
Here are ten key benefits this audit trail provides:
Find out more about Preemptive AI here.
We’re happy to announce the first release of Preemptive AI for Domino.
This is a soft release so we can get the solution into your hands and gather valuable feedback before it goes on sale.
Please check out the product page, register for a download, and then let us know your thoughts.
Cheers
Preemptive AI is designed to complement Domino IQ straight out of the box.
It adds 11 very handy productivity improvements; you're going to love them.
What's more, integrating Preemptive AI into Domino IQ is beyond easy!
All that is required in Domino IQ is one System Prompt and one Command Prompt.
Those settings will provide 11 new features, with endless possibilities moving forward.
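To give a feel for it, here's a purely illustrative sketch of what such a prompt pair could look like; both prompts below are invented placeholders, not the text that ships with Preemptive AI:

```
System Prompt:   You are an AI assistant running inside a Domino application.
                 Answer concisely, using only the context you are given.

Command Prompt:  Carry out the named Preemptive AI instruction against the
                 supplied document text and return the result.
```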
More soon.
Well, the votes are in (actually, no one voted, so we’ve made the executive decision). There will be 11 instructions in our first release of Preemptive AI for Domino.
At the end of the day, there are endless possibilities for what we could add, and we think this is a great mix. That said, we'd love to hear suggestions on what to do next.
Preemptive AI is planned to “soft launch” on Monday 7 July, Australia time. Stay tuned.
What's your go-to model? So many models…
Ha ha – not those models.
No matter how good your LLM server is, if the model you’re running isn't up to the task, then the results are going to be bad.
At the moment, I’m still testing lots of different models, but I've found a nice balance with google/gemma-3-4b (3.03GB).
For more complex jobs, llama3.1 (8.55GB) or google/gemma-3-12b (8.07GB) are worth a look.
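For anyone who wants to poke at these models from LotusScript, here's a rough sketch using NotesHTTPRequest against an OpenAI-compatible local endpoint; the URL and port assume LM Studio's defaults, so adjust both (and the model name) to match your own server:

```
Dim session As New NotesSession
Dim http As NotesHTTPRequest
Dim body As String
Dim result As String

Set http = session.CreateHTTPRequest()
Call http.SetHeaderField("Content-Type", "application/json")

' Minimal chat-completion request; swap in whichever model you're testing.
body = |{"model": "google/gemma-3-4b", "messages": [{"role": "user", "content": "Say hello from Domino"}]}|
result = http.Post("http://localhost:1234/v1/chat/completions", body)

MessageBox result, , "Raw JSON reply"
```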
So, what’s working for you?