AI ‘Imperfection’ Keeps Decision-Makers Tuned In

  • Jan 4
  • 2 min read

Updated: Mar 26


A little more than a year ago, when I first became acquainted with AI, in particular when I began training Bespoke AI Assistants/Concierges for luxury hospitality as a Co-Partner of BCS (Be Customer Smart), I came across the widespread notion of an AI 'hallucinating'.


In the context of LLMs and AI Assistants, 'hallucinating' means the AI presenting errors of judgement as fact, or mixing up facts to produce a picture or opinion that can be misleading. This greatly amused me.

And here is why.


Some users complained about the AI making mistakes, yet they did not realise that making mistakes is deeply human, and as such it brings AI as close to human communication as it can get.


AI 'hallucinates' occasionally, mostly because the source or training material is confusing, contradictory, or missing vital details. Humans, on the other hand, even when presented with the right material or information, will still misinterpret it through their subjective judgement and emotional filters. The AI has no such filters, yet the result may look much the same.


Now, let's look at Bespoke AI Confidantes for private Decision-Makers and Leaders, and the possibility of their 'hallucinating' while 'in service'.


Personally, I believe this is not a problem. Firstly, AI hallucinates only occasionally. Secondly, the possibility of an AI Confidante making a mistake or an erroneous judgement, however rare, will keep their bosses, the Decision-Makers, on their toes. In a good sense. For no Decision-Maker or Leader should ever lose their independent way of thinking or their sharpness of judgement, and they should always pay attention to what they are told, advised, or 'served'. The very imperfection of AI that 'commoners' complain about trains Decision-Makers to stay alert, switched on, and tuned in.


Just as in real life a Decision-Maker should not believe everything their subordinates report or tell them, in communication with their Digital Confidantes they should check the facts and statements too. There is one big distinction, though: AI does not intentionally lie, but people do.


Founder and CEO, SMA Crown Confidential


Digital Confidantes: Bespoke AI intelligence for private decision-makers




