Hallucination

Can we get strong guarantees from AI tools that are known to hallucinate? We discuss some strategies, and ways that Elm might be a great target for AI assistance.

https://cdn.simplecast.com/audio/6a206baa-9c8e-4c25-9037-2b674204ba84/episodes/d1c5f97c-9700-48b0-ab35-a039edbfd0d5/audio/16dc506d-5aa1-42c1-8838-9ffaa3e0e1e9/default_tc.mp3 elm radio – 080: Elm and AI page

[00:07:55] Hallucination being when it says something and it thinks [sic!] it's right. […] and hallucination is like sort of the technical term that OpenAI is using in some of these white papers […].

[00:08:14] But hallucination, it's very prone to hallucination because these are sort of predictive models that kind of synthesize information, but it's not an exact science. And sometimes it mixes things together that don't quite fit. And so I think, I mean, Jeroen, I think it's fair to say that we really like having tools that we can trust.
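
One way to read "tools that we can trust" in code (a sketch of my own, not something walked through in the episode): Elm's compiler refuses any name it cannot find, so a hallucinated API call never silently ships. elm/core has no String.capitalize, for instance; if an assistant invents one, elm make stops with a naming error, and the only way forward is to build the behaviour from functions that actually exist:

    module Capitalize exposing (capitalize)

    -- An assistant might hallucinate a built-in `String.capitalize`;
    -- elm/core has no such function, so `elm make` would stop with a
    -- naming error before the code ever ran. The version below sticks
    -- to functions that really exist (String.uncons, String.cons,
    -- Char.toUpper) and therefore compiles.


    capitalize : String -> String
    capitalize text =
        case String.uncons text of
            Just ( first, rest ) ->
                String.cons (Char.toUpper first) rest

            Nothing ->
                ""

Running elm make over generated code is a cheap, mechanical check against exactly this class of hallucination.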

~

hallucination | BrE həˌluːsɪˈneɪʃ(ə)n, AmE həˌlusəˈneɪʃ(ə)n | noun (act) Halluzinieren (neut.; "hallucinating"); (instance, imagined object) Halluzination (fem.), Sinnestäuschung (fem.; "deception of the senses")