Guarantee

Can we get strong guarantees from AI tools that are known to hallucinate? We discuss some strategies, and ways that Elm might be a great target for AI assistance.

Audio: https://cdn.simplecast.com/audio/6a206baa-9c8e-4c25-9037-2b674204ba84/episodes/d1c5f97c-9700-48b0-ab35-a039edbfd0d5/audio/16dc506d-5aa1-42c1-8838-9ffaa3e0e1e9/default_tc.mp3 (Elm Radio – 080: Elm and AI)

[00:08:48] We do not accept half guarantees, right?

[00:09:57] So when you're talking about guarantees and then AI that's prone to hallucination, that becomes an interesting question, right? Now, I actually am pretty confident about our ability to do useful things with that. Maybe that's counterintuitive, because I'm talking about how much we care about guarantees and then talking about hallucination. I'm actually very reluctant to integrate things like GitHub Copilot suggestions into my code, because I think it's a very easy way to introduce subtle bugs.

[00:10:32] But the way I'm thinking about how AI fits into my workflow for writing Elm code, and my ideals for tools, involve trusting my tools so that I can do critical thinking and then delegate certain types of problems to a tool with complete trust, right? Those two things do fit together, but not out of the box.

[00:11:04] […], I was playing around with GitHub Copilot, which, for anyone who hasn't used it, is now a paid tool that gives you fancy AI-assisted, GPT-assisted autocompletions in your editor.

[00:12:06] In some cases, I'll trust it. I will have a custom type with four variants, and I will write out a function that says my custom type to string, and then it fills it in perfectly. It's impressive, and there are certain things like that where I have an intuition that it's going to be really good and trustworthy at solving. That said, it does hallucinate certain things.
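To make that concrete, here is a minimal sketch of the kind of completion being described, using a hypothetical Direction type with four variants (the names are illustrative, not from the episode). Given the type definition and the function's type annotation, a Copilot-style tool tends to fill in the exhaustive case expression reliably, because the shape of the answer is fully determined by the type.

```elm
module Example exposing (Direction(..), directionToString)

-- A hypothetical custom type with four variants.
type Direction
    = North
    | South
    | East
    | West


-- Writing just the type annotation and the first line is often
-- enough for an AI completion to fill in the case branches.
directionToString : Direction -> String
directionToString direction =
    case direction of
        North ->
            "North"

        South ->
            "South"

        East ->
            "East"

        West ->
            "West"
```

And even if the tool got a branch wrong, Elm's exhaustiveness checking would flag a missing or misspelled variant at compile time, which is part of why delegating this particular kind of problem is low-risk.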