Can we get strong guarantees from AI tools that are known to hallucinate? We discuss some strategies, and ways that Elm might be a great target for AI assistance.
https://cdn.simplecast.com/audio/6a206baa-9c8e-4c25-9037-2b674204ba84/episodes/d1c5f97c-9700-48b0-ab35-a039edbfd0d5/audio/16dc506d-5aa1-42c1-8838-9ffaa3e0e1e9/default_tc.mp3 elm radio – 080: Elm and AI page
[00:12:32] It will hallucinate certain variants, because the process through which it arrives at these suggestions does not involve understanding the types of the program the way the compiler does.
[00:12:45] So to me, where it gets very interesting is when you start using prompt engineering to do that. And so I've been thinking about a set of principles around this.
So prompt engineering is when you ask a question of Copilot or, mostly, ChatGPT or other tools, and you do it in a specific way: you frame your questions in a specific way, you ask for specific things, you give additional instructions so that it gives you better results.
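A minimal sketch of the kind of hallucination being described, using a hypothetical `Color` type (the type and function names here are illustrative, not from the episode). An AI assistant working purely from text patterns might suggest a variant that does not exist, and the Elm compiler rejects it because it checks the code against the actual type definition:

```elm
module Color exposing (Color(..), toHex)

-- A custom type with exactly three variants.
type Color
    = Red
    | Green
    | Blue


-- Converts a Color to its hex string. If an AI tool hallucinated a
-- call like `toHex Purple`, the compiler would reject it: `Purple`
-- is not a variant of `Color`.
toHex : Color -> String
toHex color =
    case color of
        Red ->
            "#ff0000"

        Green ->
            "#00ff00"

        Blue ->
            "#0000ff"
```

The compiler also forces the `case` expression to cover every variant, so a suggestion that omits one would fail to compile as well.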
[00:13:18] I don't know why people call it engineering yet, but it's very interesting. Although, I mean, there are prompt engineer job posts out there, and I think this is kind of going to become a thing.
[00:13:32] So it feels more like politics, like when you try to phrase things in a way that sounds good to you, that makes you sound good. It's more like a speech thing than an engineering thing so far.
[00:13:47] I hope those literature majors in college are finally cashing in on those writing skills. Oh, so now you're not making fun of my poetry degree, right?
[00:13:58] Now that I can make full blown web applications in two seconds.