
That's very cool. It would be fun to build a feedback loop where the LLM outputs syntactically valid programs that also log the status of various invariants and pre/post-conditions as they run, so correctness can be validated, and maybe even to train an LLM this way.
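A minimal sketch of what those emitted checks could look like, in Python; the contract() decorator, the log fields, and mean_abs are all made up for illustration, not an existing library:

    # Minimal sketch of the runtime checks an LLM-emitted program could log.
    # contract(), the log fields, and mean_abs are hypothetical examples.
    import functools, json, sys

    def contract(pre=None, post=None):
        """Wrap a function and log whether its pre/post-conditions held at runtime."""
        def decorate(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                record = {"function": fn.__name__}
                record["pre_ok"] = bool(pre(*args, **kwargs)) if pre else True
                result = fn(*args, **kwargs)
                record["post_ok"] = bool(post(result)) if post else True
                print(json.dumps(record), file=sys.stderr)  # the feedback signal
                return result
            return wrapper
        return decorate

    @contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
    def mean_abs(xs):
        return sum(abs(x) for x in xs) / len(xs)

    mean_abs([3, -4, 5])  # stderr: {"function": "mean_abs", "pre_ok": true, "post_ok": true}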

It would be interesting to see how far it could get writing programs. As for the problem of stopping too early on a token, it could resume from where it stopped with some extra context.
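A rough sketch of that resume idea, assuming a hypothetical generate() model call and using ast.parse as a cheap syntactic-validity check:

    # Rough sketch of resuming generation with extra context. generate() is a
    # placeholder for whatever model API is used, not a real library call.
    import ast

    def generate(prompt: str, max_tokens: int) -> str:
        raise NotImplementedError  # stand-in for the actual LLM call

    def syntactically_valid(src: str) -> bool:
        try:
            ast.parse(src)
            return True
        except SyntaxError:
            return False

    def generate_program(spec: str, rounds: int = 5, tail_chars: int = 500) -> str:
        program = ""
        for _ in range(rounds):
            # Re-prompt with the spec plus the tail of the partial program,
            # so the model continues from where it stopped.
            prompt = (spec + "\n\nContinue exactly from where this stops:\n"
                      + program[-tail_chars:])
            program += generate(prompt, max_tokens=512)
            if syntactically_valid(program):
                return program
        return program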

Maybe construct the program from building blocks that each fit into context, where every block/function carries its own pre/post-conditions and invariants, which the LLM tests against each time it runs. I think I just found my next side project. I know there are many similar tools, but I haven't seen anything that tries to couple the compiler, running the program, and pre/post-condition and invariant checking against the emitted code.
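Something like the following could tie those pieces together per block, assuming each emitted block logs its contract results as JSON lines on stderr (as in the sketch above); the file handling and field names are illustrative, not an existing tool:

    # Sketch of coupling compile -> run -> contract checking for one building block,
    # assuming the block logs contract results as JSON lines on stderr (as above).
    import json, subprocess, tempfile

    def check_block(source: str, stdin_data: str = "") -> list[str]:
        problems = []
        try:
            compile(source, "<llm-block>", "exec")        # syntax-level feedback
        except SyntaxError as e:
            return [f"syntax error: {e}"]
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
            path = f.name
        proc = subprocess.run(["python", path], input=stdin_data,
                              capture_output=True, text=True, timeout=30)
        if proc.returncode != 0:
            problems.append(f"runtime error: {proc.stderr.strip()[-200:]}")
        for line in proc.stderr.splitlines():
            try:
                rec = json.loads(line)
            except ValueError:
                continue
            if not (rec.get("pre_ok") and rec.get("post_ok")):
                problems.append(f"{rec.get('function')}: contract violated")
        return problems  # feed these back to the model on the next round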

It would be interesting to test this hypothesis and see whether the LLM can actually build a program this way. I think of it like human short-term memory: for each sub-structure of the program, the LLM would work within a similarly limited window, while reserving a bit of context as long-term memory for the general goals, or building something like an AST of thoughts that it recurses through while reasoning.
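A very rough sketch of that "AST of thoughts" as a goal tree, where each node holds a goal plus its own short-term notes and only one node's subtree needs to fit in context at a time; the node fields and traversal are purely illustrative:

    # Very rough sketch of an "AST of thoughts": a goal tree the model recurses
    # through, keeping only one node's goal and notes in context at a time.
    from dataclasses import dataclass, field

    @dataclass
    class ThoughtNode:
        goal: str                              # long-term intent for this subtree
        notes: str = ""                        # short-term working memory for this node
        children: list["ThoughtNode"] = field(default_factory=list)

    def recurse(node: ThoughtNode, depth: int = 0) -> None:
        # A real loop would prompt the model with node.goal + node.notes here,
        # instead of the whole program, then expand or refine the children.
        print("  " * depth + node.goal)
        for child in node.children:
            recurse(child, depth + 1)

    recurse(ThoughtNode("write a CSV parser", children=[
        ThoughtNode("tokenize lines"),
        ThoughtNode("handle quoted fields"),
    ]))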


