> In IntelliJ or Visual Studio, syntax suggestion and tab completion are already great. Those technologies, which involve none of the legal risks of Copilot, are a massive step forward in productivity. Copilot does help extend these benefits to other languages that I occasionally dabble in, like Lua and embedded C, though it's clearly better in languages that are well represented in its dataset.
Copilot is so far beyond regular autocomplete that it's playing a completely different game.
I've been using it today while writing a recursive descent parser for a new toy language. I built out the AST in a separate module, and implemented a few productions and tests.
For all subsequent tests, I'm able to name the test and ask Copilot to write it. It writes out a snippet in my custom language, the code to parse that snippet, and the AST my parser should produce, then asserts that my output actually matches. It does this with about 80% accuracy. The result is that writing the tests to verify my parser takes easily a quarter of the time it took when I did this by hand.
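To make the pattern concrete, here's a hypothetical sketch of the shape these completions take. The `let ... in ...` syntax, the `Expr` type, and `parse` are invented stand-ins for my actual toy language, AST module, and parser, not what Copilot literally produced:

```rust
// Hypothetical sketch: `Expr`, `parse`, and the "let ... in ..." syntax
// are stand-ins for my real AST module, parser, and toy language.
#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Var(String),
    Let(String, Box<Expr>, Box<Expr>),
}

// Stand-in for the recursive descent parser under test.
fn parse(_src: &str) -> Result<Expr, String> {
    unimplemented!()
}

#[test]
fn parses_nested_let_bindings() {
    // Copilot writes the snippet, the expected AST, and the assertion
    // from little more than the test name.
    let src = "let x = 1 in let y = x in y";
    let expected = Expr::Let(
        "x".into(),
        Box::new(Expr::Num(1)),
        Box::new(Expr::Let(
            "y".into(),
            Box::new(Expr::Var("x".into())),
            Box::new(Expr::Var("y".into())),
        )),
    );
    assert_eq!(parse(src).unwrap(), expected);
}
```

I type the test name; Copilot fills in everything inside the function body.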
In general, this is where I've found Copilot really shines: tests are important but boring and repetitive, so they often don't get written. Copilot understands your code well enough to produce accurate tests from the method name alone. Rather than slogging through copy-paste for all the edge cases, you can give it one example and let it extrapolate from there.
It can even fill gaps in your test coverage: give it a bare @Test/#[test] annotation as input and it will frequently invent a test case that covers something no test above it does.
Property tests are nice for lots of things, but for an AST parser? You'd basically have to re-implement the parser in order to test the parser, wouldn't you?
I suppose you could test "if I convert the AST back to a string, do I get the same result?", but that's not actually your goal with an Abstract Syntax Tree. If nothing else, the whitespace should be allowed to change.
You are right that simplified reimplementations make good property tests, but in this case I'd go the other way around: generate an AST, render it in a test-case-dependent way (varying whitespace as you said, but also parentheses, etc.), inject known faults into a fraction of the cases, and check that parsing either recovers an AST equivalent to the original or reports an error when a fault was injected. Rendering a given AST is usually simpler than parsing it.
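A minimal sketch of that recipe, assuming a tiny invented expression grammar; only `parse` is a stand-in for the real parser under test, and the xorshift helper just keeps the example free of crates:

```rust
// Round-trip property test: generate AST -> render to source -> parse back.
#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Tiny deterministic PRNG (xorshift64) so the sketch needs no crates.
fn next(seed: &mut u64) -> u64 {
    *seed ^= *seed << 13;
    *seed ^= *seed >> 7;
    *seed ^= *seed << 17;
    *seed
}

// Generate a small AST; the depth bound keeps every case finite.
fn gen_expr(seed: &mut u64, depth: u32) -> Expr {
    let choice = if depth == 0 { 0 } else { next(seed) % 3 };
    match choice {
        0 => Expr::Num((next(seed) % 100) as i64),
        1 => Expr::Add(
            Box::new(gen_expr(seed, depth - 1)),
            Box::new(gen_expr(seed, depth - 1)),
        ),
        _ => Expr::Mul(
            Box::new(gen_expr(seed, depth - 1)),
            Box::new(gen_expr(seed, depth - 1)),
        ),
    }
}

// Render an AST back to source, varying whitespace and sprinkling in
// redundant parentheses so each case exercises a different surface form.
// Binary ops are always parenthesized, so the meaning never changes.
fn render(e: &Expr, seed: &mut u64) -> String {
    let pad = if next(seed) % 2 == 0 { " " } else { "" };
    let body = match e {
        Expr::Num(n) => n.to_string(),
        Expr::Add(l, r) => format!("({}{pad}+{pad}{})", render(l, seed), render(r, seed)),
        Expr::Mul(l, r) => format!("({}{pad}*{pad}{})", render(l, seed), render(r, seed)),
    };
    if next(seed) % 4 == 0 { format!("({body})") } else { body }
}

// Stand-in for the recursive descent parser under test.
fn parse(_src: &str) -> Result<Expr, String> {
    unimplemented!()
}

#[test]
fn parser_round_trips_rendered_asts() {
    for i in 1..=1000u64 {
        let mut seed = i;
        let ast = gen_expr(&mut seed, 4);
        let mut src = render(&ast, &mut seed);
        if next(&mut seed) % 10 == 0 {
            src.push(')'); // inject a known fault: unbalanced paren
            assert!(parse(&src).is_err()); // must be reported as an error
        } else {
            assert_eq!(parse(&src).unwrap(), ast); // must recover the AST
        }
    }
}
```

Because the renderer always parenthesizes binary operations, the whitespace and redundant-paren variation can never change what the source means, so the equivalence check stays a plain assert_eq.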