Act works pretty well for debugging actions locally. It isn't perfect, but I find it handles about 90% of the write-test-repeat loop, which saves my teammates from dozens of tiny test PRs.
Yeah, most of the time that's a good way to test. There are some specific actions that aren't easily tested outside of their usual environment, though, mostly deployment-related pieces due to the way our infrastructure is set up.
Everyone in the comments is talking about using this for their own debugging; however, I think the way to win with this as a business is in two places:
QA and automated QA.
If you have real human QA people in your org, they could run this while doing QA. If they hit a bug, they could then share a simple link with the dev team that captures their issue + the stack.
Same goes for automated QA. Record the UI tests using this, and if one fails, store the state + stack.
There are a LOT of hard problems in that workflow... Good luck!
We do a similar thing for tests that are hard to automate. Instead of having the test protocol in a document, it's in a script that prompts the user to do things. The observations are then recorded and stored.
I’ve found them incredibly useful when making widespread changes where it’s hard to predict the effects - one example is upgrading dependencies.
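A minimal sketch of what such a prompting script could look like, assuming TypeScript on Node; the steps and the output filename are invented for illustration:

```typescript
// Hypothetical prompted test protocol: each step is shown to the tester,
// their observation is captured, and everything is stored for later review.
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";
import { writeFileSync } from "node:fs";

const steps = [
  "Plug in the device and confirm the status LED turns green.",
  "Open the settings page and note the reported firmware version.",
];

async function runProtocol() {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const observations: { step: string; observation: string }[] = [];
  for (const step of steps) {
    // Prompt the tester and record whatever they observed.
    const observation = await rl.question(`${step}\n> `);
    observations.push({ step, observation });
  }
  rl.close();
  // Store the run with a timestamp so it can be diffed against later runs.
  writeFileSync(
    `observations-${Date.now()}.json`,
    JSON.stringify(observations, null, 2),
  );
}

runProtocol();
```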
We’ve been using a combination of Playwright tests and Meticulous: Playwright for the areas where we want solid coverage and are willing to put in the effort, Meticulous for the areas where we want decent coverage but aren’t willing to spend the time writing tests.
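For anyone who hasn't tried it, a bare-bones Playwright test looks roughly like this; the URL, selectors, and credentials are all placeholders:

```typescript
// Minimal Playwright example; every selector and value here is hypothetical.
import { test, expect } from "@playwright/test";

test("user can sign in", async ({ page }) => {
  await page.goto("https://example.com/login");
  await page.locator("#email").fill("qa@example.com");
  await page.locator("#password").fill("hunter2");
  await page.getByRole("button", { name: "Sign in" }).click();
  // The assertion auto-retries until it passes or the timeout elapses.
  await expect(page.locator("h1")).toHaveText("Dashboard");
});
```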
The other interesting protocol in this space I only recently became familiar with is the Debug Adapter Protocol, or DAP [0]. Curiously, it doesn't seem like there's a standard for test running though? Unless that's supposed to be an LSP action of some sort.
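For reference, DAP messages are JSON bodies framed with Content-Length headers, much like LSP. A rough sketch of a setBreakpoints request; the source path and line number are made up:

```typescript
// Approximate shape of a DAP setBreakpoints request; the path and line
// are invented. On the wire, the JSON body is preceded by a
// "Content-Length: <n>\r\n\r\n" header, as in LSP.
const setBreakpointsRequest = {
  seq: 1,
  type: "request",
  command: "setBreakpoints",
  arguments: {
    source: { path: "/home/me/project/src/main.ts" },
    breakpoints: [{ line: 42 }],
  },
};

console.log(JSON.stringify(setBreakpointsRequest));
```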
Cypress is pretty good for this. Every command is queued up and sort of deferred. You write tests in a synchronous manner and it has retryability built in, so it will wait for an assertion to be true before continuing (with a max timeout ofc).
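A hypothetical spec showing that pattern; the route and selectors are made up:

```typescript
// Each cy.* call is queued and deferred rather than executed immediately.
describe("todo list", () => {
  it("shows a new item after it is added", () => {
    cy.visit("/todos");
    cy.get("input.new-todo").type("buy milk{enter}");
    // Cypress re-runs this query + assertion until it passes or the
    // default timeout (4s) is exceeded, which absorbs async rendering.
    cy.get(".todo-list li").should("have.length", 1);
  });
});
```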
At Mozilla we're currently working to run some of our tests continuously under RR (http://rr-project.org/), our record-replay debugger. When a flaky test fails, the replay can be carefully studied to understand why the test only fails, say, 1 out of 10,000 runs.
A valid point. Haven't used it for bulk (automated?) testing, as I personally never find myself running more than one instance at a time while debugging.
I like SpecFlow (http://specflow.org/), a .net flavor of cucumber. It's not quite rapid development, but as with others here I've found the test recorders to be too brittle to justify the upfront time savings anyway.
I've used it for front end testing and database/view testing.
Author here, love this comment. Wish I would've written more about this loop (thing breaks in production, write a test so said thing won't break again).
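That loop usually ends with a small regression test pinned to the incident. A hypothetical example; the function, the bug, and the Vitest-style API are stand-ins:

```typescript
// Hypothetical regression test: parsePrice and the bug it guards against
// are invented to illustrate the "break once, never again" loop.
import { describe, it, expect } from "vitest";
import { parsePrice } from "./parsePrice";

describe("parsePrice", () => {
  // Regression: production incident where "1,000.50" parsed as 1.0
  // because the thousands separator was treated as a decimal point.
  it("handles thousands separators", () => {
    expect(parsePrice("1,000.50")).toBeCloseTo(1000.5);
  });
});
```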
I really wish there was a way to make integration tests run quickly.
Would be quite cool if livecoding sessions were more TDD-based, and you had a log of all the tests you'd created in the current session. I realise this might not be a good fit for all sorts of work though.