
Act works pretty well to debug actions locally. It isn't perfect, but I find it handles about 90% of the write-test-repeat loop and therefore saves my teammates from dozens of tiny test PRs.
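For anyone new to act (github.com/nektos/act), a sketch of that local loop looks something like this; the job name "test" and the `.secrets` file are examples, and flags should be checked against your installed version:

```shell
act -l                               # list the jobs defined in .github/workflows
act push                             # run workflows triggered by the push event
act -j test                          # run only the job named "test"
act -j test --secret-file .secrets   # supply secrets locally without committing them
```

Iterating on `act -j <job>` after each workflow edit is what replaces the push-and-wait cycle of tiny test PRs.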



Nice find there with Act! I've always hated the dev/test flow with GHA.

Yeah, most of the time that is a good way to test. There are some specific actions that aren't easily tested outside of the regular spot, though. Mostly deployment-related pieces, due to the way our infrastructure is set up.

Everyone in the comments is talking about using this for their own debugging, but I think the way to win with this as a business is in two places: manual QA and automated QA. If you have real human QA people in your org, they could run this while doing QA; if they hit a bug, they could then share a simple link with the dev team that captures their issue + the stack.

Same goes with automated QA. Record the UI tests using this, and if one fails, store the state + stack.

There are a LOT of hard problems in that workflow... Good luck!


We do a similar thing for tests that are hard to automate. Instead of having the test protocol in a document, it's in a script that prompts the user to do things. The observations are then recorded and stored.
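A minimal sketch of such a protocol script; the steps and log format here are made up for illustration, not the commenter's actual tooling:

```shell
#!/bin/sh
# Hypothetical manual test protocol: walks a tester through steps
# and records their observations in a timestamped log file.
LOG="manual-test-$(date -u +%Y%m%d-%H%M%S).log"

step() {
  printf 'STEP: %s\n' "$1"
  printf 'Observation (PASS/FAIL + notes): '
  read -r obs
  # Record the step and the tester's observation, tab-separated.
  printf '%s\t%s\n' "$1" "$obs" >> "$LOG"
}

step "Log in with a valid account"
step "Upload a 10 MB file and confirm the progress bar completes"

echo "Observations recorded in $LOG"
```

The log then serves the same role as a filled-in test protocol document, but it is machine-searchable and can be committed or archived alongside the release.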

I’ve found them incredibly useful when making widespread changes where it’s hard to predict the effects - one example is upgrading dependencies.

We’ve been using a combination of playwright tests and Meticulous. Playwright tests for the areas where we want solid coverage and are willing to put in the effort. Meticulous for the areas where we want decent coverage but aren’t willing to spend the time writing tests.


This is almost exactly my approach to testing. I'm going to have to save this for sharing with teammates.

The other interesting protocol in this space I only recently became familiar with is the Debug Adapter Protocol, or DAP [0]. Curiously, it doesn't seem like there's a standard for test running though? Unless that's supposed to be an LSP action of some sort.

[0] https://microsoft.github.io/debug-adapter-protocol/


Cypress is pretty good for this. Every command is queued up and sort of deferred. You write tests in a synchronous manner, and it has retryability built in, so it will wait for an assertion to be truthy before continuing (with a max timeout, of course).

It's been working pretty well for me so far.


We've been using this instead of TestFlight. The crash reporting aggregation is a huge time saver, and the developer is responsive.

At Mozilla we're actually currently working to run some of our tests continuously under rr (http://rr-project.org/), our record-and-replay debugger. When a flaky test fails, the replay can be carefully studied to understand why the test only fails, say, 1 out of 10,000 runs.
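The basic rr workflow for a flaky test looks roughly like this; `./flaky_test` is a placeholder binary, and the chaos flag should be checked against your rr version:

```shell
# Re-run the test under rr until it fails; each run records all
# nondeterminism (syscall results, signals, scheduling).
for i in $(seq 1 10000); do
  rr record ./flaky_test || break
done

# rr's chaos mode randomizes scheduling to provoke rare interleavings.
rr record --chaos ./flaky_test

# Replay the most recent recording under gdb; the failing run replays
# deterministically, so it can be studied as many times as needed.
rr replay
```

The key property is that the one-in-10,000 failure, once captured, replays identically every time, including with reverse-execution commands in gdb.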

Really nice for integration testing. Not to be confused with unit testing.

You could literally replay entire missions.


This is a great idea. I often struggle with testing outside the bounds of what I consider normal usage.

I really like the idea of tests as materialized debugging sessions.

A valid point. Haven't used it for bulk (automated?) testing, as I personally never find myself running more than one instance at a time while debugging.

I like SpecFlow (http://specflow.org/), a .NET flavor of Cucumber. It's not quite rapid development, but as with others here, I've found the test recorders to be too brittle to justify the upfront time savings anyway.

I've used it for front end testing and database/view testing.


Author here, love this comment. Wish I would've written more about this loop (a thing breaks in production, so you write a test to make sure it won't break again).

I really wish there was a way to make integration tests run quickly.


Also great for automated tests of interactive functionality in your own projects.

You are right. Small teams absolutely do not need to execute code remotely, especially if the cost is an always-on job server.

My team writes test output to our knowledge base:

    bugout trap --title "$REPO_NAME tests: $(date -u +%Y%m%d-%H%M)" --tags $REPO_NAME,test,zomglings,$(git rev-parse HEAD) -- ./test.sh
This runs test.sh and reports its stdout and stderr to our team knowledge base, with tags that we can use to find the information later on.

For example, to find all failed tests for a given repo, we would perform a search query that looked like this: "#<repo> #test !#exit:0".

The knowledge base (and the link to the knowledge base entry) serve as proof of tests.

We also use this to keep track of production database migrations.


Would be quite cool if livecoding sessions were more TDD-based, and you had a log of all the tests you'd created in the current session. I realise this might not be a good fit for all sorts of work, though.
