
I unfortunately can't agree with that sentiment. If I have to rely on production traffic to test my feature, then, at minimum, my feedback cycle is:

1) Make the change. Think really hard about it to make sure it's correct.

2) Put out a code review.

3) Get approval. Merge the change.

4) CI pipeline builds and deploys to prod.

5) Absence of alerts must mean it works?

Even if you have no QA environment and nothing between you and prod, I've rarely seen deployment to prod take less than 30 minutes. That's an hour-long feedback cycle. Contrast with:

1) Write code.

2) Write unit tests.

3) Run tests locally.

The feedback cycle, especially when you get iterative, can get as low as single digit seconds. I run my tests and see a bug. I fix the bug, then re-run the tests. Similarly, for a more complex feature, I can break the feature down into multiple cycles of build, test, verify.
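To make that concrete, here's a minimal sketch of the kind of loop I mean (the function and test names are made up for illustration): a pure function next to its unit test, runnable in a second or two without any deploy.

```python
# Hypothetical example: a small pure function and its unit test.
def apply_discount(price, pct):
    """Return price reduced by pct percent, rounded to cents."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (100 - pct) / 100, 2)

def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

Find a bug, fix it, re-run: the whole cycle is one keystroke, not one pipeline.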

And that's not even accounting for the overhead of managing feature flags, which is not free. In the best case, you need to at least release a second PR to remove the feature flag when the feature is successful. At my previous employer, this step was often forgotten and resulted in real, consequential technical debt as it became harder to figure out how the product behaved based on which feature flags were turned off or on.
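A sketch of what that stale-flag debt looks like in code (flag names and the lookup helper are hypothetical, just to illustrate the shape):

```python
# Hypothetical feature-flag lookup; real systems would hit a config
# service, but the debt problem is the same.
FLAGS = {"new_checkout": True}

def is_enabled(flag):
    return FLAGS.get(flag, False)

def checkout_total(cart_total):
    # Once "new_checkout" is permanently on, the else-branch is dead
    # code, but it still has to be read, maintained, and reasoned about
    # until someone remembers to delete the flag in a follow-up PR.
    if is_enabled("new_checkout"):
        return round(cart_total * 0.98, 2)  # new flow
    return cart_total  # legacy flow
```

Multiply that by dozens of forgotten flags and the branching behavior of the product becomes genuinely hard to reconstruct.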

If you have experience that leads you to believe production testing can be more productive than local automated testing, I'd be curious to hear it; I at least have never seen it happen, and I find it difficult to even imagine it being true.


