Possibly stupid question: is automated testing actually a common practice?

Referring more to smaller places like my own - few hundred employees with ~20 person IT team (~10 developers).

I read enough about testing that it seems like an industry standard. But whenever I talk to coworkers and my EM, it’s generally, “That would be nice, but it’s not practical for our size and the business wouldn’t allow us to slow down for that.” We have ~5 manual testers, so things aren’t considered “untested”, but issues still frequently slip through. It’s insurance software, so at least bugs aren’t killing people, but our quality still freaks me out a bit.

I try to write automated tests for my own code, since it seems valuable, but I avoid it whenever it’s not straightforward. I’ve read books on testing, but they generally feel like either toy examples or far more effort than my company would be willing to spend. Over time I’ve started wondering if I’m just overly idealistic, and automated testing is more of a FAANG / bigger-company thing.

GissaMittJobb,

Automated tests are pretty common, yes. Strictly speaking it’s not a matter of company size, but more of technical maturity.

Automated tests do not slow your business down; they are in fact the only way to avoid getting slowed down as the amount of code you maintain increases.

The opportunity cost of not having tests catch issues before they reach production is very significant: an error caught by an automated test costs next to nothing, while an error that makes it into production can cause immense harm to the business, if only for the time needed to remediate it, which is time that could have been spent actually delivering new features.

Not to mention the high cost of employing an ever-growing number of manual testers just to keep the worst issues from slipping through.

All in all, not having automated tests in place is a significant mistake from a business perspective. You might want to have a frank discussion with your CTO about it.

Theharpyeagle, (edited )

We started focusing on automated testing when we had 3 manual QAs (not including me), and since then every new project has started with plans for automated testing.

It’s important to note that we don’t do automated tests instead of manual testing. Manual testing is still important for focused review of new features/bugs, but automated tests make sure code changes aren’t breaking anything elsewhere.

Also, this is all about end-to-end tests (with Selenium, in our case). If you’re talking about a lack of unit/integration tests within the codebase itself, that’s a huge red flag. Even if quality issues aren’t the end of the world, they will definitely make people reconsider using your product. Who wants to trust their financial information to unstable software? It also makes your QA team less efficient, since they’re chasing down issues that would be better caught by the dev who wrote them.
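
(For the curious, a minimal end-to-end check with Selenium’s Python bindings might look like the sketch below; the URL and element IDs are hypothetical placeholders, not our actual suite.)

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    # Minimal happy-path check: log in and verify we land on the dashboard.
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/login")
        driver.find_element(By.ID, "username").send_keys("demo")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()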

echindod,

Linux Dev Time did an episode on this, it’s really good! www.linuxdevtime.com/linux-dev-time-episode-97/

lorty,

Yes, it’s pretty standard, although how valuable it is depends on a lot of factors. You can write a lot of useless tests just to get the expected “coverage”. Also, management will never see value in that type of work, even after things break in production.

0x0,

I wish. At most companies I’ve worked at, I was maintaining monolithic legacy code that’s hard to test properly. Sometimes another team was developing the next best thing under management guidance (so it would become the next monolithic legacy code), but usually no.

I’ve only worked at one company that did TDD and things were smooth.

As usual, management only sees the short term, and it’s hard to impress on them that any time lost now by implementing proper testing will be gained back in the long run.

yournameplease,

another team was developing the next best thing under management guidance (so it would become the next monolithic legacy code)

Pretty much what my team is doing. No need to spend time improving the old system when this one will replace it so soon, right? (And no, we will not actually replace anything anytime soon.)

pkill, (edited )

Sometimes you’d use defensive programming (type checking, exception handling, null safeguards, fallback/optional values), which can be argued to be a sort of in-place testing; in that case tests may matter less for your project’s robustness than for the readability of its core business logic. Some languages lean more heavily towards defensive programming (e.g. Go, Scala or well-written TypeScript), and some rely more on tests but are also designed in a way that makes testing really easy, since they keep things loosely coupled (Elixir or Clojure).
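
(A rough illustration of that defensive style in Python; parse_port is a hypothetical example combining type hints, a null safeguard, exception handling and a fallback value:)

    from typing import Optional

    def parse_port(raw: Optional[str], fallback: int = 8080) -> int:
        # Null safeguard: a missing value degrades to the fallback.
        if raw is None:
            return fallback
        # Exception handling instead of letting bad input crash later.
        try:
            port = int(raw)
        except ValueError:
            return fallback
        # Range check with a fallback value rather than an assertion.
        return port if 0 < port < 65536 else fallback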

Also, if your language doesn’t have a quality REPL to reliably test things manually, there’s a relatively high chance your debugging process is costing you more time than good test coverage would.

loudWaterEnjoyer,

What the fuck

FizzyOrange,

I think even in languages that do a lot at compile time (Rust, Haskell, etc.) it’s still standard practice to write tests. Maybe not as many tests as e.g. Python or JavaScript or Ruby. But still some.

I work in silicon verification and even where things are fully formally verified we still have some tests. (Generally because the formal verification might have mistakes or omissions, and occasionally there are subtle differences between formal and simulation.)

qevlarr,

I’ve worked at 8 different companies as a contractor, so hopefully my sample size is big enough to be meaningful. I’d say it’s 50-50. The companies that don’t usually know that they should, but need a little help. Companies that don’t do it and think they don’t need it are becoming rarer and rarer (fortunately).

Stick with it. If you’re a junior, don’t go evangelizing automated testing because it will fall on deaf ears until you’re a little more experienced. Keep practicing and offer to set things up if they haven’t already.

Badeendje,

And please, for the love of all that is holy… DO NOT let some schmuck set up a test platform with all the defaults and cause thousands of notifications per minute without any plan for actually addressing the notifications (by either actually fixing the issue or tweaking thresholds for your specific situation).

Otherwise all the system does is train people to become apathetic to notifications and warnings. I was once on an IT team that had red notification boards with 104k notifications… and all they did was joke about it… seriously, just turn the system off then.

lurch,

Yes, it’s very common in my region. 50% of the companies I’ve worked at had CI servers that ran unit tests round the clock, and those companies are only slightly bigger than yours. I also know that multiple companies mine worked with have CI setups.

Some even auto-deploy to prod when the tests on master pass.

Most use Hudson or Jenkins for CI, with JUnit, PHPUnit, Selenium and/or Cypress for testing.

MonkderDritte,

Testing? What’s that?

apotheotic,

My team follows test-driven development, so I write a test before writing the feature that the test, well, tests.

This leads to cleaner code in general, because it tends to be the case that easy-to-test code is also easy to read.

On top of this fact, the test suite acts as a sort of “contract” for the code behaviour. If I tweak the code and a test no longer works, then my code is doing something fundamentally different. This “contract” ensures that changes to one codebase aren’t going to break downstream applications, and makes us very aware of when we are making breaking changes so we can inform downstream teams.

Writing tests and having them run at PR time (or before deployment to production, if you’re not using some sort of VCS and CI/CD) should absolutely be a part of your dev cycle. It’s better for everyone involved!
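
(A minimal sketch of that cycle in Python with pytest, where slugify is a hypothetical feature: the test is written first and fails, then just enough code is written to make it pass.)

    # Step 1: write the failing test; it doubles as the behaviour "contract".
    def test_slugify_replaces_spaces():
        assert slugify("Hello World") == "hello-world"

    # Step 2: write just enough production code to turn the test green.
    def slugify(title: str) -> str:
        return title.strip().lower().replace(" ", "-")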

hollyberries,

Doesn’t this rely purely on the fact that the test is right?

apotheotic,

Yeah, but it isn’t usually very difficult to write a test correctly, unit tests especially.

If you can’t write a test to validate the behaviour that you know your application needs to exhibit, then you probably can’t write the code to create that behaviour in the first place. Or, in a less binary sense, if you would write a test which isn’t “right”, you’re probably just as likely to have written code that isn’t “right”.

At least in the case with tests, you write the test and the code, and when the test fails (or doesn’t fail when it should), you’re tipped off to something being funky.

I’m sure you could end up writing a test that’s bad in just the right way to end up doing more harm than good, but I do think that’s the exception (heh).

hollyberries,

I’m sure you could end up writing a test that’s bad in just the right way to end up doing more harm than good, but I do think that’s the exception (heh).

That’s exactly why I asked. That is where I’ve gone wrong with TDD in the past, especially where any sort of math is involved, since I’m absolutely horrible at it (and I do game dev these days!). I can problem-solve and write the code, I just can’t manually proof the math without external help, and I’ve spent countless hours looking for where my issue was while being 100% certain that the formula or algorithm was correct >.<

Nowadays, anytime numbers are involved, I write the tests after doing manual tests multiple times and getting the expected response, and/or after having an LLM check the work and make suggestions. That in itself sometimes introduces more issues, since the LLM can also be wrong. Probably should have paid attention in school all those years ago lol

apotheotic,

Aw man, I can empathise. I don’t personally have any issues with mathsy stuff, but I can imagine it being a huge brick wall at times, especially in game dev. I wish I had advice for that, but it’s not a problem I’ve had to solve!

yournameplease,

Game dev seems like a place where testing is a bit less common due to the need for fast iteration and prototyping, which is not to say it isn’t valuable.

I’ve seen a good talk (I think GDC?) on how the Talos Principle devs developed a tool to replay inputs for acceptance testing. I can’t seem to find the talk, but here is a demo of the tool.

The Factorio devs also have some testing discussions in their blog somewhere.

hollyberries,

The Talos Principle video was interesting to watch, thanks for the link! It shed a little light on automated testing.

There’s also someone on YouTube who has been teaching an AI how to walk and solve puzzles on its own; the channel name escapes me and I’m nowhere near a working computer to look it up at the moment :(

yournameplease,

We’ve definitely written lots of tests that felt like a net negative, and I think that’s part of what burned some devs out on testing. When I joined, the few tests we had were “read a huge JSON file, run it through everything, and assert seemingly random numbers match.” Not actually random, but the logic was so complex that the only sane way to update the tests when the code changed was to rerun them and copy the new output. (I suppose this is pretty similar to approval testing, which I do find useful for areas of code that shouldn’t change frequently.)

Similar issue with integration tests mocking huge network requests. Either you assert that the request body matches an expected one, and you need to update it whenever the signature changes (fairly common), or you ignore the body, but that feels like a much less useful test.
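
(For what it’s worth, the body-matching variant can stay small with the standard library’s unittest.mock; submit_claim and the endpoint below are hypothetical stand-ins:)

    import requests
    from unittest.mock import patch

    # Hypothetical client code under test.
    def submit_claim(claim_id: int) -> None:
        requests.post("https://api.example.test/claims", json={"claim_id": claim_id})

    def test_submit_claim_sends_expected_body():
        # Patch requests.post so no real network call happens.
        with patch("requests.post") as fake_post:
            submit_claim(claim_id=42)
            fake_post.assert_called_once()
            # The assertion that must be updated whenever the signature changes.
            assert fake_post.call_args.kwargs["json"] == {"claim_id": 42}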

I agree unit tests are hard to mess up, which is why I mostly gravitate to them. And TDD is fun when I actually do it properly.

apotheotic,

I hear you. When you’re trying to write one big test that verifies the whole code flow or whatever, it can be HELL, especially if the code has been written in a way that makes it difficult to write a robust test.

God, big mocks are the WORST. It might not be applicable in your case, but I far prefer doing some setup and teardown so that I’m actually making the network request against some test endpoint that I set up in the setup stage. That way you know the issues aren’t cropping up due to some mocking nonsense going wrong.
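
(A sketch of that setup/teardown pattern as a pytest fixture, using only the standard library to stand up a real local endpoint; all names are illustrative:)

    import http.server
    import threading
    import urllib.request

    import pytest

    class _Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")

    @pytest.fixture
    def test_endpoint():
        # Setup: start a real HTTP server on a free local port.
        server = http.server.HTTPServer(("127.0.0.1", 0), _Handler)
        threading.Thread(target=server.serve_forever, daemon=True).start()
        yield f"http://127.0.0.1:{server.server_address[1]}"
        # Teardown: stop the server again.
        server.shutdown()

    def test_client_hits_real_endpoint(test_endpoint):
        # A real request, so no mocking nonsense can go wrong.
        with urllib.request.urlopen(test_endpoint) as resp:
            assert resp.read() == b"ok"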

Asserting that some arbitrary numbers match can be quite fragile, as I’m sure you’ve experienced. But if the code itself had been written in such a way that you had an easier assertion to make, well, winner!

It’s all easier said than done, of course, but your colleagues having given up on testing because they’re bad at it is kinda disheartening, I bet. How are you gonna get good at it if you don’t do it! :D

yournameplease,

especially if the code has been written in a way that makes it difficult to write a robust test.

I definitely deserve a lot of the blame for designing my primary project in a way that makes it hard to test. So, word to the wise (though it doesn’t take a genius to figure this out): don’t tell two fresh grads and a 1-YoE junior to “break the legacy app into microservices” with minimal oversight. If I did it all again, I still think the only sane decision would be to cancel the project as soon as possible. x.x

I actually was using a mock webserver with the expected request/response, which sounds like what you’re getting at. It still felt fiddly, though, and it doesn’t solve the huge-mock-data problem, which is more an architectural failing.

I’ve mostly gotten away from testing huge methods with seemingly arbitrary numbers in favor of testing small methods with slightly less arbitrary numbers, which feels like a pretty big improvement.

How are you gonna get good at it if you don’t do it! :D

True. :)

Ephera, (edited )

You should think of an automated test as a specification. If you’ve got the wrong requirements or simply make a mistake while formulating it, then yeah, it can’t protect you from that.
But you’d likely make a similar or worse mistake if you implemented the production code without a specification.

The advantage of automated tests compared to a specification document, is that you get continuous checks that your production code matches its specification. So, at least it can’t be wrong in that sense.

RonSijm,

Sure, but testing usually relies purely on whether your assumptions are right or not, whether you test automatically or manually.

Like if you’re manually testing a login form for example, and you assume that you’ve filled in the correct credentials, but you didn’t and the form still lets you continue, you’ve failed the testing because your assumption is wrong.

Like, even if the specs are wrong and you make a test for it - let’s say in a calculator: Assert(Calculate(2+2).Should().Equal(5)) - if this is your assumption based on the specs or something, you can start up the calculator, manually click through its UI, code something that returns 5, and deliver it.

Then once someone corrects you, you have to start the whole thing over, open up the calculator, click through the UI, do the input, now it’s 4, yay!

If you had just written a test, even one relying on a spec that was wrong, it’s still very easy to change the test and fix the assumption.

Also, let’s say next sprint you have to build a deduct function into the calculator, and it breaks the + operation. Now you have to re-test all operations manually to check you didn’t break anything else. If there were unit tests with like 100 different operations, you just run them all, see they’re all still good, and you’re done.
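
(A sketch of such a regression suite in Python with pytest, where calculate is a hypothetical stand-in for the real implementation; once the wrong expectation is corrected in the table, every operation is re-checked in seconds:)

    import pytest

    # Hypothetical implementation under test.
    def calculate(a: int, op: str, b: int) -> int:
        return {"+": a + b, "-": a - b}[op]

    @pytest.mark.parametrize("a, op, b, expected", [
        (2, "+", 2, 4),   # the corrected assumption: 2 + 2 is 4, not 5
        (5, "-", 3, 2),   # the new deduct feature must not break the rest
        (0, "+", 7, 7),
    ])
    def test_calculate(a, op, b, expected):
        assert calculate(a, op, b) == expected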

OhNoMoreLemmy,

Yeah, debugging tests is an important part of test driven development.

You also have to be careful. Some tests are for me to debug my code and aren’t part of the ‘contract’.

But on the other hand, it’s really nice. If I spend a couple of hours debugging actual code and come out of the process with internal tests, the next time it breaks, the new tests make it much easier to identify what broke. Previously, that would have been almost wasted effort, you fix it and just hope it never breaks again.

Kissaki, (edited )

My context: I’m at a small (~30-person) software company. We do various projects for various customers. We’re close to the machine sector, although my team is not. I’m the lead of a small 3-person developer team on a continuous project.

I write unit tests when I want to verify things - when I’m in somewhat low-level, algorithmic code, or in interfacing areas.

I would write more, and write against our interfaces, if those were exposed to someone or something, i.e. if they needed that stability and verification.

Our testing is mainly manual (mostly user-/UI-/use-interface-centric), and we have data restrictions and automated data-consistency validations for reporting. (Our project is very data-centric.)

it’s not practical for our size and the business wouldn’t allow us to slow down for that

Tests are an investment. A slowdown now, while implementing tests, buys maintainability and stability down the line - and the payoff can come even before delivery (issues noticed in reviews, before merge, or before shipping).

It may very well be that tests wouldn’t even slow you down, because they can lead you to a more thought-out implementation and interfaces, or to noticing issues before they hit review, test, or production.

If you have a project that will be maintained, then it’s not a question of slowing down, but of whether you are willing to pay more (effort, complexity, money, instability, consequent dissatisfaction) down the line for possibly earlier deliverables.

If tests would make sense and you don’t implement them, then it’s technical debt you are incurring. That’s not sound development or engineering practice, and it should at least require a conscious decision about that fact, with awareness of the cost of not adding tests.

How common automated testing is, I don’t know. I think many developers will take shortcuts when they can; many are not thorough in that way, and give in to short-sighted time pressure and fallacies.

yournameplease,

Perhaps it’s just part of being somewhere where tech is seen as a cost center? Technical leadership loves to talk big about how we need to invest in our software and make it more scalable for future growth. But when push comes to shove, they simply say yes to nearly every business request, tell us to fix things later, and we end up making things less scalable and harder to test.

It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head. I guess I’ve just been gaslit by my EM into thinking this lack of testing is a common occurrence.

(A programming lemmy may not be a terribly representative sample, but I don’t see anyone here anywhere close to as wild west as my place.)

Ephera,

It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head.

The way you suffer for it, is in a loss of agility.

When I’m in a project with excellent unit test coverage, I often have no qualms with typing up a hot fix, running it through our automated tests and then rolling it out, in less than an hour.
Obviously, if it’s a critical target system, you might want to get someone’s review anyways, but you don’t have to wait multiple days for your manual testers to get around to it.

Another way in which it reduces agility is in terms of moving people between projects.
If all the intended behavior is specified in automated tests, then the intern or someone, who just got added to the project, can go ham on your codebase without much worry that they’ll break something.
And if someone needs to be pulled out from your project, then they don’t leave a massive hole, where only they knew the intended behavior for certain parts of the code.

Your management wants this, they just don’t yet understand why.

yournameplease,

We used to have a scrum master, so we’re already agile! /s

They want those things, sure, but I think it would take multiple weeks of dedicated work for me to set up tests on our primary system that would cover much of anything. A big investment that might enable faster future development is a hard thing to sell. I am already seen as the “automated testing guy” on my (separate) project, and it doesn’t really look like I’m that much faster than anyone else.

What I’ve been meaning to do is start underloading my own sprint items by a day or two and use the spare Fridays to set up some test infrastructure, to show some practical use. But boy, is that a hard thing to actually hold myself to.

Ephera,

If we end up in a project with too little test coverage, our strategy is usually to then formulate unit tests before touching old code.

So, first you figure out what the hell the old code does, then you formulate a unit test until it’s green, then you make a commit. And then you tweak your unit test to include the new requirements and make the production code match it (i.e. make the unit test green again).
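
(In Python terms, a minimal sketch of that pin-then-change loop, with legacy_fee standing in for the untested legacy code:)

    # Step 1: pin down whatever the old code does today (characterisation test).
    def legacy_fee(amount: float) -> float:
        return amount * 0.03 + 0.5          # the "figure out what it does" part

    def test_legacy_fee_pins_current_behaviour():
        assert legacy_fee(100) == 3.5       # green: current behaviour is pinned

    # Step 2, after committing: tweak the expected value to the new requirement
    # and change legacy_fee until the test is green again.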

I am already seen as the “automated testing guy” on my (separate) project, and it doesn’t really look like I’m that much faster than anyone else.

This isn’t about you being faster as you write a feature. I mean, it often does help, even during the first implementation, because you can iterate much quicker than by starting up the whole application. But especially for small applications, it will slow you down when you first write a feature.

Who’s sped up by your automated tests are your team members and you-in-three-months.
You should definitely push for automated tests, but you need to make it clear that this needs to be a team effort for it to succeed. You’re doing it as a service to everyone else.

If it’s only you who’s writing automated tests, then that doesn’t diminish the value of your automated tests, but it will make it look like you’re slower at actually completing a feature, and it will make everyone else look faster at tweaking the features you implemented. You want your management to understand that and be on board with it, so that they don’t rate you badly.

yournameplease,

Who’s sped up by your automated tests are your team members and you-in-three-months.

Definitely true. I am very thankful when I fail a test and know I broke something and need to clean up after myself. Also very nice as insurance against our more “chaotic” developer(s).

I’ve advocated for tests as a team effort. Problem is just that we don’t really have any technical leadership, just a hands-off EM and hands-off CTO. Best I get from them is “Yes, you should test your code.” …Doesn’t really help when some developers just aren’t interested in testing. I am warming another developer on my team up to testing, so at least I may get another developer or two on the testing kick for a bit.

And as for management rating me… I don’t really worry too much. As I mentioned, hands off management. Heck, we didn’t even get performance reviews last year.

PumpkinEscobar,

Automated testing is often more cost-effective than manual testing. That’s not to say 100% automated testing is a reasonable goal, but I’ve never worked anywhere without some automated testing (unit, integration or end-to-end).

FizzyOrange,

Very common. Your coworkers are either idiots, or more likely they’re just being lazy, can’t be bothered to set it up and are coming up with excuses.

The one exception I will allow is GUI programs. It’s extremely difficult to write automated tests for them, and in my experience it’s such a pain that manual testing is often less annoying. For example, VSCode has no automated UI tests as far as I know.

That will probably change once AI-based GUI testing becomes common but it isn’t yet.

For anything else, you should 100% have automated tests running in CI and if you don’t you are doing it wrong.

yournameplease,

Leadership may be idiots, but the devs are mostly just burnt out; they’ve recognized that quality isn’t a very high priority and know not to take too much pride in the product. I think it’s my own problem that I have a hard time separating my pride from my work.

Thanks for the response. It’s good to know that my experience here isn’t super common.

Ephera,

Our standard practice is to introduce a thin layer in front of any I/O code, so that we can mock/simulate that part in tests.

So, if your database library has an insert() function, you’d introduce an interface/trait with an insert() function, whose default implementation just calls the database library and nothing else. And then in the test, you stick your assertions behind that trait.
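
(In Python, that thin layer might look like the following sketch; the names are illustrative, with a small abstract base class playing the role of the interface/trait:)

    from abc import ABC, abstractmethod

    class ClaimStore(ABC):                  # the interface/"trait"
        @abstractmethod
        def insert(self, claim: dict) -> None: ...

    class DbClaimStore(ClaimStore):         # default implementation
        def __init__(self, db):             # db: the real database library handle
            self._db = db
        def insert(self, claim: dict) -> None:
            self._db.insert(claim)          # nothing but the delegation

    class FakeClaimStore(ClaimStore):       # substituted in unit tests
        def __init__(self):
            self.inserted = []
        def insert(self, claim: dict) -> None:
            self.inserted.append(claim)     # test assertions go against this list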

So, we don’t actually test the interaction with outside systems most of the time, because well:

  • that database-library is tested,
  • the compiler ensures we’re calling that library correctly (assuming no use of a scripting language), and
  • it’s often easier to simulate the behavior of the outside system correctly, than to set it up for each test case.

We do usually aim to get integration tests with all outside systems going, too, to ensure that we’re not completely off the mark with the behavior that we’re simulating, but those are then often reduced to just the happy flow.

HubertManne,

"Automate everything" is the standard practice. You can't get a pull request through at my company without automated code review, including unit tests and Selenium-style practical tests, plus two human reviewers.
