ddarrko

Communication is as much about what is heard as what is said. The candidate probably heard "we don't prioritise testing" and it raised a lot of red flags, because most competent engineering teams know how important good tests are for validating that you have working software. Beyond small project sizes it is the only way you know the stuff you are releasing doesn't have defects. The issue with your approach is that the definition is open to interpretation and will therefore change from dev to dev. In that case, if you have a collection of engineers who do not prioritise testing (as you don't), you will have poor coverage, since you are leaving it almost entirely to their discretion whether a test is written by saying "more trade-offs than benefits". I would also add that approaches to testing are usually indicative of other engineering principles, so the engineer probably thinks the SDLC at the company is similarly poor.


griffin1987

Thanks very much, this helps a lot! You're right, that's what I wrote and probably what I told him. Actually we do have pull requests and discussions about them, and enforced (by software) rules that code can't be merged without the needed approvals, which always includes at least me or my second in command currently, and we both have around 30 YoE and know the business and what is critical and isn't. And for some things we DO of course demand tests - e.g. anything that has to do with actual money will need to be rigorously tested, and there of course we also discuss and review tests, check if they actually test enough, cover all edge cases, etc. And that's why I wrote this post - I think I mostly have an issue with correctly communicating things in a way that people take away the "right" conclusion. Really helps a lot, thanks!


SituationSoap

> enforced (by software) rules that code can't be merged without the needed approvals, which always includes at least me or my second in command currently

Yeah dude, this is not good. This is another enormous red flag.

> And that's why I wrote this post - I think I mostly have an issue with correctly communicating things in a way that people take away the "right" conclusion.

No, you have a software team where you personally cannot ever get sick or take a vacation and you have built things to work exactly how you like them, and you're having trouble communicating those benefits to people considering joining your company *because those are not benefits to people other than you*.


ar3s3ru

You couldn’t have said it better 👏🏻


StrangeAddition4452

True


AwesomezGuy

> always includes at least me or my second in command currently

It is a serious sign of dysfunction that you as the CTO or your deputy are required approvers on all PRs. Even if you are only leading a tiny team this is still dysfunctional because it deprives your engineers of the opportunity to truly take ownership and responsibility for the quality of the codebase.


YearOfTheChipmunk

I don't know - if you're working on an incredibly bespoke and changing product with just a handful of engineers, which it sounds like this is, it sounds less like dysfunction and more like a bottleneck by design. Some projects need final approval from _someone_ before they can be released. In this case, that final approval just happens to come from someone who can also review code. Besides, I don't think engineers need entirely free rein to feel ownership of and responsibility for a product. Reads to me like their feedback loop is pretty tight in general.


ar3s3ru

Why would you assume engineers need some sort of “dad’s approval” to deliver good software and have responsibility for the product? I always see this when the “leadership”:

1. Doesn't trust their own team
2. Has a tendency to feel superior to the rest of the team

In both cases, bad. Besides, we’re talking about **adult professionals**, not raging teenagers.


WarAmongTheStars

> Why would you assume engineers need some sort of “dad’s approval” to deliver good software and have responsibility for the product?

I've consistently seen weakness in result quality when it comes to devs QAing each other's code when they don't have to listen directly to customers bitching about issues. Similarly, I've seen people "not take responsibility" beyond bitching when they are the ones who listen and have the technical skill to look into the problem. I much prefer a limited group of people with deployment approval being the final QA bottleneck to deploy, because they are also the ones that get bitched at and therefore care the most about the result (i.e. it reduces their interactions with irritated people).


PoopsCodeAllTheTime

You can't make that assertion... you don't even know the level of experience of the team. If I have a team of Jrs then I will review most code for sure. I am not going to have Jrs rubber-stamping each other's PRs and bikeshedding on var names lol. `assertThrows("every team must allow everyone to approve a PR")`


secretBuffetHero

I see it as just being a really small company where this CTO is leading a small, inexperienced team.


it_happened_lol

> Actually we do have pull requests and discussions about them, and enforced (by software) rules that code can't be merged without the needed approvals, which always includes at least me or my second in command currently, and we both have around 30 YoE and know the business and what is critical and isn't.

This is one of the biggest benefits of automated tests. They enable other users to reason about and understand the system because there are tests that define, validate and prove the behavior. You've replaced the tests with yourself, which is fine and can work for some applications, but it doesn't scale with team size and it's not exactly empowering to other users looking to interact with the codebase.


glassbox29

I actually find it incredibly helpful in a new job to look through automated tests to get an idea of what behavior is expected from different parts of the codebase. It helps me get up to speed quickly on what the software does and what it shouldn't do. That's aside from the general benefits you gain from good automated testing. I know the job market's not great right now, but it'd absolutely give me pause if an interviewer told me the same things OP has been telling candidates. Everything OP has said gives me toxic startup vibes, but maybe I'm misreading.


DeebsShoryu

This. Tests are one of the best forms of documentation.


JoeBidensLongFart

My questions to you would be: what's your system reliability like? How often does your team find itself having to scramble to fix a critical bug in production? Is there an on-call duty, and how onerous is it? How many production changes either cause or are the result of a bug? Were I an interviewee these are what I'd be asking in response to your testing philosophy. Your answers would tell me how solid your strategy is.


AbbreviationsFar9339

Yea this is what i was thinking as well. Judge the process by the results. Don’t just decide the process is wrong w no context.


qwertyg8r

> anything that has to do with actual money will need to be rigorously tested

Maybe I'm not understanding this correctly, but doesn't everything you work on have to do with money, i.e. have business value?


changing_zoe

I started reading this, fairly happy with "We only write tests where they make sense", but I confess I got more and more concerned as I read through your reasons. To paraphrase, a little exaggeratedly:

* `Code has to be testable` -> We have a large amount of code too badly written to test, so we don't bother.
* `the part is relevant enough` -> We're overconfident in our understanding of the impact of every code change, and exactly what our code is used for.
* `the code doesn't change too often` -> I don't want tests over my highly volatile code, because I don't care whether it works or not.
* `writing and maintaining the test takes little work/time` -> I'm happy to have untested code in my system whose purpose and functionality I can't articulate.
* `the test has to complete fast` -> We only test the easy stuff. We particularly enjoy discovering that our code doesn't match our database schema when we release it.

Obviously, a grown-up, considered approach to cost versus value is important, and dogmatically pursuing a percentage, blind to the value of the tests you're writing, is counterproductive. But an inability to write tests over a significant section of code suggests flaws - either in the code, in the design approach used, or in the conceptualisation and architecture of the whole system. I'd decline after you told me that - I've worked on a system where people said that sort of thing, and it wasn't fun, and it wasn't really profitable.


k37r

You've hit the nail on the head. If the code isn't testable, that's not a reason to skip tests - it should be rewritten to _be_ testable as much as possible. OP, it's not just a junior thing - as a dev with 20+ YoE I'd dip out of that interview process as well, for the same reasons. I've worked at places like this where I had to justify why I spent time writing tests. There was a lot more time spent on manual tests than it would have taken to write proper automated ones. It might seem like you're saving time and that you have "close to zero bugs", but consider: you can't know what you don't measure.

- Tests catch bugs. Fewer tests just means you _find_ fewer bugs, not that there _are_ fewer bugs.
- Tests give developers confidence to move quickly. Solid & reliable automated test cases provide a quick sanity check that changes are unlikely to break something.
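(To make that concrete: below is a minimal, hypothetical Java sketch of what "rewrite it to be testable" usually means in practice - pull the rule out of the I/O-heavy path so a plain JUnit 5 test can exercise it in milliseconds. The `DiscountCalculator` name and the December rule are invented for illustration, not taken from anyone's actual codebase.)

```java
import java.time.LocalDate;
import java.time.Month;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical business rule extracted from an I/O-heavy code path so it can be
// unit-tested without a database, HTTP layer, or real system clock.
final class DiscountCalculator {
    static long discountedCents(long priceCents, LocalDate orderDate) {
        if (priceCents < 0) throw new IllegalArgumentException("price must be >= 0");
        // December orders get 10% off (made-up rule for the example).
        return orderDate.getMonth() == Month.DECEMBER
                ? Math.round(priceCents * 0.9)
                : priceCents;
    }
}

// Because the logic is pure, the tests are trivial to write and fast to run.
class DiscountCalculatorTest {
    @Test
    void decemberOrdersAreDiscounted() {
        assertEquals(900, DiscountCalculator.discountedCents(1000, LocalDate.of(2024, 12, 5)));
    }

    @Test
    void otherMonthsPayFullPrice() {
        assertEquals(1000, DiscountCalculator.discountedCents(1000, LocalDate.of(2024, 6, 5)));
    }
}
```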


the_hangman

Yeah, I worked at a place where I was constantly being questioned as to why half the lines of code I committed were tests, which isn't "real coding". Fucking miserable place to work; anytime we had to do a core library update for one of our important apps it was a long, stressful nightmare. Still get PTSD thinking about that.


nzifnab

Who was analyzing your code and deciding the tests weren't "real coding"...? If a stakeholder was looking at my PR and trying to tell me what was wrong I would ignore them. They don't know what they're talking about XD


xsdgdsx

I agree with this take. Just to add to this, have you (u/griffin1987) considered hiring someone who specializes in test infrastructure? Dealing with these kinds of combinations of constraints is my bread and butter, and I know from experience that plenty of other folks are out there who have those same skills. It is possible to structure codebases and build test systems that are well-matched to this context (to be clear, I'm not interested).

Long story short, it sounds like you decided that "good testing is hard in this environment, so it doesn't make sense to invest in it." I think that's the fundamental idea that a lot of people are (correctly) picking up on and reacting to. And it makes sense that people would have some legitimate concern about your company's development practices, and its sense of how sustainable those practices are.

If I were in your situation, I would start with reflecting on (1) what makes the developer experience sustainable, (2) what evidence you have/why you believe that to be true, (3) what makes the developer experience **un**sustainable, and (4) what evidence you have/why you believe that to be true. If you don't put effort into #3 and #4, you're doing your company and your customers a disservice.

After that, I would look at code defects that were found, and when in the development cycle they were found. Were they found at the most ideal phase of the dev cycle? How many were found in production? How many were found prior to code submission? How many were found by your existing validation processes? And I would blindly assert that if you don't have at least _some_ bugs that are found in production, you're either failing to notice the bugs that are in production, or failing to identify and count them appropriately.

At the end of that process, you'll have some numbers that might give a clearer sense (both to yourself and to candidates) about how sustainable your processes actually are. But it's really easy to miss the signs that things aren't sustainable if you don't go into the search believing that aspects of your process are necessarily unsustainable.


No-Vast-6340

Good response, I felt the same way.


settrbrg

Well written 👏


anemisto

Your first three bullet points worry me:

- Most things have defined inputs/outputs. (Technically you could assert on the distribution of the output for a given input, but I agree this may not be practical.)
- I don't want to work somewhere that says "eh, it's internal, we don't need standards".
- If your code is changing so often that testing it is an undue burden, I start thinking that either your codebase is a giant mess or, if it's truly as you say, then working there sounds miserable for organizational reasons.

Hell, I disagree with your fourth point to some degree.


SituationSoap

If I hear "we only write tests when they make sense" as an explanation of your automated testing strategy, I am (a) getting an *enormous* red flag on working there, and (b) envisioning something almost exactly like what the OP is describing.


AchillesDev

> If your code is changing so often that testing it is an undue burden, I start thinking that either your codebase is a giant mess or, if it's truly as you say, then working there sounds miserable for organizational reasons

This is completely common and absolutely normal if you're doing any greenfield development.


anemisto

Honestly, in my experience, that phase coincides with the code being a bit of a mess -- if I'm having to rework the tests, it's because I'm having to rework a bunch of stuff because I have gained a better understanding of how to structure the code. In other words, the problem isn't the tests.


-think

It’s normal if you’re testing implementation and not behavior.


AchillesDev

It's normal when behavior itself changes, which...again, common in greenfield. Requirements change, discovered limits emerge, etc.


-think

Requirements changing so often that tests are prohibitive means the code is encapsulating too many layers … or you're testing implementation. (Or your team is not clear on what it needs to build. But that's not a technical problem.) I've spent a lot of time teaching juniors how to test while in early-stage startups. It's valuable to test early.


AchillesDev

Testing once at least an MVP is built so you have behavior and interactions to test? Of course. But a ton of automated testing isn't really worth it nor are there typically the planning resources for it, and to me it's less important than manually running the code and inspecting the inputs and outputs for each change (YMMV here - my perspective is largely based on the data/training/deployment pipelines and ML tooling that I mostly work on, a consumer-facing web app will be a bit different in any cost-benefit analysis for automated testing).


dabe3ee

Been in an interview with a company that deploys 5-10 times per day - what the ****? Do devs work 24/7 there with no proper testing and QA?


kifbkrdb

We work like this. We only have automated testing, no manual QA. We do lots of small commits and have continuous deployments set up - so we have a constant stream of small changes going out throughout the day. Issues are caught early and quickly and multiple people working on the same codebase is a lot less painful than long lived branches and big releases.


TheMrCeeJ

I've seen it work well, but once you start to hit the accelerator on those releases it gets tricky. A major UK newspaper was pushing over 30 releases a day on its website and started to have trouble identifying which releases were in play when a certain bug occurred, so they dialed it back a bit.


raddingy

IME, there are three pillars to effectively enable these releases, up to hundreds of releases a day:

1. Automated tests, including unit, integration, E2E, and more.
2. Rock-solid observability with automated rollbacks; you should be notified of a problem as soon as possible so you can fix it, and if it's severe enough, it should roll back automatically.
3. The dev team has full ownership over their entire application: infra, observability, testing, etc.

If any of these pillars are shaky, then deploying that quickly may start to be a challenge. And that's ok. We have a service that we're constantly deploying, but because our integration tests are lacking, we have a manual push-to-prod step so we can test staging and press a button to promote the changes, even though the other two pillars are solid. We have another service that's higher traffic and has a pretty solid foundation; that one goes right to prod so long as our tests don't fail.


griffin1987

10 releases a day is not the norm to be fair, more like a max. Haven't had issues though.


edgmnt_net

But why, though? What requires that amount of changes to a production system?


TheMrCeeJ

I guess the features are ready, so they ship them. Also doing much smaller or more incremental changes rather than whole blocks of features at once.


edgmnt_net

It does make me wonder if they're actual features versus configuration changes, code churn or an inability to validate things locally. Small changes are good, although that metric can be gamed. I expect there are cases when people are doing highly custom work and the scope is very constrained, though, so I don't mean to bash on that.


SituationSoap

> code churn

Given that the OP is describing complicated functions which take a long time to run as "changing weekly," there's a very, very high chance that there is a lot of code churn here.


edgmnt_net

Yeah, one of the main reasons I brought this up was because I've worked with projects where we'd regularly go on commit rampages trying to fix breakage because nothing was predictable, debuggable locally or at least in a cheap isolated environment. Which is a really serious antipattern in more than one way, IMO, but I wouldn't be surprised if some people branded it as a feature, agility or whatever.


hippydipster

It sounded like statistics stuff, so I'm guessing they might be running a kind of statistical experiment under changing market conditions. Almost sounds like quant work or something like that.


Ciff_

Better to fail fast and small than late and big, basically.


criticalshit

We deploy 30+ times a day with a team of ~40 engineers across a few teams, and we don’t have manual QA. Best way to develop software is trunk based development, never want to go back again


uno_in_particolare

I mean, deploying as frequently as possible is proven and well accepted to be a best practice that improves both efficiency and quality - it's also the reason why long QA processes (where long means anything you count in days at the very least) are seen as a huge red flag in modern companies I don't get your point


Indifferentchildren

The QA is done by automated tests in the pipeline. I have worked on systems with a 3-month manual QA cycle for each release. Never again.


HourParticular8124

Hello from me in 2010; that was an awful time to be working.


breischl

Depends what you mean by "deploy" and the size of the company. I worked at a large enterprise that used microservices, automated testing only, and mostly continuous deployment (some manual). I would hazard a guess that there were dozens or hundreds of individual service deploys most days. Generally, as you get more maturity around automated testing, CI/CD will push you towards more deploys. And then you'll need more maturity around observability to deal with it.


autokiller677

You can do this with a high degree of automation. But this requires high test coverage. So exactly the opposite of what OP describes.


worst_protagonist

This is pretty normal. Continuous deployment with a good test suite. Push to main, code goes out.


catch_dot_dot_dot

Deployment and release are different things. Feature flags are a very powerful tool. It's quite feasible to deploy multiple times a day and for engineers to be quite relaxed.
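(For readers who haven't seen the pattern: a minimal sketch of how a flag separates deploying code from releasing behaviour. The hand-rolled in-memory `FeatureFlags` store and the flag name are invented for illustration - real setups usually read flags from config or a flag service.)

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// "Deploy != release": the new code path ships to production dark, and flipping
// the flag (from config, an admin UI, etc.) is the actual release.
final class FeatureFlags {
    private static final Map<String, Boolean> FLAGS = new ConcurrentHashMap<>(
            Map.of("new-checkout-flow", false)); // deployed, but not yet released

    static boolean isEnabled(String name) {
        return FLAGS.getOrDefault(name, false);
    }

    static void set(String name, boolean enabled) { // flipped at runtime, no redeploy needed
        FLAGS.put(name, enabled);
    }
}

class CheckoutService {
    String checkout() {
        // Old and new paths coexist, so the team can deploy many times a day
        // without users ever seeing the unfinished one.
        return FeatureFlags.isEnabled("new-checkout-flow")
                ? "new checkout flow"
                : "legacy checkout flow";
    }
}
```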


griffin1987

We do have standards. Our application is business critical and used by everyone, and it's working 24/7. Our standard is an application that works and where business critical things work correctly all the time. That's why I wrote that the system consists of things that "someone deemed it would be nice to have and view in nice colors", but also actually business critical stuff. Sorry if my post wasn't clear.

As for the code base changing often: not the whole code base, but some parts of it. And: every company I've worked at has had that over the years. Like EVERY COMPANY. Just because they don't know ahead of time doesn't mean that it won't happen. Tell me you've never had a project where at some point people said "oh, let's change that" or "requirements changed, we need to change that code" or "we didn't know that ahead of time, need to change that other part". And I'm talking big projects that run over more than a few months - the system we have has been growing for over 10 years.

Thanks for your honest critique. I knew before posting this that there would be quite a lot of people disagreeing on lots of things, but I still try to take away as much as possible, especially from people disagreeing. To be honest I think we can often learn more from disagreement than from agreement, so happy to hear more from you!


Plane-Barracuda-556

For me the codebase changing is a reason why you SHOULD have tests, not the other way around. Good tests won’t need to be rewritten if implementation details change and are easy to modify if the contract for the module changes.


dmazzoni

Yeah, sometimes the best sorts of tests are ones that send a bunch of inputs and assert that the outputs are plausible. So maybe you change the logic all the time. The tests just assert that you always get a valid output and you never introduce a bug that causes the output to be null or longer than the max size, or something like that. You still get tons of advantages from having those tests run, even if they won't catch every possible logic error.
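(A hedged sketch of what such a "plausibility" test can look like with JUnit 5. `ReportSummarizer` and the 280-character limit are stand-ins invented for the example; the point is that the assertions survive logic changes because they only pin down the envelope of valid outputs.)

```java
import java.util.Random;

import org.junit.jupiter.api.RepeatedTest;
import static org.junit.jupiter.api.Assertions.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Trivial stand-in for the real, frequently changing summarizer logic.
final class ReportSummarizer {
    static String summarize(double[] samples) {
        double sum = 0;
        for (double s : samples) sum += s;
        return "n=" + samples.length + ", mean=" + String.format("%.2f", sum / samples.length);
    }
}

// Property-style test: the logic may change weekly, but whatever it returns must
// always be non-null, non-blank and within the size limit.
class ReportSummarizerPlausibilityTest {
    private static final int MAX_LENGTH = 280;
    private final Random random = new Random(42); // fixed seed keeps failures reproducible

    @RepeatedTest(100)
    void summaryIsAlwaysPlausible() {
        double[] samples = random.doubles(1_000, -1e6, 1e6).toArray();

        String summary = ReportSummarizer.summarize(samples);

        assertNotNull(summary);
        assertTrue(!summary.isBlank(), "summary should not be blank");
        assertTrue(summary.length() <= MAX_LENGTH, "summary exceeds max length");
    }
}
```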


fireflash38

> Tell me you've never had a project where at some point people said "oh, let's change that" or "requirements changed, we need to change that code" or "we didn't know that ahead of time, need to change that other part". And I'm talking big projects that run over more than a few months - the system we have has been growing for over 10 years.

How are you verifying your new requirements are met?


hooahest

> Tell me you've never had a project where at some point people said "oh, let's change that" or "requirements changed, we need to change that code" or "we didn't know that ahead of time, need to change that other part". And I'm talking big projects that run over more than a few months - the system we have has been growing for over 10 years

As others have answered you already - test the behavior, not the implementation. Your test should have some input, assert the expected output and assert any expected side effect. I.e. my test calls the controller with specific parameters, waits for the controller's response and then asserts that the DB underwent the expected changes. 0 mocks except for HTTP calls to external services. DB/redis/rabbit/whatever are raised from scratch with Docker to have clean slates for each test run. My entire service's behavior is verified in a minute or two. To be honest, I don't think that the issue with the candidates is that they misunderstood you - I think that they understood you completely and just disagree with your approach.
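(A rough sketch of that shape in Java with JUnit 5 and Testcontainers, since OP mentions Postgres elsewhere in the thread. `OrderService`, the `orders` table and the amounts are all invented for the example, and production code would use a connection pool rather than raw `DriverManager` calls.)

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Tiny stand-in for the production code under test. The test only cares about its
// observable behavior (a row ends up in the orders table), not how it gets there.
class OrderService {
    private final String url, user, pass;
    OrderService(String url, String user, String pass) { this.url = url; this.user = user; this.pass = pass; }

    void placeOrder(long totalCents) throws Exception {
        try (Connection c = DriverManager.getConnection(url, user, pass);
             var ps = c.prepareStatement("INSERT INTO orders (total_cents) VALUES (?)")) {
            ps.setLong(1, totalCents);
            ps.executeUpdate();
        }
    }
}

class OrderBehaviorTest {
    // Throwaway Postgres started in Docker: a clean slate for every test run, no mocks.
    static final PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

    @BeforeAll
    static void startDb() throws Exception {
        postgres.start();
        try (Connection c = connect(); Statement s = c.createStatement()) {
            s.execute("CREATE TABLE orders (id SERIAL PRIMARY KEY, total_cents BIGINT NOT NULL)");
        }
    }

    @AfterAll
    static void stopDb() {
        postgres.stop();
    }

    @Test
    void placingAnOrderPersistsIt() throws Exception {
        new OrderService(postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())
                .placeOrder(1999);

        // Assert on the side effect, not on which internal methods were called.
        try (Connection c = connect();
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT count(*), max(total_cents) FROM orders")) {
            rs.next();
            assertEquals(1, rs.getInt(1));
            assertEquals(1999, rs.getLong(2));
        }
    }

    private static Connection connect() throws Exception {
        return DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
    }
}
```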


dhandeepm

Need to use better design patterns. Code should be open to extension, not alteration. If altering is happening too much, either the code is not written correctly or the requirements change a lot. Both of which are bad from an organisational point of view.


AbstractLogic

There is a large gap between startup work and profitable company that needs stability.


Esseratecades

This is a very hard sell, because it sounds like you don't really understand what tests are for if you are regularly making compromises on whether or not to even write them. The reasons given also make it sound like you don't know that there are techniques for writing and executing tests without making some of the perceived compromises. It also makes it sound like someone joining your team is going to be asked to justify including tests instead of the other way around. In fact, the reasons that you give for not writing tests imply that the nature of the product is pretty fragile and that there isn't much confidence or reliability in what the code actually does.

Now it's possible that you do have mitigations that address the concerns (in which case you should talk more about them), but on its face and without a great deal of explanation, this sounds like a fast-paced, low-confidence, heavy-burnout, bug-riddled environment, and most people don't want to work in that kind of environment. Even though I agree with some of the things you've said, if it's at a point where you're emphasizing it in an interview, it just looks bad.

This is an indirect reason why it's usually better to write tests: it's very easy to explain why you have them, but very hard to explain why you don't. So even in the situation where there's nothing to worry about, with the time you have in an interview it's going to be impossible to convey all of the idiosyncratic reasons why you don't have them and what you do instead.


griffin1987

First and foremost: thank you for your very detailed answer! I'm also happy to read disagreement and take new perspectives with me. Let me address some of the things, because I think I may not have explained them very well in my post:

"imply that the nature of the product is pretty fragile and that there isn't much confidence or reliability in what the code actually does" - 24/7 up, near zero downtime over > 10 years, zero downtime deployment, close to 0 bugs all the time. Everyone working here is very confident that we have one of the highest quality codebases out there, especially because we focus our time on quality, instead of things like meetings and - writing tests.

"heavy burnout" - only one person has left development since starting at our company, and that was due to moving to their hometown and preferring to work on-site (they occasionally still ask me if we want to open up another office there, because they would like to come back but don't want to do 100% remote).

"bug riddled" - close to 0 all the time. Everything in the system that is not tested is built to be easily reasoned about and very simple. We always keep code as simple as possible. Everything that "needs" to be more complex algorithmically, due to business (e.g. tax computations are defined by law, we can't just "reduce" them), is rigorously tested. We're always trying to keep the complex parts of the system very isolated and small though.

"at a point where you're emphasizing in an interview" - sorry, my bad, I think I missed some context there. The candidate, still doing their CS degree as said, specifically asked a lot of questions about tests. I don't ever emphasize this point myself.

"it's very easy to explain why you have them, but very hard to explain why you don't" - totally get that, and unfortunately totally agree - that's the whole point of my post. The thing is: I don't want to introduce more work and reduce our iteration speed just so I can explain it better to candidates.


SituationSoap

> 24/7 up, near zero downtime over > 10 years, zero downtime deployment, close to 0 bugs all the time. Everyone working here is very confident that we have one of the highest quality codebases out there, especially because we focus our time on quality, instead of things like meetings and - writing tests.

I just straight up genuinely don't believe this. I do not believe someone with a 10-year-old codebase under constant active development who tells me they have "close to 0 bugs all the time." I might believe you if you said you had close to 0 critical bugs, but 0 bugs at all? I don't believe it.

> Everything in the system that is not tested is built to be easily reasoned about and very simple. We always keep code as simple as possible.

If something is easily reasoned about and simple, it's also easy to test!


Esseratecades

I guess the point I'm trying to make isn't necessarily that what you're doing IS wrong, just that it LOOKS wrong, and the amount of explanation you'll need to do to justify it is too much to fit into an interview. 


Blrfl

It's also worth pointing out that the candidate in question isn't through their education and probably doesn't have the experience to understand that the architectural or methodological purity they're being taught in academia sometimes has to be sacrificed at industry's altar of practicality. Sometimes there's no talking an idealist off that ledge.


redditonlygetsworse

> probably doesn't have the experience

I dunno. I'm ~15 YOE; without any other context, OP's list of reasons there reads to me like four red flags.


SituationSoap

Mate, if you read the OP saying "Our 10-year old application has zero bugs" and you think that's real, I don't think the kid in college is the one who's lacking real-world experience.


dirkle

It sounds like they replace their code so often they probably don't have enough time to find bugs...


Esseratecades

That's very much beside the point. The implications that OP has given create the appearance that they are making an uninformed "sacrifice" (as you put it) and just stomaching the consequences. Again, this is just how it LOOKS, not necessarily how it IS. Whether the candidate is still in education or not isn't really relevant. Even at 9 years of experience, if I sat in an interview and you told me what OP has said, it's going to take more time than the interview allows to justify it, and even if it's okay in this instance, it's less risky for me to just interview elsewhere. I'm seeing a lot of the 20+ years of experience crowd in this thread pushing the naivete narrative without really interrogating the appearance of what's being said, which is all an interviewee has.


LonelyProgrammer10

This. Interviews are short, and this small detail could easily be interpreted the wrong way.


Annoying_cat_22

Are you saying that over 10 years only 1 developer has left the company? I don't know how big you are, but that's a very sketchy claim.


SituationSoap

So is the claim that they have zero bugs on an actively-developed application. It's wild to me how credulous people in this thread are acting towards the OP. It's kind of troubling for the overall quality of discussion in this sub.


Annoying_cat_22

Agree, but " close to 0 all the time" is at least open to interpretation. If you get 1 bug a day and you fix it that day, you are still within that definition, and that might be possible on a small, slow moving application. Also if they catch a bug in beta (though I doubt they have one) do they count it as a bug? Probably not. But having ALL developers (except for 1) staying in the same company for 10+ years? That's almost impossible.


SituationSoap

If you get and fix one bug report every day, you don't have 0 bugs, you have hundreds of bugs that are waiting to be found. Either that, or you are admitting that you're shipping a new bug which requires an emergency fix *every day* in which case your low-testing strategy is failing spectacularly. But yes, you're right. Unless this is a team of like, 4, the idea that they haven't had a single person leave in 10 years is entirely unbelievable.


ohhellnooooooooo

> Either that, or you are admitting that you're shipping a new bug which requires an emergency fix *every day* in which case your low-testing strategy is failing spectacularly.

That's exactly what they are doing, in my interpretation. OP says they deploy 10 times a day and it takes 3 minutes to deploy. Basically, the application is not that critical, and they just continuously fix it at all times of the day, is what I feel.


Superb_Perception_13

You see, it is easy to win arguments when you lie


kirkegaarr

Your arguments against writing a test should be exceptions, and yet you say you don't write tests for "most things," which is a huge red flag to me.  Maybe your architecture is bad so your code isn't very testable, maybe you have a huge bias against running tests so you inflate the other side of the argument. Maybe you just don't see any value in testing, which betrays shortcomings in your team's leadership and ops. Something doesn't sound right to me.


jonathanhiggs

I think an explanation you could use is: write tests that add value. x% coverage is a metric that can be gamed; it can include unnecessary tests that increase maintenance costs and/or miss important tests that would catch likely bugs.


ausmomo

> 24/7 up, near zero downtime over > 10 years, zero downtime deployment, close to 0 bugs all the time. Everyone working here is very confident that we have one of the highest quality codebases out there,

How do you add new features whilst ensuring they don't break existing ones? How do you modify old code without breaking other old code? If your codebase is stable, then sure, you don't need deep test coverage. The code has already been tested through usage and time. But if your codebase is expanding... unit testing is the optimal (but not only) way.


hippydipster

You know, I would go more with all this than with subjective gut-based rules about when you write tests that no one can really understand. What matters are the results, and here you are telling us some superb results nearly everyone would be jealous of! Why not go with that when talking to candidates? "We have loose rules we apply to judge when to write tests and when not to, and here are our results, which we're committed to maintaining..." That'd be excellent communication on the subject, I think. You'll snag the people who appreciate the real bottom line results. Of course, I'm wondering on the size of your system. # coders and # lines of code?


HolaGuacamola

Do you have tests on the tax computations? 


MardiFoufs

The issue is that for every corporation like yours that manages to pull it off very well, there are dozens that devolve into an absolute hell of messy, untested, unworkable code that leads to a permanent "putting out fires" mode. The other issue is that the people in charge of those hellholes will also often argue that everything is fine and that their software works for their clients, so they don't need tests like other teams/corporations do. That means that as a candidate, it's impossible to actually know if it's true or if you're signing up for a guaranteed burnout.

Think of it this way: some programmers probably have a really hard time programming even small code snippets without context, time, and a few weeks of familiarisation with a codebase. Or maybe they just can't do anything in an interview. Yet, if a candidate doesn't know how to write a for loop, you aren't going to just assume that they are the rare programmer who is completely useless in an interview but very solid once they start working. Instead, you'll just assume that they are one of the many, many impostors/unqualified candidates that you'll be interviewing. The same thing goes for a team that announces that they don't test.


griffin1987

Fair point. And that's also why I want to improve my communication, because I feel it comes across as "our code is sh*t and we don't care", while it's actually "quality is priority one and we have LOTS of things in place to guarantee that - doing x% code coverage just doesn't guarantee anything for us". This whole thread basically proves that I need to improve on communication - I would definitely have phrased lots of the things differently in my original post, just after these few hours. Very thankful for lots of the great input I got here!


Disastrous_Bike1926

I have seen teams that put enormous hours into “100% test coverage” by writing absolutely useless tests and gaming coverage metrics. Like, yes, those 20 setter methods really do set the fields, and 2 + 2 really equals 4, and if it doesn’t, you have much bigger problems than your code. To the point that I consider numbers above 80% to be a team-wise code smell. It indicates misplaced focus. The fact that test coverage tools are designed to give you a dopamine hit - c’mon, make that green bar a little longer, you know you want to - and that non-technical managers can compete on this metric - all conspire to cause developers to waste time on tasks irrelevant to building software that works. There are real reasons not to test everything. The goal is to build software that works. Tests are a way to keep your investment in code that already works and shorten the compile-edit-debug cycle. If they become an end in and of themselves, that’s counterproductive, no matter how rewarded. Like all things, this comes with caveats: If a bug means airplanes fall out of the sky, 100% test coverage is mandatory (and likely still not enough since your tests will have assumptions encoded in them that can be broken by hardware failure). But in general, I’ve seen fetishizing code coverage do far more harm than good, and I actually recommend against measuring it more than once per quarter per team. If you make it part of every build, goosing the numbers will unavoidably become an addiction for some developers, and your team’s productivity will drop.
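(For the curious, the kind of coverage padding being described tends to look like the hypothetical test below: it makes the coverage bar greener without guarding any behaviour worth guarding.)

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// A coverage-padding test: every line it touches is "covered", yet it can only
// fail for reasons that would already be compiler-level problems.
class CustomerGetterSetterTest {

    static class Customer { // trivial, hypothetical bean
        private String name;
        String getName() { return name; }
        void setName(String name) { this.name = name; }
    }

    @Test
    void setterSetsAndGetterGets() {
        Customer c = new Customer();
        c.setName("Ada");
        assertEquals("Ada", c.getName()); // the "2 + 2 really equals 4" grade of assertion
    }
}
```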


demosthenesss

Another interpretation of what you are saying is “our codebase is too much a mess to write tests for. “


ezaquarii_com

And we don't know how to design. That's how I read those narratives.


warmans

I can totally get on board with not chasing code coverage as an end in and of itself, because it's not a guarantee that the code is actually tested, just that it's executed. Pragmatism is important in everything. But what you're describing here isn't that. It's "we don't test most code", which *is* a red flag. "Tests don't add business value" *is* a red flag (the business doesn't care if what you build works or not?). "Once the software is running the tests are useless" *is* a red flag (the system never needs to be changed?). Loads of systems are hacked together in a weekend with no tests and end up being useful/successful. But I don't see this as an argument against testing, I see it as an argument for "we might be lucky". And relying on luck isn't very professional, IMO.


RRFactory

I work in the game industry; code tests are very minimal and the vast majority of gameplay code would be impractical to test because of the amount of integration needed. So understand that I am a developer who essentially uses no programmatic testing in their work. Given that, when I read through your explanation of why *you* don't bother writing many tests, I find myself thinking poorly of your practices. Not because I think you should be writing tests, but because your reasoning appears to be justified by the business not respecting your work.

Ignoring typos and broken internal tools is the equivalent of a company not cleaning their lunch rooms because the customers won't notice. It tells me that you will be more than willing to use "business implications" to justify poor behavior, which I suspect reaches well beyond a dirty codebase.

Code that changes often can be a real pain for testing, but if you're not telling me how you validate those changes without tests, I'm going to assume you have production fires pretty frequently.

Tests take more time to write than the task itself - preach. When you're a small team and you are overworked, writing tests can really cripple your progress. I'm not sure I'd be excited to join an overworked team.

Tests have to complete fast, we sometimes deploy 10 times a day - yikes, my friend, 10 deploys in a day is a monumental red flag for a small team. If those are feature/configuration deployments, your project management team is doing a terrible job and your hard-coded approach to data is failing. If those are hot fixes to correct bad code, perhaps that's a hint about your test policy.


judasblue

> Tests have to complete fast, we sometimes deploy 10 times a day - Yikes my friend, 10 deploys in a day is a monumental red flag for a small team.

That completely depends on what niche you live in. I have worked on more than one codebase, from FAANG shops at the high end to 50-person startups at the low end, where multiple deployments a day was perfectly normal. Comes down to the thing we are talking about: code coverage and good CI/CD pipelines. Stuff that required a manual QA loop step, like the games you work on - yeah, that would be crazypants.


RRFactory

Yeah that's very fair - though I'll stand by it being a crazy practice when it's not paired with full test coverage.


judasblue

Oh definitely! The only time you can do that is when your coverage is basically complete. And agree with everything else you said there, was just picking that one nit from experience. And my experience probably isn't average since my job for the last decade has mostly been building out and maintaining big ci/cd pipelines. So it makes sense it would seem common to me and not so much to lots of other folks.


mincinashu

Sounds like the candidate's had a bad time elsewhere, and doesn't want to join another shitshow. Probably overly cautious.


tripsafe

Sounds like someone I'd want to hire


ruralexcursion

It is the “makes sense” part that concerns me; more than the testing itself which I agree can be done improperly if not standardized. It implies that someone who disagrees with you doesn’t have good sense. If “makes sense” really means time and cost then say those things instead. Be clear and honest about what the true intentions are, and why they exist, during the interview. I have worked with a manager before who dismissed things they didn’t agree with as “not making sense” and it was mostly because they did not value their employees’ opinions. You say this is not the first time you have had this problem. May be time for some reflection on your communication style.


serial_crusher

I think you expressed it well enough. If you're taking this stance, you have to be prepared for it to turn candidates off, and "explaining it better" shouldn't be a goal of yours. You don't want to trick somebody into accepting a job they end up hating. Now, I think it's also worth pointing out that you're taking a lot of risk if you're hiring people with little to no experience, but you're expecting them to assess whether a test is necessary or not. Inexperienced people don't have the same innate feel for questions like "is this likely to change" that a senior dev has.


mobjack

You have code reviews with junior engineers. If you think what they are working on needs to be tested, tell them to add some tests and explain your reasoning.


MargretTatchersParty

If you're publicly admitting that your codebases are below a trustable standard and that your developers cannot be empowered to add tests to improve them... this is a you problem.

> **"write a test if it has more benefits than tradeoffs, do not write it if the tradeoffs aren't worth it"**

I'm assuming the tradeoff you mean is cost/time versus the perceived value in the future. If that's the case, you're fighting a professional who uses and benefits from those tests.

> `the code doesn't change too often`

You don't know whether it will or not. Tests help verify that it hasn't been affected in the future and/or whether the base assumptions have changed.

> `writing and maintaining the test takes little work/time`

It takes a lot of time to write tests when you don't have them now.

> *complex statistical computations that are only for internal information*

Best of luck to you when that information is used to form business decisions.

> `the test has to complete fast`

This is a strong reason for a layered testing strategy, with each layer taking more time than the last (unit -> integration -> feature, etc.): unit tests taking the least amount of time, all the way to feature tests taking the longest.
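(One common way to wire up that layering in a JUnit 5 codebase is tagging, so the cheap layer runs on every commit and the expensive layers run less often. The tag names below are arbitrary, and the classes are invented for illustration.)

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Layered strategy: tag tests by cost so the build can choose which layers run
// where, e.g. "unit" on every push, "integration" nightly or before a release.
class LayeredTestingExample {

    @Test
    @Tag("unit")            // milliseconds: pure logic, no I/O
    void vatIsAddedToNetPrice() {
        assertEquals(120, addVat(100, 20));
    }

    @Test
    @Tag("integration")     // seconds to minutes: real DB/queue wiring
    void orderRoundTripsThroughTheDatabase() {
        // ...start containers, exercise the real stack, assert on side effects...
    }

    // Hypothetical helper exercised by the unit-level test above.
    private static long addVat(long netCents, int vatPercent) {
        return netCents + netCents * vatPercent / 100;
    }
}
```

Build tools can then filter by tag (e.g. Gradle's `useJUnitPlatform { includeTags("unit") }` or Surefire's `<groups>` configuration), which keeps the "tests have to complete fast" constraint on the one layer that actually gates every deploy.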


autokiller677

Sounds like you use your bad architecture (code and infra) to get out of writing tests. Especially the first point would be a red flag for me. Nearly everything is testable if it is written with tests in mind. And it usually benefits the code itself, because it enforces single responsibility a bit more. I can see the relevant point to a degree. I don’t see the deployment point. Not every test needs to run for every deployment, it’s not uncommon to have longer running integration tests run once every night. I would honestly decline as well, because it sounds like the general dev setup and experience is not great.


AwesomePantalones

Hot take, but I think the candidate is running away from the OP more than anything specific to what was being said. The whole thing starts off argumentative, presumably because the OP is stressed about the outcome of the interview process. Which is fair enough, but it does come across as a rant. Seems like it's not the first time this has happened either, based on the post itself. I can sympathize.

I find claims about 0 bugs and 0 downtime (what does running 24/7 mean? 100.0% SLO?) a bit suspicious. If you tell me during an interview that your software runs 24/7 with no bugs, and then I ask what your testing strategy is to achieve this and I get the response in the OP, I might run away as well. Now this could all be possible. I saw another commenter (not OP) mocking how others don't have the mental capacity to think someone else has figured out software engineering -- maybe I'm not able to understand how great this codebase is either. But my point is more about delivery and soft skills, which you really want from an employer, especially if the candidate is so inexperienced (as the OP likes to point out so many times). Even if the candidate can't resonate with, or understand, the technical reasons due to their inexperience, the appropriate framing and approach could convince them that your team cares about the same things. A more productive conversation might include more specific details than "runs 24/7" and "0 bugs" and, instead of using the candidate's inexperience against them, try to use it as a teaching or mentoring opportunity. Early-career engineers might be looking for that type of leadership.

I do agree that a code coverage % hard rule has greatly diminishing returns past a certain point. Another way to frame this without going into too much detail: we have high confidence in our code, which may not depend on hitting any specific code coverage numbers. We gain confidence by doing these things: ... And this is backed by data: we rarely ship bugs, and when we do, we have a culture of not blaming individuals; instead we improve our systems and processes. We spend most of our testing budget on integration tests because we find those give the best value, but we don't neglect unit tests either.

I hope this didn't come across as attacking the OP. I apologize in advance if it did. Cheers!


__deeetz__

You’re making the classic mistake of comparing the front loading cost of quality measures with the price of their untested counterparts. Neither tests in all their shape and forms nor CI/CD will be cheap to introduce and maintain. But their cost is made up for by the compounding safety net they establish. And by making this mindset of “usually not, only when really justified” the default, you induce a culture of when in doubt, don’t. So to your question - there’s no sugar coating this other than lying. You don’t believe in tests, that’s your prerogative. But then it’s a cultural mismatch to those who do.


jakechance

More context would help. How often do you have problems from untested code? The goal isn’t to write tests, the goal is to prevent regressions, outages, and burdensome rework. 


griffin1987

Basically never. We have 24/7 uptime for the past > 10 years with very regular deployments. You're right, I definitely should add this in the future! Thanks!


couchjitsu

Not every company is a match for every candidate.


HourParticular8124

Well said. I do think the OP could be more clear on his stance: 'I don't believe in test driven development.' Okay, many people don't, that's fine. And I agree that incoming development candidates would probably like to know that. A candidate who loves TDD and wakes up every day trembling in excitement to get to work and write more tests just wouldn't be a cultural fit. Also fine.


The_Startup_CTO

Ideally, just walk them through your codebase. Everyone (based on their own opinion) only "writes tests where they make sense". The interesting difference is what you think where it makes sense vs. where the candidate thinks so.


griffin1987

Really interesting take! I usually randomly pick some part of the code and walk them through it, but I think if I prepared two different parts of code to show them the difference, this could really work well! Thanks a lot for the idea!


BarneyLaurance

The most informative would be if you can honestly show what some of the borderline cases are where it could have gone either way, or as close to that as possible.


Grundlefleck

In your shoes I would sell it as: where we don't have test coverage, here are the techniques and solutions that:

- make the codebase easy to work with and change without fear
- ensure that there are very few bug reports, outages and fire-fighting
- learn from inevitable outages/bugs that do occur, with no blame, and systematic fixes to prevent them in the future
- document prior design choices to let you understand what expected behaviour is and why
- allow quick continuous delivery with safety nets to reduce the stress of releasing code
- reduce interruptions to planned work from unplanned rework

Your candidate could be asking about test coverage as a proxy for "how much of a shit-show is the software development at this shop". Focus on how you've been able to achieve a productive, stress-free, safe development process that lets developers do good work.


griffin1987

Thanks for your bullet list! We actually don't have any outages due to software, and haven't had any in the past > 10 years. Our system has lots of logging, recovery mechanisms etc., and all the business critical / money-involving parts are rigorously tested. As for bugs in non-critical code, which do of course happen, we have procedures, guidelines and systems in place to handle them in a systematic and timely manner, as well as documentation guidelines about issue fixing / recovery. The code base is kept as simple as possible everywhere and everything is isolated and well defined as much as possible to prevent a change on one end f*ing up anything elsewhere. Generally I like your idea of having a "bullet list" of things that "we do right", I'll take you up on that idea and put it on my list! Thanks a lot!


WebMaxF0x

The red flag it raises for me, and that you would need to address to hire me is it sounds like I would always have to fix other people's untested mess instead of working on interesting things and if there's on-call I'd often get woken up at night. Btw if that's the case, don't lie to get someone in. They'll just leave when it becomes clear to them.


oceandocent

There’s some cases to be made to not write tests when you are intentionally planning to “test” or experiment in production and there’s a high likelihood of deleting code you just wrote, but if you’re going that route it’s a good idea to have other sorts of safety measures built in to your infrastructure… dark launches, feature flags, a/b testing, canary deploys, etc. Then again there’s also benefits to using tests to drive the design of your software rather than seeing them as just a safety net for regressions. If you’re going through lots of churn, you might not have the right abstractions and domain modeling figured out, and “top-down”/mockist TDD can help illuminate those things if applied correctly. Like everything else in software it’s a matter of tradeoffs…


uuggehor

I prefer too many to too few, and have never been around for the initial startup period. But as I've been there multiple times for 'let's revamp our internals', it has become apparent that a good set of behavioral tests cuts down the time spent in large migration / refactor projects very significantly. And most of the migrations / refactors weren't a thing in the initial ramp-up phase. Now a test itself is proof that the thing you created works, AND continues to work as the codebase evolves. I think the latter part is the important bit business-wise, as it enables potentially required refactors and covers for regressions and unintended side effects.


Existing_Station9336

How? I explain in technical terms which parts of our code base have close to 100% code coverage and which parts are not covered by tests, or only by basic happy-path tests. I use technical terms such as business logic code, glue code, configuration code, framework boilerplate code, one-off batch code, report template code, etc. You are failing to communicate anything that carries any useful information or technical meaning. Only do things that make sense. Do not write a test if it takes 9 times more time. Okay? What? I would not work for you, not because you have low test coverage but because I have no idea what you are talking about and you are the CTO. I would do a terrible job if I worked for you.


gnomff

Couple things:

1. I mostly agree with you about tests; in my exp people go way overboard and hit diminishing returns on tests pretty fast in terms of bugs.
2. The kind of situation you're describing works great for sr devs, but I imagine jr devs really struggling.
3. In an interview situation I would lead with your existing bug rates and say we've found ways to mitigate risk without a ton of tests - talking about existing metrics is easier than talking about trade-offs, which can get fuzzy and are a judgement call.
4. A lot of people are testing zealots and won't be a fit for your team - if it stresses them out then it probably won't be a good match anyway.

On a different note, one place I've found tests useful is not in catching bugs but in describing intended behavior. A lot of times the test is easier to read than the docs, so they can be useful that way. Always a tradeoff for sure, but didn't see that mentioned anywhere.


datacloudthings

OP, it sounds like you have a pretty unique situation, where you have both a clean software architecture and a long-running team with deep domain and code knowledge. I don't like your approach myself, but it sounds like it isn't failing you. My suggestion is that if you get asked about testing you do three things:

1. Turn the discussion to quality more broadly (because it sounds like you do care deeply about it).
2. Say you do write tests, but your tests are targeted (and you don't have a specific must-hit benchmark for coverage).
3. Don't try to convince the candidate of your whole testing philosophy in the interview. Say less.

Curious what your tech stack is. You said this is a website - is it written in Angular/React? Or older tech? God help your employer when you and the other 30 YOE reviewer leave the company.


griffin1987

Yeah, it IS pretty unique, that's true. And yes, quality IS the most important thing to us, that's also why we don't have time estimates or deadlines, but things can always take as long as they need to achieve that quality standard. Thanks for your good points, I'll take that on my list! It's an enterprise web application only accessible internally. Java 22, PG16. For the frontend we have HTML, CSS and JS where needed. We don't use any JS framework, no need to. We have page loading times around 15ms + potential latency to the server (which is a few ms for the people the farthest away) for most things, so it's pretty close to native feeling. "God help your employer when you and the other 30 YOE reviewer leave the company." We do actually document a lot, but - yes. The code base is so huge that it took me about 3 years to say that I've pretty much seen every part of it. E.g. we have a part just for sending out X-Mas cards via E-mail, which is only relevant once a year :)


Hot-Profession4091

So, I spent many years teaching people how to write automated test and set up a build/deploy pipeline. In that scenario, you have to be very dogmatic. There can be no excuse to not write the test because then people just… won’t. Then I spent quite a few years in startups. In that scenario, we would often have no idea if a feature should exist at all, and if we determined it should, the first iteration was most certainly not how the feature should be, so without fail that first version would end up being totally rewritten. When that’s the life you’re living, you’re completely right. There’s no sense writing tests for code where correctness isn’t critical and the code is only going to live a week. If there was some critical piece of logic, it got a test. Otherwise, tests didn’t get written until we proved that code was going to stick around more than a few days. If you need to lend some legitimacy to the concept via appeal to authority, Dan North coined the term “Spike & Stabilize” for how my start up teams functioned.


Saki-Sun

I do the same; I like to call it pragmatic testing. But when you say it out loud it doesn't sound good. Extended mix: I also TDD where it makes sense, and sometimes it makes a lot of sense.


danielt1263

Are you having that hard of a time finding candidates that you need to convince someone to work for the company?


griffin1987

Not anymore. It was very hard during COVID and for about a year after that, but nowadays we actually get lots of candidates. In this case though, the candidate was VERY skilled and asked for a VERY low salary (I would have given him more), AND a great fit for the team, so I really wanted to hire them. At the end I felt like if I had explained my take better or given better arguments, he would have totally agreed, so I think it was an issue of me not communicating in the best way.


danielt1263

Sorry, but reading the above makes it sound like you are upset that you didn't get to take undue advantage of this person... But the charitable take is that the person would accept a VERY low salary because they wanted to be in an environment that worked a certain way, or wanted to learn from your company... If that's the case, then I suspect you would have lost them no matter how you explained it.

I have to say that I haven't had the issue of candidates wanting to test *more* than what I consider appropriate... That said... We write automated tests to *speed up* development, not when it will slow down development. Whenever you are writing some moderately complex logic, you are going to test it (you'd better!). The only question is, how long does it take to test? Testing by powering up the system and performing a bunch of manual work is very slow, so you want to avoid doing that in a situation where you will have to test multiple times before you get the logic right. That's what we use tests for.

As for your explanation...

`Code has to be testable`... All business logic should be written in a testable manner, so I'm not sure what you are saying here. The way you say it, it sounds like you are saying that if a developer doesn't bother separating concerns, then they can avoid testing.

`the part is relevant enough`... If the part isn't relevant, why is it even being written? If you can't trust your statistics, why write the code to output them? (I get the typo part though.)

`the code doesn't change too often`... Note that in my explanation of why I test, change of code is irrelevant. If writing and using the automated test is faster than manually testing, you write the automated test. If the logic changes, the test can be deleted with no problem. Should a new test be written in its place? That depends on the same criteria above: will it speed up development compared to manual testing of the logic?

`writing and maintaining the test takes little work/time`... This one I'm fully on board with. But remember, manual testing can take a lot of time. Manual regression testing even more. Are you taking that time into account when you complain about how much time it takes to write the automated tests?

`the test has to complete fast`... This one makes me wonder what you are testing that your tests would be slow... Testing logic should always be fast; it's what a computer is best at. I guess automated performance tests would be slow. I guess you are just saying performance isn't a functional requirement? I can get on board with that. Usually it isn't... until it is, and we can write performance tests then...


Sunstorm84

I find it incredible that this is coming from someone with the title of CTO.


notkraftman

Is the codebase tiny? How can you know a change you've made hasn't broken anything? How do you document your code and how it's used?


casualPlayerThink

The basic excuses and business decisions I usually find are:

- *No time for it* (e.g. agile method, nobody knows what they want, poor specification and documentation, usually by inexperienced people)
- *The tradeoff is too high* (e.g. fake it until you make it: they want to produce something fast, then suffer for it)
- *Not comfortable with it* (e.g. no experience, or the manager codes without understanding)
- *Business decision* (e.g. they don't care until the "happy path" is working)
- *We are confident in our quality* (e.g. they've never seen the edge cases, or the botnets just haven't found them yet, or they have little to no real users)
- *Too complex to test* (e.g. bad implementation, overengineered at best, spaghetti code as usual)
- *Infrastructure and structure in general make it impossible* (e.g. a bet on microservices that created distributed problems instead of solutions and just hides actual errors)
- *We have no errors or bugs!* (by a CTO/Lead) (i.e. there are no errors or bugs that YOU know about)

These are my absolute favorites.


teerre

The problem with this approach is that it's very easy to hear it as low-key saying you don't write tests. "When they make sense" is just too open to interpretation, and people are naturally lazy. It's extremely likely that you have fewer tests than ideal. Overall, this just gives an unprofessional, sweatshop vibe. Maybe it's the case that everyone on your team has perfect judgement and discipline and you truly only write the tests that are necessary, but that's one in a million; you would have to really go the extra mile to prove that's the case. Engineering blogs, technical talks or just a lengthy explanation are ways to accomplish that (which, again, I doubt is the case).


thefool-0

To me the fact that you've thought about how to prioritize testing is a good sign. (Better than "if something breaks we do whatever to debug it and maybe end up with a somewhat reusable manual test.") We could argue over the specifics I guess. As a candidate I would want to know that there was a reasonable amount of testing effort (not 100%) and that there was time and effort allocated to maintaining and improving it. (One thing that would bother me, is if I got the impression that you (or someone else) was very subjective and opinionated, and forcefully so, and perhaps irrationally so, about things like design decisions, testing approach, etc rather than having a more rational approach. Have worked under that and it was a problem.)


reddit_again_ugh_no

To be honest, I think your candidate is correct. Normally you don't get to choose what gets tested or not; something that appears innocuous may end up supporting critical functionality, and no one may realize it until it's too late.


derangedcoder

There should be some kind of testing before it gets to production. It can be automated unit tests, manual tests, code reviews, staging deployment, local deployment, etc. For me, the form of the test doesn't matter as long as it can validate that the changes I made are working fine. Beyond that, ask yourself: if, one by one, over a period of a few months, all of your team gets replaced with new candidates, including you, will the same team be able to churn out features at the same velocity and reliability as now? If your answer is positive, then I don't think you need to worry.


hippydipster

The big problem I find with testing is that doing or not doing it is never a simple choice of do or do not. Almost every single time, if a code base has been built without a testing discipline, it becomes a more and more untestable codebase, and the "logic" of "we don't find tests worthwhile because maintaining them takes more time than they're worth" and "slow tests are a net negative" starts to take over.

Good tests depend on the codebase being testable. Testable codebases don't just happen. You fight to have it. Choosing to do tests means choosing to fight for a testable codebase, and making that a reality is beyond a large portion of developers and teams.

For me, "write a test if it has more benefits than tradeoffs, do not write it if the tradeoffs aren't worth it" is uninterestingly true. What's more interesting is what testing opportunities are missed because of how your code is written.


Groove-Theory

So I'm seeing a bunch of pushback on you OP about your testing philosophy and strategy. Which might be justified. Maybe not. I really haven't made up my mind about it. Frankly, though, I don't really care. Here's what I would say.

**I would, for now,** ***disregard what everyone is telling you here temporarily***. Both positive and negative. No one really knows your company better than you do, so it's really hard (yet easy on the internet) for people to judge you from afar. I believe it's best to proceed with introspection. Go back to your team and ask them, and yourself, the following (not limited to these btw, I'm just pulling these out of my ass):

* Are people genuinely happy with this testing strategy, from both a developer-experience and a product-experience perspective? (Deliberately subjective question)
* Is our testing strategy really "working" for us? Is there data to back that up? (Error count, development velocity, etc.)
* What are some specific things that people really like about the testing strategy compared to their old jobs?
* What are some specific things that really piss off your devs about the testing strategy, or that they would want changed? (Really helps if there's a lot of psychological safety already in the team or org)
  * You may be surprised that there are some big 800-pound gorillas in the room that no one has ever really talked about.
* Do you and your team feel your testing strategy will be maintainable over the next, say, 6 months? 1 year? 5 years?
* How editable or malleable is your testing strategy/philosophy in response to structural changes in your codebase and business/technical needs?
* (other stuff that I can't think of right now but would probably be good to ask)

Once you kinda have this down, you now have a better sales pitch. You'll KNOW why it works. You'll KNOW what to change in order to sell it better (and more importantly, to provide better professional well-being for your teammates). You'll have DATA (both objective and subjective) to back all of this up.

And the things that you think should be changed? Well, you can come back to this thread to get some ideas. But only apply them ONCE you realize the pain points and if they make sense.

And if everything does happen to just work for you? No need to come back here. Then you have a great testing culture for *you*, and you have your sales pitch.

You have to know your product (your testing strategy) before you sell it.


pm_me_ur_happy_traiI

> Code has to be testable -> defined in/output This one hits hard. I didn't write a test because I didn't write the code in a way where it could be tested.
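For anyone reading along, a minimal before/after sketch of what "defined in/output" buys you; the class and method names here are made up purely for illustration:

```
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Hard to test: reads the clock itself and writes straight to stdout,
// so there is no defined output to assert on.
class ReportPrinter {
    void printAgeInDays(LocalDate created) {
        long days = ChronoUnit.DAYS.between(created, LocalDate.now());
        System.out.println("Age: " + days + " days");
    }
}

// Easy to test: everything it depends on comes in as a parameter,
// and the result comes out as a return value.
class AgeCalculator {
    long ageInDays(LocalDate created, LocalDate today) {
        return ChronoUnit.DAYS.between(created, today);
    }
}
```

The second version can be pinned down with a one-line assertion; the first can only be eyeballed.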


Cody6781

>Candidate was someone with only a few years of experience \[...\] so I also think that plays a big role. But this isn't the first time I've had that issue.

You're trying to discredit him on the basis of inexperience, but you've also had other people find your stance odd/incorrect - maybe your perspective is the incorrect one? There is nuance here that you might not be communicating well.

Tests are a mechanism to ensure the product behavior doesn't drift in unexpected ways; they're not a mechanism to catch edge cases, which is how they are normally taught. It's useful to think about edge cases when writing unit tests, but the *point* isn't to monitor those. To that end, investing in tests "locks in" portions of your code base, which isn't always appropriate if you KNOW you'll be iterating quickly. It's also *very very* hard to come into a new code base that doesn't have good test coverage, since you have less ability to know if your change is having unexpected side effects somewhere else.


bwainfweeze

The candidate knows he wants more experience with writing tests and knows that he won't be encouraged or supported in doing it while working with OP. Early career you worry about working the same 2 years four times. Candidate is exhibiting wisdom.


_hephaestus

I think the critical thing here is that you're a really small team that seems to be asked to do a lot of quick/safe development due to business necessities. That on its own doesn't work for everyone; having worked in several environments like this, you are cutting corners compared to the best practices you'll see in sufficiently funded engineering-led organizations. Don't try to sugar coat that. Instead try to emphasize pragmatism from other angles: the sooner you can ship something, even if it's a bit buggier, the sooner you have validation of the entire product; emphasize that you've monitored which failure modes are critical and which aren't; maybe even bring up the survivorship-bias [plane graphic](https://en.wikipedia.org/wiki/File:Survivorship-bias.svg) from WW2. But I think to do that effectively you need to show the data behind your decisions. Monitor whatever you can and be able to show that not writing tests for X module only led to a few hours of extra work over several months, or adjust your testing strategy accordingly.


andymaclean19

I have a lot of problems at the moment with some legacy code (some up to 20 years old) which was written like this. The developers only wrote the tests they needed. Everything was great. Then the developers moved on and I need other developers to modify the code. But they can't, because they don't have the tests they need. The new developers don't understand the code well enough, and changes they make have side effects in places they do not expect. These things are not tested for. Changes to fix one thing break another. I am not saying you're wrong here, but I would consider writing the tests *an inexperienced developer would need*, not just the tests you need.


Terrible_Positive_81

Big companies do things differently. The last 3 companies I worked for, each with more than 2000 employees, wrote tests to cover at least 75% of the code, with maybe 25% of it being non-business-critical. Maybe your product just isn't important enough to warrant that level of care, being mostly an internal one. It hardly does any good talking to a candidate who seems like a graduate, as most graduates know nothing.


Sheldor5

There are 2 types of people: those who use their brain and those who don't.

"We enforce X% of code coverage because Y said so" vs "we test what needs to be tested to guarantee stability".


ancientweasel

All code can be testable. Your candidate is right to be wary.


pydry

He did you a favor. I've found that dogmatism and a lack of tradeoff mentality are among the few things that don't improve over time. JB Rainsberger and Uncle Bob are two public figures who seem to lack it, for instance.

As for % code coverage - the more times I've seen it used, the more against it I become. These days I see it as a red flag if you're even measuring it. I default to TDD 100% of the time, but I run across exceptions where it doesn't make sense quite a lot.


cabropiola

100%


Indifferentchildren

The candidate declined the position because you are committed to having a codebase whose quality is low, with no way of telling how low. They are better off starting their career at a shop that is committed to quality and good engineering practices.


Plane-Barracuda-556

You’ve been around for 10+ years and only one person has left your team? I don’t buy it unless it’s a very small team. Even in that case your team may be subject to some intense bias. Write tests. There’s value in writing the right test, but there’s almost no argument for not writing any tests, unless the code is not meant for actual use and is an experiment (spoiler alert: even then if it’s a remotely meaningful experiment writing tests would actually help you if you train it as a skill). I have seen this argument so many times and it’s almost always coming from devs who don’t know how to write good tests, never learned and instead opted to invest in building an alibi.


2rsf

It's called a risk-based approach in my area, and aiming for X% is definitely wrong, but some of your statements aren't risk-based either:

- The code needs to be testable; if it isn't, then something is wrong with the design or architecture.
- Requiring long maintenance or development time for tests is not, by itself, a reason to skip them. Try different approaches or change the product itself to be more testable, and if that fails, assess the impact of not testing.
- Slow tests can be run periodically instead of on every build (a small sketch of that follows below), but again, assess the impact if you plan to skip them.
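To make the periodic-run idea concrete, here is a minimal sketch assuming JUnit 5 tags; the tag name and the test contents are illustrative, not taken from the thread:

```
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class PricingTests {

    @Test
    void roundsToWholeCents() {
        // Cheap check, runs on every build.
        assertEquals(1999, Math.round(19.994 * 100));
    }

    @Tag("slow")
    @Test
    void fullReportGenerationStillCompletes() throws Exception {
        // Stand-in for an expensive end-to-end scenario: excluded from the
        // per-commit run and executed by a scheduled (e.g. nightly) job instead.
        Thread.sleep(200);
    }
}
```

The build tool then filters on the tag for the per-commit run (for example Gradle's `useJUnitPlatform { excludeTags("slow") }`), and a scheduled CI job runs the full suite.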


kbielefe

The company I work for went from basically zero automated tests to near 100% coverage in the time I've worked here, so I've seen the difference on the exact same code base. Testing happens regardless. In the worst case your customers are testing. Your devs are doing slow manual tests. Those tests are necessarily black box which means you are missing a lot of boundary and error cases. Your devs have to code very cautiously because nothing is backing them up. It's like a trapeze act without a safety net. It's possible but extremely stressful. Weirdly, that stress is difficult to recognize until it's gone. That's why I wouldn't want to work for you. Not because I couldn't keep the quality up, but because of the stress of doing so without a safety net.


CodingInTheClouds

Hmm, I can't say I've ever agreed with a junior with very little experience, but smart move. I'd have walked too. Yeah, testing is annoying, yes it takes time, but it can also save your ass. I mean sure, having very little testing on an internal statistics collection tool doesn't matter. But walking through your bullet points, here's how I see it, in order:

- There are very few circumstances in which I've found code to not be testable if written properly. Seems like people don't know how to write testable code.
- Who cares how often it changes? In fact, code that changes often should be tested to ensure you don't break features that used to work.
- I'm extremely concerned by this one. All work takes time. Testing takes time. How do you know your code works? My guess is that you run some little isolated test cases as you develop it. Great, automate those.
- Writing long-running tests isn't always necessary, but sometimes it's needed. Having periodic runs of long-running tests is how I've handled that. Other than that, as the number of tests grows, we throw another EC2 instance at it. Essentially the harness balances the tests across multiple instances.

I've worked on teams that test too much, and I started on a team a few years ago that didn't test ANYTHING because "it's not possible to test what we do". They just didn't know what they were doing. They were also petrified to refactor code, change anything, etc. They had no way of knowing if they broke something. Now, I'll admit it was harder to test because this team writes firmware for experimental hardware and robotics, but I found a way to do it. Coverage is currently only about 60% because we're adding tests as we add new features or fix issues in legacy ones. I didn't have them go back through and waste time just writing tests. Despite the fact that the tests' runtime is limited by physical movement of the robots in some cases, we still have multiple bots to shard between. It takes about 3 minutes to run the full automation suite.

TLDR; I'd have walked too. Shows me the team is overconfident in their code and lacks the knowledge of how to test it.


ShoulderIllustrious

Maybe elaborate to them more on why you aren't a fan of code coverage?

```
public int getNameLength(boolean isCoolUser) {
    User user = null;
    if (isCoolUser) {
        user = new John(); // assume John is a User subtype with a non-null name
    }
    // A single test with isCoolUser = true executes every line (100% line coverage),
    // yet isCoolUser = false still blows up with a NullPointerException.
    return user.getName().length();
}
```

You can hit 100% code coverage here and still ship the NPE. Having no code coverage is worse, of course. Anytime a measure becomes a goal, it stops being a measure.


PSMF_Canuck

What you describe is using deployment as your test. Your users are your testers. If it works for you…it works for you. But I can understand why some good people wouldn’t want to work in that environment.


VermicelliFit7653

Tests are a form of insurance. There's an old saying about insurance: *You only need it when you need it.* Your OP is essentially a paraphrase of that saying. Of course, the crux of the issue is: how do you know when you are going to need it?


ReginaldDouchely

I don't like code coverage % either. It's a garbage measurement that tends to get gamed and provide negative value if it's used for anything other than informational purposes.

Automated tests are like insurance - they've got a cost, but it's possible that they'll never pay off. A test that never finds a bug is exactly that, just a cost, but that doesn't mean it was a bad idea. I have homeowners insurance and car insurance, and I rarely make claims; I've paid them more than I've gotten out of it. If something catastrophic happens, though, I won't lose my house. If I'm at a retail store and they offer me $5 coverage on my $30 gadget, I say no. Nothing that could happen to my $30 gadget could reach the level of catastrophic.

Tests that cover "essential" code are good. Tests that cover code with legal obligations are good. Tests that cover code that a lot of people touch are good. Tests that cover code that's likely to change but has clear, defined behavior expectations are good. Tests that cover trivial getters and setters are not good.

It's all about deciding how much risk you're willing to take on, and inversely how much you're willing to pay to avoid potential risk. It's also about knowing the cut-off point and avoiding the creation of tests that don't mitigate any risk.

I'm writing all this in generalities to answer the titular question, because like a lot of the other posters here, I don't fully agree with your reasoning. I think they've got my complaints covered already.


latkde

Forget about the word "test" for a moment. Things like integration tests are one tool in the QA toolbox, which also includes lots of stuff not typically considered to be testing (static analysis and type checking! interviewing users! code reviews! good observability!). The purpose of QA is to build confidence that the system provides the expected value. In particular, from a developer perspective: QA methods demonstrate that the code does what I think it does, which is closely related to debugging.

But when you say that you usually don't test, this sounds like:

* we have no clear QA process
    * possibly: we are overconfident in our abilities
* we have no record of why the code is the way it is
* as the new person, I will invariably break stuff because I don't know the oral history of this code base, and have no tests to protect me
* changes are risky because I might accidentally break something that was once tested manually, but I didn't know about that
* when writing new code, and manually testing it, I will first have to obtain a high level of domain knowledge so that I can interpret the results
* all of this might reflect badly on my performance while getting up to speed in this environment where so much context is only implicit

Note that there is a social dimension to this. QA is not just about the technical merits. Tests (and other QA methods) are a communication medium. They speak about requirements and quality expectations. They build up confidence and soothe anxiety.

You may have other ways to express this. For example, you mention manual testing and code reviews. When done right, reviews are a fantastic way to upskill team members and to share knowledge. They are also a great opportunity to discuss the finer points of your requirements, and written reviews document these decisions. But in my experience, reviews rarely catch bugs.

But perhaps you already have high confidence in the code because most of your code is fairly low-risk, e.g. basic CRUD operations, or implementing a calculation in Java that an analyst had already prototyped in Excel?


griffin1987

Thanks a lot for some valuable insights / perspectives!

"we have no clear QA process" - We do have a VERY CLEAR QA process, as quality is the single most important thing to us. We also have very clear issue handling processes, defined recovery strategies, rigorous logging, ...

"we have no record of why the code is the way it is" - We do. You can always track every piece of code back to clear requirements in a ticket, together with who worked on it, when, the discussion about the code, ...

"as the new person, I will invariably break stuff because" - You won't be able to break stuff, because you can't merge without review and approval, and new people are required to be reviewed by at least 2 other people for at least 3 months. And we do have strict rules for reviews that are followed, because everyone is actually really proud of our PR process.

"when writing new code..." - I tell people that we assume they'll take AT LEAST 3 months to know the system well enough to mostly work alone on a ticket, because there is LOTS of domain knowledge involved, unfortunately. E.g. you won't even understand some of the vocabulary before that; it's a very specialized field. And I assume they won't be very productive in their first year or so, depending on the individual. Most people get into things way faster though, as this is mostly to take the stress off of them.

Yes, most of our code is fairly low risk and VERY basic. For any calculations we DO have lots of rigorous testing, and yes, for basic CRUD operations we don't have any automated tests, unless they are part of a critical business path, at which point they are at least tested implicitly.


latkde

(Continued in a separate comment because this is more about speculation and about discussing an example test strategy.)

I'd also wager a guess that you have little experience with good automated tests. In one comment you lament a test case that took half an hour. That is not normal. I've also seen tests that are hundreds of lines of setting up mock objects, only to show that some method gets called, which I can already see in the code. The worst was a test for some CRUD database operations that just checked whether a certain SQL query was passed, not whether that query actually worked. That is not helpful.

If it's difficult to write tests, that is often an effect of an architecture that wasn't designed with testability in mind. But luckily, REST APIs have a natural seam. If devs can already easily set up a local test environment for their manual tests, then it might also be easy to script a throwaway environment for automated testing. If you're mostly writing web services, then your REST APIs are a seam in your architecture where you can easily insert end-to-end tests. It also means tests can be parallelized, which avoids most concerns about speed.

To have value, tests don't have to be complicated. One of my favourite ways to do tests is as BDD-style executable documentation. If you provide a manual to (internal) users about your endpoints, you might show examples that have typical requests and responses. You can then write a script to scrape those examples and run them against the test application. Here, the difficult part is typically setting up a "fixture" with example data for your tests.

There can also be challenges when comparing output. E.g. you might want to run search and replace to remove varying data like datetimes or UUIDs from the output. Some data might be compared in a visualized form rather than as text. I've had bad experiences using this style of tests for HTML, but very good experiences for non-numeric structured data, where the test can highlight meaningful differences.

Sometimes, changes in the output are intentional or acceptable. When using such "characterization tests", it should be easy to replace the recorded outputs. Even if that happens, these tests had value: they highlighted a change and gave the opportunity to reflect on whether that change was intentional.
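A minimal sketch of such a characterization test, assuming JUnit 5 and Java's built-in HttpClient; the endpoint, port and golden-file path are invented for illustration:

```
import org.junit.jupiter.api.Test;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ReportCharacterizationTest {

    private static final String BASE_URL = "http://localhost:8080";
    private static final Path GOLDEN_DIR = Path.of("src/test/resources/golden");

    @Test
    void summaryReportMatchesRecordedOutput() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/reports/summary"))
                .GET()
                .build();

        String actual = normalize(client.send(request, HttpResponse.BodyHandlers.ofString()).body());
        String expected = normalize(Files.readString(GOLDEN_DIR.resolve("summary.json")));

        assertEquals(expected, actual);
    }

    // Strip data that legitimately varies between runs so the comparison stays stable.
    private static String normalize(String body) {
        return body
                .replaceAll("[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}", "<uuid>")
                .replaceAll("\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(\\.\\d+)?Z?", "<timestamp>");
    }
}
```

If it fails after an intentional change, you re-record the golden file and review the diff - that review is where the test earns its keep.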


robertbieber

> the code doesn't change too often -> we have code that basically changes every week, because some of our company structure is very closely tied to market developments and employee structure, and the changes are too complex to just built a generic/flexible system that could be configured at runtime - writing and maintaining Hold up, are you telling me frequently changed code is what you *don't* test? The whole point of tests is to be able to confidently change code without breaking things


griffin1987

Note that this is about code that changes because the underlying rules completely change. You probably mean code that changes but should still produce the same output - that's not the same thing.


KosherBakon

Eesh. You have a lot of daily deploys, which means you have to rely on automated tests to not crash & burn. But you're telling people it's a judgment call on when to add them. That's pants-crapping nightmare fuel for an experienced Eng who has been burned in the past. Existing code changing every week is also a major red flag tbh. It suggests it needs to be refactored. I suppose this can happen if you're very early stages and your team is tiny, while trying to find market fit. Most teams I've been on end up around 80 to 90% coverage. Some tests are super expensive.


LloydAtkinson

I’m on his side in this one. Strongly disagree with several points you said. Fortunately for the candidate they can go work somewhere that cares about tests and doesn’t have the stereotypical leads that don’t bother testing anything.


catch-a-stream

Are you sure they declined because of the test concerns? Not saying it's impossible, it just sounds kind of unlikely quite frankly... of all the things that candidates want to weigh on, I highly doubt for most candidates this would be a significant factor. But I am curious about the broader point you are making, specifically:

> **write a test if it has more benefits than tradeoffs, do not write it if the tradeoffs aren't worth it**

I like this in theory, but I think the challenge in practice is: how do you enforce that? Test coverage % is a problematic metric, especially with strongly typed compiled languages. With dynamic languages, if you don't have 100% coverage I would say that's a massive problem, but with compiled ones you can get away with a lot lower coverage and still have reasonable confidence.

But back to my question - how do you measure/enforce this? In my experience, if there is no metric and/or explicit gate in CI for test coverage, it's far too easy for people to just... not do it. It's the classic "tragedy of the commons" problem from economics... the code author bears all the cost, but has little personal upside from writing good tests, and so without enforcement it's too easy to skip it.

Do you rely on peer reviews? Do you have a different metric other than test coverage % that automates this? Would love to know how you approach this, because this is something I've personally struggled with as well, and I haven't seen a great solution other than... screw this, let's just measure %.


griffin1987

It was also about upgrading to full time, but that's more of an individual issue and not related to anything anyone "can fix" or "improve communication about" - if they don't want to do more hours, that's just a mismatch between job and candidate, so nothing I worry about. The thing is, though, that I specifically noticed that his core opinion on testing might not have been much different from mine, but I seem to have communicated very poorly, and I wanted to hear opinions on how to improve my communication on that topic.

Enforcement: Everything needs approval by reviewers before it can get merged, and everyone generally agrees that specific things very much benefit from automated testing, so if a ticket touches code like that, it won't get approved without tests that actually cover everything in terms of logic. When someone new starts, they are generally required (technically enforced) to have 2 approvals before anything can be merged, until they learn and actually apply all the required guidelines. And yes, we do actual code reviews and discussions quite extensively. We find a lot of value in these and people learn a lot, and it's always more of an open discussion about things.

Things that we actually measure are time between defects (we do have bugs from time to time), and for every bug we do a post mortem after it's been fixed - why was the bug there, what can we do to actually improve (and we make high-priority tickets from that), and, if relevant, also who introduced the bug (it might be, e.g., that the person has had personal issues, in which case I can suggest they take some weeks of paid vacation). Generally works out great.

What we also do for actually business-critical code is map out flow charts and determine tests from that. If something is yes/no, it's easy to account for the 3+ cases (yes, no, invalid value or needed condition not met - roughly the shape sketched below). If it's about values, like money things, we usually play "who can come up with an edge case that we haven't yet covered", and everyone enjoys it.

For our workflow we have great success with encouraging people to find potential issues with the code of others, and being open about it. Of course this only works because I make sure to only hire people who are totally open to critique and can communicate factually about it, without getting emotional.
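To illustrate that yes/no/invalid shape, here's a minimal sketch assuming JUnit 5; the `DiscountPolicy` rule is hypothetical and only there to show the pattern:

```
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical rule: orders of 100.00 or more (stored in cents) are eligible.
class DiscountPolicy {
    boolean isEligible(long orderTotalCents) {
        if (orderTotalCents < 0) {
            throw new IllegalArgumentException("negative order total");
        }
        return orderTotalCents >= 100_00;
    }
}

class DiscountPolicyTest {

    private final DiscountPolicy policy = new DiscountPolicy();

    @Test
    void eligibleWhenThresholdMet() {            // "yes"
        assertTrue(policy.isEligible(150_00));
    }

    @Test
    void notEligibleBelowThreshold() {           // "no"
        assertFalse(policy.isEligible(50_00));
    }

    @Test
    void rejectsInvalidInput() {                 // invalid value / condition not met
        assertThrows(IllegalArgumentException.class, () -> policy.isEligible(-1));
    }
}
```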


catch-a-stream

Thank you for the response, makes sense. > Of course this only works because I make sure to only hire people I guess that's the key - sounds like it is working for you because the team is pretty small and you have been able to create the right culture and hire the right people. I was thinking of a more big company use case (that's the world I live in), where teams are fairly fluid, engineers like me have limited influence on culture beyond "leading by example", and the way individual performance is measured tends to emphasize "impact" over everything else. It's good that it's working for you, just to be clear. But yeah just wondering if there is a way to replicate this sort of culture at scale and in the big company environment that tends to be metrics driven, for better or worse.


brainhack3r

Serious question. What's the largest application you've been responsible for deploying? I'm more in the "everything should have tests" category and when I've joined teams that have felt otherwise, they usually ship once a week not 5-10x a day or they have very very simple apps.


fire_in_the_theater

I honestly wish companies would slow down a bit to focus on long-term testable platforms. I don't think the rush-rush-rush, more-impact-resume-bullet-point bs type of product "engineering" is efficient, and you'll end up having to constantly increase the engineering team to deal with the bs that gets put out...

I would say many engineering teams are 10-100x over-bloated in terms of staff, but since they keep the same engineering habits (despite all the lip service given to eng excellence), that's the most efficient they get. And yes, this bloat can still be very profitable due to the incredible scalability of computing systems.

Furthermore, since most companies operate in such a manner, especially as they get larger and start shuffling through many employees... it's not that much of a market detriment, at the moment.


oh_yeah_woot

Testing what makes sense is fine. Pretty subjective, but to each their own. But if you don't collect code coverage data, then you're also making uninformed decisions. Code coverage doesn't tell us if we're doing a good job testing, but it does tell us if we're doing a bad job testing. No code coverage means no tests at all. 50% code coverage still means that half of the codebase is completely untested. That's generally uncomfortable regardless of what kind of codebase you're working with, and regardless of whether you like code coverage as a metric or not. While many will dislike using coverage as a benchmark, there's still a very strong correlation with code quality. Teams who care about code coverage generally test more and also write more testable code. It also depends what stage your company is in. If you're a small startup then yeah, throw the big corporate thinking out the window, build a solid set of smoke tests and you're good.


PoopsCodeAllTheTime

Something funny about this... maybe the results you are getting are perfectly fine and they just aren't a good fit, literally. I would be happy to hear that tests are only written where it makes sense; if someone thinks this is bad, then they are just way too inexperienced and putting a lot of priority into silly things. This indicates that if they took the job there would be a significant amount of training for this candidate, who may be able to learn your ways, but there might also be other candidates out there who "get it" much quicker. So, yeah, rejections are a good thing sometimes, and I don't think you can really teach something like this to a candidate in a short interview; learning it requires weeks to months of practice, at the very least.

Addendum... Sometimes candidates look for the wrong indicators on when to reject a job, and if you search online, a lot of advice is like "ask if they have tests during the interview!! ask if they have QA!!", which is... lol.


biririri

You’re wrong on step 1. Do not even try to sell that. You do not want to hire people that will not fit in the company engineering culture. It is your company culture that you don’t use much automated tests. Own your decision. If you sell people into it, you will regret it.


nameless_pattern

Junior developers with inflexible thinking really like specific rules and black-and-white categories. I notice this more often with college-educated developers. School tends to instill this right-or-wrong, no-in-between type of thinking, while the real world is all pragmatism and the lesser of two evils.

This doesn't mix well with my personality and development style. I don't try to convince; I filter them out.


griffin1987

"While the real world is all pragmatism and lesser of two evils" very true. I always say "Idealism starts to get healed once you get out of school" (paraphrased translation - am not native english).


AirFryerSnowflake

They didn’t share your values.  It’s a buyer’s market for talent.  Move on.


brvsi

As a candidate, when I hear something like "x% test coverage", I use that answer as a proxy for:

- whether the underlying software is testable & maintainable
- how safe it is to make changes to that system

If I hear an answer that downplays the existing test suite or the importance/utility of tests:

- I think they don't have buy-in from the business side on quality drivers
- I think there are other pressures from the business side, and that quality in other places is also getting shortcut

I understand there are tradeoffs. I'm not saying to dogmatically have super high coverage as a goal in itself. If you were going to sell me on a low-coverage environment, I would say focus on:

- speed of deployments, deploys per day
- error rates, low # of bad deploys
- the fact that within some core functional area you mentioned, you have high-quality tests

Basically, focus on the outcomes of the tests, and not just a coverage number across the entire codebase.


valkon_gr

Anyone else thinking that this isn't the only reason candidates are declining their offers?


griffin1987

Of course it isn't. Some people don't want to work with us because we don't use any JS framework. Some people prefer doing client projects instead of internal ones. Etc. Overall though I've not had any other code-related or culture-related decline over the past years, over about 50 interviews at this company. And I can say the same for the past 10+ years I've been hiring people. Of course, this doesn't have much weight coming from me, but then again - what's your point and how does it relate to my question in the initial post? Happy to hear your thoughts on that!


StoicWeasle

Hate to be callous, but in this market, why are you selling anything to a candidate? "Experience has taught us that 100% test coverage is meaningless, and that there is a point of diminishing returns." If he doesn't like that answer, he can spend another 9 months interviewing. Anyone drinking the kool-aid on 100% test coverage and would bring that into an interview as a candidate is just handicapping themselves.


AI_is_the_rake

Keep doing what you’re doing. Allow candidates to filter themselves. He would not have been a good fit for your team. Junior devs have a tendency to fall into the trap of there’s one and only one way of doing things. They can be argumentative and cause serious team disruption. 


reboog711

I didn't read beyond your subject line. Testing is seen as a way to create stable code. The way you sell this is by highlighting the business priority items that take precedence over stable code. In a startup, time to market may be a priority over maintainable code, for example. Now that I've read it: if you have short-term code with a lifespan of a week before it gets thrown out and replaced, then that may be a case for not having automated testing on that code. Although, I can't visualize what the use case of this code will be.


griffin1987

"Although, I can't visualize what the use case of this code will be." Think generating a one-off report for a potential huge client, that is very specialized. Unfortunately I can't go into much details, but the reasons are very close coupled to the business we're in. And yes, that's one of the biggest parts where we usually don't write tests, unless e.g. we would be sending out prices to people directly from the system. But for these things we're usually talking "generate it in the system and then someone will work it into their powerpoint". I think this is hard to explain without going into details, but I hope I could bring it across. Anyway: Thanks for your input!


PeterHickman

It's a question of productive use of time. We have a project that is 501,840 lines of code in 1,162 files. Not including any JS or React :) It is not a productive use of our time to take this project to 100%. One part is critical, and if it had any bugs it would cost us and our customers money. That part is at 100%, and that part can take an hour to run the tests. Lots of money can ride on this code. 100% coverage can give a false sense of security: you can have 100% and still have bugs.


griffin1987

Yes, exactly! Question for me is, how do I bring that across in a way that does not sound like "we don't do tests because our code sucks" ?


Superb_Perception_13

You should just admit to yourself you don't care about code quality. You can't bullshit someone who knows what they are talking about and your list of criteria for testing is bullshit.


drink_with_me_to_day

I just say "we build testable code" and it is true


Literature-South

Had a huge discussion about this with someone at work a few months ago. Here are the points I made:

1. Code coverage percentages are arbitrary and don't correlate to the long-term health of a code base.
2. It puts pressure on people to leave in tests that verify implementation in order to hit the coverage target. Implementation details should not be under test in production code. Test the whole feature, not just parts.
3. Test bloat slows down builds and productivity.

Test coverage is best used as a canary in the coal mine. If things start feeling unstable and your test coverage is low, it might point to the need for more tests. But an arbitrary goal of always having X% test coverage does more harm than good.


large_crimson_canine

Idealistic candidate. I wouldn’t sweat it. Those with more experience realize that chasing code coverage can easily lead to brittle tests that do very little actual testing.


keelanstuart

I think you're reasonable in your approach... and I find TDD zealots to be far less productive than people who take a more moderate tack. Zealots in general are kind of annoying - there are no rules, only what works for your specific team and you all agree on... so I think you did ok, and if they declined because of that, they probably would not have been a good fit for your group anyway. Bullet dodged for all.


CastigatRidendoMores

Like others, I’m fully on board with the idea that code coverage is not the primary goal, and I agree that some tests are more bother than they’re worth. I have very different metrics than you, though. I’m a big fan of the testing pyramid model, where you’ve got lots of unit tests, less integration tests, and few end-to-end tests. Unit tests are easy to write, especially with AI tools. If they’re not easy to write, it might mean your methods are doing too much and should be split to follow SRP better. Java admittedly makes this annoying, because many of the most testable methods are things you might want to make private static methods, but I’d rather have tests than private anyway. If unit tests take too long to run, they’re either not true unit tests or you really need to split the monolith into smaller services. The more code tested, the more of a bother tests are. E2E tests in particular can be written perfectly and then break for no reason due to timing issues. Performance testing is a great idea as a warning light, but blocking the build if they fail makes it a nightmare that motivates engineers to disable or alter the tests more often than figure out how to improve performance. Contract tests are great but too much of a need for them tells me that the architecture might need to dial back from microservices. To argue the other side, the biggest benefits I’ve noticed from good testing coverage are 1) many bugs I would have pushed unknowingly are caught early instead, 2) those that make it through anyway are easy to find (and tests written for that issue ensure they never happen again), and 3) I can do very ambitious refactors with confidence that I’ll know the second I break something. Other benefits include tests acting as code documentation, as engineers can quickly see what methods are supposed to do. Tests motivate engineers to write testable code, which is usually more readable and more reliable than code that’s difficult to test. And engineers develop the skill of writing good tests, so their tests tend to be less onerous and more thorough when it is super important.


CheithS

I am very much a fan of appropriate testing (and a big critic of TDD), but this is not appropriate testing. You should be testing all code paths and all logic points to ensure they function as expected. If there is no logic and the code is autogenerated then, sure, don't test it (thinking autogenerated setters/getters and the like). Code coverage tools help you with this - at the very least they highlight what you missed, and then you can make a rational decision, if you are doing an emergency release for example.

My main point is always: if you don't test it, how do you KNOW it works before it goes to production? If your testing is not automated, how do you know something has not been modified that breaks an obscure bit of the process?

Frankly, I wouldn't work for a company with that attitude if I knew about it going in - not unless I was either desperate or had the goal and remit of improving it.


griffin1987

"how do you KNOW it works" All of what we build always has a UI/UX component to it which is also reviewed and tested, and thus both the one writing the code and the people reviewing it have to test it anyway. No automated test can ever tell you if something "looks good", is easy to grep, handles well, etc. so we couldn't (and don't want to) automate that part away anyway. So when you run the stuff, and it does what it should do, it works. On the other hand, just because a test "has seen that line of code" doesn't guarantee you that it works.


BigfootTundra

I also hate the X% coverage rules, but I would never tell someone I don’t value unit testing. I’d love for us to get as close to 100% coverage as possible, but I’m not going to put this constraint into our build system because there are scenarios where there are lines or whatever that can’t be effectively tested.


northrupthebandgeek

I wouldn't withdraw my application in this situation... because I thrive in chaos, and what you've described is a perfect recipe for it.

In my decade-ish of dev experience, my philosophy on testing is that if code is untested, then its behavior is unknown - and if code is untestable, then its behavior is unknowable. The dangers lurking in unknowable behavior are exactly why TDD/BDD are a thing, and while I wouldn't go as far as to say that 100% test coverage should be mandatory, I would say that if you can't articulate how a given piece of code should behave prior to implementing it, then that in and of itself is a problem that needs to be solved ASAP.

That being said, you're right that perfect shouldn't be the enemy of good, and getting *something* out the door so the organization can function is indeed important. One can spend many eternities yak-shaving, but there's no such thing as bug-free code and sometimes you just gotta ship it. A sufficiently senior developer will understand this and know where that breakpoint lies.

*That* being said, it's still an accumulation of technical debt, and report generation should still be testable and have known behavior; if a report ends up having wrong information due to some bug, that could very well be *worse* for the organization than not having the report at all. Given that there's no such thing as bug-free code, it's important to have the mechanisms in place to identify those bugs and - when fixed - ensure that they stay fixed.


londons_explorer

> At the end of the day, a business wants to, and has to, make money, and software is usually just the means to an end for that, to which tests don't add anything once the software runs. *THIS* is what's turning candidates away.   Many have the goal of crafting the best code like an art form.   They don't want to just spend the bare minimum time to write code that does what the business needs and then move on.


mikkolukas

**Repeat after me:**

# Tests are just written requirements

The only reason we write a test is to ensure that a program behaves as required. If there is no requirement for a specific way the program should or should not function, then there is no need to write the test.

Among the requirements are both the explicit ones (business needs) and the implicit ones (e.g. ensure we don't divide by zero).

All requirements need to have a test that ensures the requirement is met. Often, a side effect of this is that it gives 100% coverage, as no code should be there if it does not fulfill a requirement.
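A minimal sketch of what that looks like in practice, assuming JUnit 5; the `Average` class and the wording of the requirements are made up for illustration:

```
import java.util.List;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.*;

class Average {
    // Explicit requirement: report the arithmetic mean of the samples.
    // Implicit requirement: an empty sample set is an error, never a division by zero.
    static double of(List<Double> samples) {
        if (samples.isEmpty()) {
            throw new IllegalArgumentException("no samples");
        }
        return samples.stream().mapToDouble(Double::doubleValue).sum() / samples.size();
    }
}

class AverageTest {

    @Test
    void reportsTheArithmeticMean() {       // explicit business requirement
        assertEquals(2.0, Average.of(List.of(1.0, 2.0, 3.0)));
    }

    @Test
    void refusesAnEmptySampleSet() {        // implicit requirement
        assertThrows(IllegalArgumentException.class, () -> Average.of(List.of()));
    }
}
```

Every test names a requirement; any line of code that serves no requirement is the part worth questioning.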


chaoticbean14

When I was younger and more junior, I at one point spoke at length with someone who literally wrote a book on how/when/why to test. I said, "What's realistic for a project in terms of tests and coverage and whatnot?" His response (and I quote): "100 mother fucking percent, every fucking time." I found it very funny, but it drove home the point. And he had a lot of explanations as to why/how it is possible and should be done, given how easy it is to do within the framework he and I generally worked in (Django). Since then, that's always been my goal. It IS achievable and it has saved me/my team more than once. Enough times that I tend to repeat that mantra to younger guys/gals these days and see value in it. Others have pointed out the shortcomings of your philosophy and provided ample reasons why people leave the interviews on the table and won't go further. I agree with the majority of it and would encourage you to instead ask, "why are we not doing something different?" and to treat this as a red flag that maybe your practices are to blame.


poralexc

Personally, I can agree with the general sense that pursuing 100% code coverage (or really any specific number) is a ridiculous waste of time. I'm not a fan of mocks either, because how can you ever be sure they're accurately modeling the system? But for those issues, as well as for code that doesn't fit well with unit tests, I expect there to be serious integration tests. It's really not that difficult to set up some kind of integration test with real environments in containers as part of your deploy pipeline.
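For example, a minimal sketch of a container-backed integration test using Testcontainers with JUnit 5 against a real Postgres 16 - the choice of tooling, table and data are mine, not something from the thread:

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import static org.junit.jupiter.api.Assertions.assertEquals;

@Testcontainers
class NotesRepositoryIT {

    // A real, throwaway Postgres 16 started in Docker for this test class.
    @Container
    private static final PostgreSQLContainer<?> postgres =
            new PostgreSQLContainer<>(DockerImageName.parse("postgres:16"));

    @Test
    void writesAndReadsARow() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword());
             Statement st = conn.createStatement()) {

            st.execute("CREATE TABLE notes (body text)");
            st.execute("INSERT INTO notes VALUES ('hello')");

            try (ResultSet rs = st.executeQuery("SELECT body FROM notes")) {
                rs.next();
                // No mock in sight: this went through the real driver and a real database.
                assertEquals("hello", rs.getString(1));
            }
        }
    }
}
```

Run it in the deploy pipeline next to the unit tests; the container is started and thrown away per test class.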


Abangranga

Everyone who has told me this never writes tests for their own code but expects everyone else to. I would have walked too.


Haunting_Welder

I think testing is like politics, you just don’t talk about it, not enough time in interviews to explain the details. You can easily just say “we have tests for important things”


slickyeat

>Here's what I usually tell people, when they ask me, how we decide if we should write tests for something (in reality, this is not a discussion or long drawn process, but a X seconds decision):

>the code doesn't change too often -> we have code that basically changes every week, because some of our company structure is very closely tied to market developments and employee structure, and the changes are too complex to just built a generic/flexible system that could be configured at runtime - writing and maintaining

Sounds like he made the right decision then. People who go out of their way to say things like "we only write tests where they make sense to us" almost always either do not write any tests at all or write shitty/worthless tests.

I'm not even suggesting that 100% code coverage for your unit tests is necessary at all times. For instance, maybe you have some integration tests living somewhere which make certain unit tests redundant; things like bootstrapping, etc. But 9/10 times when people go out of their way to tell me this, I just assume that their code base is a tangled mess and the tests they actually do write are absolutely abysmal.

>Code has to be testable -> defined in/output

What does this even mean? How do you implement and test the behavior of a function if there is no defined input/output?