
I’m currently leading a project where I completed 80% of the work myself because I felt a strong expectation from the team to ensure timely delivery. The rest of the team contributed about 20%. Due to complications with local testing, I skipped thorough local tests and relied primarily on integration and QA tests in the dev environment.

The QA tests so far seem to be going well. However, I still have doubts about potential bugs and whether the QA tests cover all critical scenarios.

Our tech lead suggested postponing the delivery to allow more testing and review, but I opposed this, insisting I would take responsibility and lead the delivery. Despite my confidence, I’m now questioning whether we’ve done enough to mitigate risks before moving to production.

What are the best steps to ensure stability and minimize risks at this stage, given the limited testing? How can I better handle similar situations in the future to balance delivery speed with quality assurance?

  • I recommend stating precisely what you mean by "QA tests" - that can mean a lot of different things (and yes, I know QA means quality assurance, that's not my question). Commented Dec 18, 2024 at 4:33
  • This seems like a people problem, not a software problem. You know the issue is not having tests. You know the issue is fixed by writing tests. Yet you reject writing tests that you don’t have when given the chance. How is that an engineering issue? Commented Dec 18, 2024 at 14:05
  • What is the possible impact of a bug? It can range from life-critical (but I hope you wouldn't be in that situation if it were the case) to "vaguely annoyed customer". In between you have anything that can cost money, make the company lose customers, cause bad PR, etc. The amount of testing should at least be proportionate to the risk. This is probably a business decision, not a technical one. Commented Dec 18, 2024 at 15:24
  • "I’m currently leading a project where I completed 80% of the work myself." Given this, how are you not the "tech lead" on the project? Commented Dec 18, 2024 at 19:17
  • It seems unlikely that you are 4 times as productive as the rest of your team combined, even on a team of 2. If you're going to continue leading a team then you desperately need to learn how to leverage that team. Even if you're head & shoulders above everyone else on it, you need to make the team's projects team projects, not your projects, notwithstanding that yes, as team leader, you bear the mantle of responsibility. Commented Dec 18, 2024 at 19:35

6 Answers


I still have doubts about potential bugs and whether the QA tests cover all critical scenarios

I opposed this, insisting I would take responsibility and lead the delivery

Despite my confidence, I’m now questioning whether we’ve done enough to mitigate risks before moving to production.

Pick a lane.

Either you think the product is ready for delivery, or you don't. But you're thinking one thing and telling your lead something else (to the point of overriding them and taking on all the responsibility for the failures that can ensue).

You're flitting between contradictory stances and the only consistent behavior I see from you is rejecting doing what you're supposed to do.

  • You skipped testing during development,
  • You relied on tests that you now tell us you think don't actually cover everything,
  • When given extra time to address those concerns, you dismissed them and doubled down against not only your own judgment but also that of your tech lead,
  • You're now rushing to ensure stability and minimize risks, still somehow glossing over the option of writing the tests that you've refused to write from the beginning.

Based on what you've presented, your development style is best described as that of an unguided projectile. You make erratic decisions, don't consider the long-term ramifications, and state the opposite of what you believe when communicating with your lead.

What are the best steps to ensure stability and minimize risks at this stage, given the limited testing?

Write the damn tests.

Why do we test things? To make sure that they behave according to specifications.
How do we ensure that things behave according to specifications? We write tests.

You're asking how to confirm that things conform to specifications, yet you're somehow still not thinking of writing the tests that would confirm it. What answer are you waiting for? You're clearly ignoring the one you already know.
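To make "write the tests" concrete, here is a minimal sketch in Python. The function and its rules are entirely hypothetical, not taken from the question; the point is that each test pins down one specified behavior:

```python
# Hypothetical unit under test -- the names and rules are illustrative,
# not from the original question.
def apply_discount(price, percent):
    """Return price reduced by percent; reject percentages outside 0..100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)

# Each test pins down one specified behavior.
def test_normal_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_is_identity():
    assert apply_discount(50.0, 0) == 50.0

def test_invalid_percent_rejected():
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError")

if __name__ == "__main__":
    test_normal_discount()
    test_zero_discount_is_identity()
    test_invalid_percent_rejected()
    print("all tests passed")
```

When a test like this fails after a later change, it tells you exactly which specified behavior you broke.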

How can I better handle similar situations in the future to balance delivery speed with quality assurance?

Let's start with low hanging fruit: if a lead is telling you that they're okay with delaying the deadline to address concrete concerns with the project, concerns that you very much believe to be true, why on Earth would you reject that?

That's not a balancing issue, that's just outright doing the opposite of what you believe and effectively undermining your entire team's project.

For the future, I think you need to revisit why you think tests need to be written in the first place. I can explain my take on it (which I will), but I can't make you believe/understand it; that part is up to you.

While writing tests adds an additional upfront cost to development, they start paying dividends in the future when you start debugging/maintaining/refactoring your codebase. Don't trick yourself into thinking that you'll do it perfectly the first time. You will need to revisit your logic. You will break things when making those changes. You will fail to catch those introduced bugs if you rely solely on your ability to not make mistakes. The tests that you will already have written by the time you're in that stage will tell you exactly what you've broken.

The upfront cost of writing the tests pales in comparison to the added effort from all the extra bugs and difficulties with debugging that you'd suffer from the lack of tests. For any business application with a sustained lifespan, tests are essential to keeping the codebase maintainable.
If a codebase is not maintainable, the application dies and a new one has to be built from scratch. Not only is this a tremendous waste of effort, it will take a long time of developers needing to deal with an unmaintainable mess before anyone will greenlight any rewriting effort (if ever).

If a contractor builds a house that ends up collapsing in on itself two years into the build, the contractor is to blame. "The customer asked me to rush the build" is not a valid excuse for the contractor to build a shoddy house.

Do not undercut the quality of your codebase, because you will cause suffering for anyone who has to maintain your messy results. That will be you and/or your coworkers.

  • This is wonderful advice +1, and I can't find fault with any of it. The main thing for me is going against the lead's "advice" (if you can call it that)! That's a recipe for disaster in itself; the risks are heavily skewed against you. If the project is released and sails calmly away into the sunset with no issues, then great, you did your job!! OTOH, if the release goes out and capsizes soon thereafter, in some shops you might be looking for a new job soon. Even if the lead is "OK" with that, it does not reflect well on either you or them! Commented Dec 18, 2024 at 14:21

I’m now questioning whether we’ve done enough to mitigate risks before moving to production.

Well, what are those risks? Whether your software needs more testing, or whether you can deliver software that probably contains a certain number of bugs, depends heavily on what can happen in the worst case, when the software stops working completely or causes some data corruption. Will this lead to a situation where

  • two or three users will just have to postpone their work for a day until you delivered a fix (where they can do something else in between, or have some manual workaround)?

  • or might this lead to a situation where a whole factory will stand still?

This depends heavily on the type of software you are developing and on your user base. Make a proper risk analysis; then you can compare the costs and time losses of a failure with the costs and time losses of further testing.
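A back-of-the-envelope way to do that comparison, as a Python sketch. Every number below is invented; substitute your own estimates of failure probability, incident cost, and testing cost:

```python
# Toy risk comparison -- all figures are made up for illustration.

def expected_cost(p_failure, cost_of_failure, upfront_cost):
    """Expected total cost: what you pay up front plus the
    probability-weighted cost of a production failure."""
    return upfront_cost + p_failure * cost_of_failure

# Option A: ship now, no extra testing, higher chance of an incident.
ship_now = expected_cost(p_failure=0.30, cost_of_failure=50_000,
                         upfront_cost=0)

# Option B: delay for testing; testing costs effort up front but
# lowers the failure probability.
delay_and_test = expected_cost(p_failure=0.05, cost_of_failure=50_000,
                               upfront_cost=8_000)

print(f"ship now: {ship_now:.0f}  delay and test: {delay_and_test:.0f}")
```

With these made-up numbers the delay is cheaper in expectation; with a much lower failure cost the arithmetic flips, which is exactly why this is a business decision rather than a technical one.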

Moreover, are you prepared to deliver bug fixes quickly in case a bug blocks production? Is deployment under your control, or is there some other organizational unit involved which is known to defer deployments because they have other priorities? What happens when you deploy today but get sick tomorrow: is there someone else in the team who can take over your role and clean up your mess, ideally without introducing more bugs?

You cannot expect software to be stable without proper testing, but sometimes that is an acceptable risk when you make sure defects in production can be fixed ASAP. Hell, that's also a good idea when you do proper testing, since no amount of testing can guarantee your software won't surface a bug in production. This is usually an organizational issue, not a technical one.

  • Risk option C: the result (financial data etc.) is faulty in a way that looks plausible at first glance, i.e. a bug is not noticed until the IRS raids your customer's offices in 5 years, or someone down the line in 10 years notices inconsistencies. Or C*: they externally validate the result anyway, so any potential bugs would be caught immediately, massively limiting liability/damage. But as your answer perfectly states: without context from OP, no call can really be made. Commented Dec 19, 2024 at 10:51
  • The range of possible bug impact is even greater than that… Depending on the nature of the software and where and what it's used for, failure could merely mean that a couple of folk elsewhere in the company have to spend a couple more minutes on something, or that a screen used internally doesn't look quite as elegant as it should. Or it could cause many deaths and injuries! Commented Dec 19, 2024 at 10:56
  • @gidds: I agree, but I had more a situation in mind where the OP envisions the worst possible bug they can imagine in their software. Even then, the risk to deliver without more tests might be acceptable - or not. Commented Dec 20, 2024 at 8:27

Call it beta.

You've managed timely delivery of everything except thorough testing. You can still release responsibly, but only if you're honest about that.

If you can get it in front of customers and get feedback from them, you're in a good position. Just make clear that this is an early release snapshot that is still in early testing.

Of course, that only works when the software isn't being relied on for something critical. If legal compliance, money, or lives are involved, take a breath and focus on what makes it critical. You can still release it and call it beta, but only if you can keep people from actually using it for these critical things.

In short, customers can make good testers. But only if you can keep their feet out of the line of fire. Remember, they aren't pros.


Does that mean you didn't need to write the "thorough local tests"? Well, you never "need" to write tests. They simply give you added reassurance, and that reassurance can speed development. Yet you decided to skip it, and I'm deeply suspicious of why. If you're doing "thorough local tests" correctly, they should speed you up, not slow you down. They shouldn't be subject to "complications", and they shouldn't require much setup to create.

If by "thorough local tests" you mean what Fowler called solitary tests in the mockist style, then you're better off without them. I deeply believe in unit testing. Real unit testing. Behavior-based unit testing. The original unit testing. Not every Foo class needs a FooTest class.

If you gave up on unit testing just to avoid that nonsense, I don't blame you. However, I do encourage you to figure out how to craft a functional core for your unit and wrap that in tests that have never heard of your DB, file system, or network. It's different. It takes practice at home. Use it at work once it makes you faster.
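A minimal sketch of that functional-core/imperative-shell split, in Python. The domain (stock reordering) and all names are invented for illustration; the decision logic is pure and trivially testable, while all I/O lives in a thin shell:

```python
# Assumed example of a functional core; the domain and names are
# hypothetical, chosen only to illustrate the split.

def compute_reorder(stock_level, threshold, batch_size):
    """Pure decision logic: units to reorder, rounded up to whole batches."""
    if stock_level >= threshold:
        return 0
    deficit = threshold - stock_level
    return -(-deficit // batch_size) * batch_size  # ceiling division

# The imperative shell (not shown, not tested here) is the thin layer
# that does the I/O:
#   stock = db.read_stock(item_id)           # I/O at the edge
#   qty = compute_reorder(stock, 100, 25)    # pure, unit-tested core
#   if qty:
#       db.place_order(item_id, qty)         # I/O at the edge

# Core tests never touch a DB, file system, or network:
assert compute_reorder(120, 100, 25) == 0    # above threshold: do nothing
assert compute_reorder(90, 100, 25) == 25    # deficit 10 -> one batch
assert compute_reorder(40, 100, 25) == 75    # deficit 60 -> three batches
print("core tests passed")
```

Because the core takes plain values and returns plain values, these tests run in milliseconds with no setup, which is what makes them cheap enough to run constantly.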

The bad news is, if you didn't do this from the start you've likely written what we call legacy code. Once you've done that it isn't easy to switch. Michael Feathers gave us a book to help get such code under test.

I point to that only because all I know about your integration and QA tests is they've left you feeling nervous.

I will say, there is absolutely such a thing as too many tests. Don't feel bad simply because there could have been more of them. Feel bad if they could have been better.


Also, what kind of code coverage do you have at this point? Hopefully your interesting behavior code, code that makes decisions, is covered. If not, spend some time getting it covered with some kind of test. That coverage is actually more important than the kind of testing used.

The main difference unit tests make isn't that they cover cases better. It's that you don't have to rerun them when nothing has changed. Timing issues won't change their behavior.


I work in an industry where we write software, but it is impossible to write tests that cover even a large part of the system. This is because we write the software and other people build and wire up the 50 m high cranes that the software runs on. And I can't fit a 50 m high crane in my home office.

But untested software isn't perfect (and should be considered broken), and no hardware installation is perfect either. So the solution we (and the rest of the industry) use is field engineers who travel to the customers' sites and are dedicated to getting everything up and running to the agreed-upon quality.

If you are shipping untested software, then your position is akin to mine, and somewhere down the line someone is going to have to get your software working for your customers. So if you continue to do so, then you need the equivalent of field engineers to make sure your customers are getting working software.

So either you pay now for more testing (so you are shipping a quality project), or you pay later for allocating resources to get the software running.

However, in the pay later process, you are not only paying with $$$, but you are also paying in loss of goodwill from your customers.

  • Are you writing glue code that connects existing things or behavior code that makes decisions? Commented Dec 18, 2024 at 19:56
  • 1
    @candied_orange I'm writing code that controls complex machinery. The machinery is built and wired up in China. I write code based on my understanding of the electrical drawings. However, the drawings are never perfect, the physical wiring can be wrong, the systems I am interfacing with may not work as expected, and the customer may change their mind. In one case last year I spent a couple of weeks designing and coding an interface to a specific device, only to be told by the field engineer that the machine builder substituted it for a completely different device without telling anyone. Commented Dec 18, 2024 at 20:35
  • 2
    @candied_orange (continued) And I'm neither in China nor at the customers site. And based on lead times, I may be writing code for equipment that won't be installed for another year after I've finished. The only testing I can do is confirm that my code is self-consistent within in my realm of (very simply) simulated hardware. Commented Dec 18, 2024 at 20:38
  • Wow. I hope your field engineers know how to put that thing into a safe state before your stuff starts sending control signals down an unproven channel. Someone could get hurt. Commented Dec 18, 2024 at 21:20
  • @candied_orange It's worse. People could die. But FEs are skilled in start-ups and getting things running. This is both a blessing and a curse. The blessing is that they are very knowledgeable in the operation of the equipment. The curse is that they will fix the problem in front of their nose and not consider/care about the future ramifications of the fix they just implemented. Commented Dec 18, 2024 at 21:25

I’m greatly pleased by all the answers to this question. Just three more points from me:

  1. If you want to get the project out the door for whatever reason, identify the most critical paths and focus on testing them thoroughly. That way you solidify stability there. Every piece of software has bugs; yours will just have a few more than expected.
  2. If you have some peers who will be willing to lend you a hand, request their support in reviewing the key sections on the critical paths. In that process, you get a software release and they, in turn, get recognition and your goodwill.
  3. For future projects, apply the lesson you have learned this time, which is to have automated tests support your development during the development process and not as an afterthought.
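Point 1 above can be started mechanically: tag the tests that cover critical paths and run exactly those before release. A toy Python sketch, where `checkout()` and both tests are hypothetical stand-ins for your own critical paths:

```python
# Toy prioritized smoke suite -- all names here are hypothetical.

CRITICAL_TESTS = []

def critical(fn):
    """Register a test as covering a critical path."""
    CRITICAL_TESTS.append(fn)
    return fn

def checkout(total, paid):
    """Stand-in for a critical path: payment must cover the total."""
    if paid < total:
        raise ValueError("insufficient payment")
    return paid - total  # change due

@critical
def test_exact_payment_gives_no_change():
    assert checkout(10.0, 10.0) == 0.0

@critical
def test_underpayment_rejected():
    try:
        checkout(10.0, 5.0)
    except ValueError:
        pass  # expected
    else:
        raise AssertionError("expected ValueError")

# Before each release, run the critical-path suite first.
for test in CRITICAL_TESTS:
    test()
print(f"{len(CRITICAL_TESTS)} critical-path tests passed")
```

Test frameworks have built-in mechanisms for this kind of grouping (e.g. markers or tags), but even the bare registry above forces you to decide explicitly what "critical" means for your project.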

Don’t be too harsh on yourself during this experience. Learning is a part of all this, even if you are learning a lesson that others think is obvious.


Who set your deadline? Did you have input on the deadline? Were customers given a release date?

If it was made in a vacuum without you, you should voice your concerns and get more time to test adequately. Clearly there was an error in the project timeline.

If you said it could be done, you have to get it done, or take your lumps and ask for more time to test adequately. You made a mistake, own it.

In NO case should you put untested code in front of customers. They will break it, they will complain about it, they will lose confidence in your product.

If customers were given a release date and you are late, they will complain and may lose confidence in your delivery dates, but probably not in the product.

If your tech lead says there is more time for adequate testing, shelve your pride and take the time.

How to do better in the future:

  1. Delegate and make better use of your team resources.
  2. Work with those setting deadlines and make sure the deadlines are reasonable and achievable. Be willing to push back if a deadline is not. If you believe it can be done in less time, agree to the deadline and deliver early, but don't give up your wiggle room.
  3. Invest the time up front in unit tests. Once you get in practice, it will pay off in spades.
  4. Your project plan needs to include adequate test cases for QA. You should never lack confidence in QA signing off after their testing.
  5. Don't tell your tech lead you are confident about the deadline while second-guessing the quality of the project. This will only ever cause you pain. Be honest with the tech lead; they will respect you more for owning mistakes and being honest than for being overconfident.
