What I see here is that you're taking on two completely different jobs.

> Our team delivers a client-facing SDK.

This is one job. This job doesn't care about the problems of the black-box library. You can't fix it. You just need it to work. So for your own tests, the ones that show your own stuff works, mock the hell out of it. Prove that your own stuff works before worrying about whether the other team's stuff works.
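
A minimal sketch of that idea, assuming a hypothetical `VideoSdk` wrapper and a hypothetical `sdk.blackbox.detect` entry point (the real library's API will differ):

```python
# test_sdk.py -- a hypothetical sketch; "VideoSdk" and "sdk.blackbox.detect"
# stand in for your real wrapper and the vendored black-box library.
from unittest import TestCase
from unittest.mock import patch

from sdk import VideoSdk  # hypothetical: your team's own SDK wrapper


class SdkBehaviourTest(TestCase):
    @patch("sdk.blackbox.detect")  # the black box never actually runs
    def test_sdk_forwards_frames_and_maps_results(self, mock_detect):
        # The test decides what the library "returns", so a changed or
        # broken algorithm can never fail this suite -- only SDK bugs can.
        mock_detect.return_value = {"objects": ["cat"], "confidence": 0.9}

        sdk = VideoSdk()
        result = sdk.analyze_frame(b"\x00" * 64)  # any frame bytes will do

        mock_detect.assert_called_once()           # we invoked the library
        self.assertEqual(result.objects, ["cat"])  # and mapped its output
```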

> We are also responsible for the quality of the product - making sure it works as expected in the client's environment.

This is the integrated job where you have to show that everything, including this black box, works together. Here, you can't mock it out. But if you did the other job correctly, then when this fails it shouldn't be hard to show where the failure is coming from.
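
What that integrated check might look like, again with hypothetical names (`VideoSdk`, a `fixtures/` directory of known videos paired with expected results):

```python
# test_integration.py -- hypothetical sketch: real library, real videos.
import json
from pathlib import Path

import pytest

from sdk import VideoSdk  # hypothetical wrapper -- NOT mocked here

FIXTURES = Path("fixtures")  # assumed layout: foo.mp4 + foo.expected.json


@pytest.mark.parametrize("case", sorted(FIXTURES.glob("*.expected.json")))
def test_known_video_still_passes(case):
    expected = json.loads(case.read_text())
    video = case.parent / case.name.replace(".expected.json", ".mp4")

    result = VideoSdk().analyze_file(video)

    # If this fails while the fully mocked SDK suite passes, the
    # regression is in the black box or the fixture, not in our code.
    assert result.to_dict() == expected
```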

> As algorithms are refined, new versions are constantly breaking our test suite, making it impossible to determine whether the algorithm is broken or the test video was just not good enough.

This is not your job. It's the other team's job. Make them do it. If the new algorithms require new test videos, make them provide them.

> We cannot impact how the other team operates, so we can't require more testing on their side or just blame them for production issues unless we caught and recorded a problematic scenario.

Make it easy to record problematic scenarios. Make it easy to prove where the problem isn't. This is work. You don't get it for free. So plan for it.
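
One hedged sketch of what "make it easy" could mean in practice: a recorder that, whenever the library misbehaves, archives the exact input, the library version, and the output so the scenario can be replayed against any build. All names here are assumptions:

```python
# capture.py -- hypothetical sketch of a "flight recorder" for bad runs.
import json
import shutil
import time
from pathlib import Path

CAPTURE_DIR = Path("captured_scenarios")  # assumption: local archive dir


def record_scenario(video_path: Path, lib_version: str, output: dict) -> Path:
    """Archive everything needed to replay this exact scenario later."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    case_dir = CAPTURE_DIR / f"{stamp}-{video_path.stem}"
    case_dir.mkdir(parents=True, exist_ok=True)

    shutil.copy2(video_path, case_dir / video_path.name)  # the exact input
    (case_dir / "meta.json").write_text(json.dumps({
        "library_version": lib_version,  # which black box produced this
        "output": output,                # what it actually said
    }, indent=2))
    return case_dir
```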

> SDK size matters very much to clients, so we can't deploy both the old and new algorithms (algorithm code and data make up the majority of the SDK's size) in a single SDK release.

Who cares? Give yourself a way to install every version you've ever made. You don't have to deploy them all. Just create the capability so you don't lose the ability to test a video against both the past and current versions. Now you can show what abilities you're gaining and losing.
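
A sketch of that capability, assuming an internal archive with one directory per release, each holding a runnable `analyze` build (the layout and names are assumptions, not your actual tooling):

```python
# compare_versions.py -- hypothetical sketch: run one video through every
# archived algorithm build and show where the answer changes.
import subprocess
from pathlib import Path

ARCHIVE = Path("/opt/algo-archive")  # assumption: one subdir per release,
                                     # each containing an `analyze` binary


def run_version(version_dir: Path, video: Path) -> str:
    """Run one archived build against the video and capture its report."""
    proc = subprocess.run(
        [str(version_dir / "analyze"), str(video)],
        capture_output=True, text=True, check=True)
    return proc.stdout.strip()


def compare(video: Path) -> None:
    versions = sorted(d for d in ARCHIVE.iterdir() if d.is_dir())
    results = {v.name: run_version(v, video) for v in versions}
    for old, new in zip(versions, versions[1:]):
        if results[old.name] != results[new.name]:
            # This is the "abilities gained and lost" report.
            print(f"{video.name}: changed between {old.name} and {new.name}")


if __name__ == "__main__":
    compare(Path("fixtures/regression.mp4"))  # hypothetical test video
```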

> What would be a good strategy for integration testing in this project to validate the system's behavior and minimize the risk of a bad version of the algorithms hitting production?

Test, test, test, and test.

Don't accept work from the black-box team without them showing you which videos they claim will work. If old videos that used to pass now fail, make them state that they are OK with that.
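
One way to make that gate mechanical (a sketch; the report format is an assumption): diff the new build's pass/fail set against the last accepted one and refuse the drop until every regression is explicitly acknowledged:

```python
# acceptance_gate.py -- hypothetical sketch: block a new algorithm drop
# until every newly failing video has been explicitly signed off.
import json
import sys
from pathlib import Path


def load_passes(report: Path) -> set[str]:
    """A report is assumed to map video name -> True/False (passed)."""
    data = json.loads(report.read_text())
    return {video for video, passed in data.items() if passed}


def gate(old_report: Path, new_report: Path, acknowledged: set[str]) -> int:
    regressions = load_passes(old_report) - load_passes(new_report)
    unacknowledged = regressions - acknowledged
    for video in sorted(unacknowledged):
        print(f"REGRESSION not signed off: {video}")
    return 1 if unacknowledged else 0  # nonzero exit blocks the release


if __name__ == "__main__":
    # Usage sketch: acceptance_gate.py old.json new.json ok1.mp4 ok2.mp4
    sys.exit(gate(Path(sys.argv[1]), Path(sys.argv[2]), set(sys.argv[3:]))) 
```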

You can't control the other team. But if you're responsible for the whole thing, then you decide when it's ready. Make clear which tests you want to see passing before you make that call. If they don't respond, just don't use their new stuff.

Politically, it'd be much easier if you weren't wearing both these hats. Maybe look for a way to fix that.
