1

Is it good practice to have a list containing objects of derived types and then filter it by type checking when one needs to call methods specific to one of the types? From what I've read, type checking is a kind of code smell and indicates that OOP isn't being fully used.

I use a common list for objects of all subtypes because I want to call some common methods on all of them. But some methods are specific to one subtype, and if I want to invoke such a method I need to filter the objects by type.

public abstract class ParentType
{
    public abstract void CommonMethod();
}

public class DerivedTypeA : ParentType
{
    public override void CommonMethod()
    {
        //do something
    }

    public void TypeASpecificMethod()
    {
        //do something
    }
}

public class DerivedTypeB : ParentType
{
    public override void CommonMethod()
    {
        //do something
    }

    public void TypeBSpecificMethod()
    {
        //do something
    }
}

static void Main()
{
    List<ParentType> objects = new List<ParentType>()
    {
        new DerivedTypeA(),
        new DerivedTypeA(),
        new DerivedTypeB(),
        new DerivedTypeB()
    };

    //Call common method on all objects in a list
    foreach(var obj in objects)
    {
        obj.CommonMethod();
    }

    //Call only specific method for DerivedTypeA objects
    foreach(var obj in objects)
    {
        if(obj is DerivedTypeA derivedAObj) //type checking needed
        {
            derivedAObj.TypeASpecificMethod();
        }
    }
}
4
  • Why can't CommonMethod in DerivedTypeA just call TypeASpecificMethod? Commented Oct 1, 2019 at 11:39
  • 4
    Whether type checking is a sensible solution or not is a question of context and requirements in a real-world program. Contrived examples like this one, without any context, are usually not suitable for making a choice between "good" and "bad". Commented Oct 1, 2019 at 11:39
  • Your title is not exactly a question. The body however does seem to be asking something interesting here. Commented Oct 1, 2019 at 12:28
  • 2
    @Akiva Agreed, the title has been updated. Commented Oct 8, 2019 at 14:20

3 Answers

3

Why do you need to call processA() for A objects and processB() for B objects? Almost always it makes for a better solution to invent a dynamically dispatched method process() and define your subclasses so that each overriding method automatically does the right thing. (If some methods do not really apply to all subclasses, doing nothing is a perfectly fine choice - certainly better than writing your client programs with lots of typeof checks.)
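A minimal sketch of the dynamic-dispatch approach this answer describes (the method name `Process` and the return values are hypothetical, chosen for illustration):

```csharp
using System;
using System.Collections.Generic;

public abstract class ParentType
{
    // One dynamically dispatched method; each subclass does the right thing.
    public abstract string Process();
}

public class DerivedTypeA : ParentType
{
    public override string Process() => "A-specific work";
}

public class DerivedTypeB : ParentType
{
    public override string Process() => "B-specific work";
}

public static class Demo
{
    public static void Main()
    {
        var objects = new List<ParentType> { new DerivedTypeA(), new DerivedTypeB() };
        foreach (var obj in objects)
            Console.WriteLine(obj.Process()); // no type checks in client code
    }
}
```

The client loop never mentions a concrete subclass; adding a new subtype requires no change to the loop.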

3
  • 2
    I am pretty sure whether this is a good solution depends heavily on what A, B, processA and processB represent, how often they are expected to change, and what the whole class hierarchy looks like. It also depends on the context in which the process() method will be called, whether we are writing a throw-away program or a reusable library, and a dozen other things. I don't like this kind of contrived example where anyone can pick the interpretation he/she likes best. Commented Oct 1, 2019 at 11:46
    Maybe a more concrete example: let's say we have a parent class A. Classes B and C inherit from A. Additionally, B implements IDisposable; C doesn't implement IDisposable because it has nothing to clean up. List<A> contains some B and C instances. I want to dispose all B instances to avoid memory leaks. I don't want to force all A children to implement IDisposable when not every child has resources to clean up. Commented Oct 1, 2019 at 12:03
  • Either have C be (vacuously) IDisposable anyway, or introduce your own finalize() method into A with a no-op default implementation. Commented Oct 1, 2019 at 12:05
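The no-op default suggested in that comment can be sketched like this (the member names `Cleanup` and `Disposed` are hypothetical, used only for illustration):

```csharp
using System;

public class A
{
    // No-op default: children without resources need not override it.
    public virtual void Cleanup() { }
}

public class B : A
{
    public bool Disposed { get; private set; }

    // B actually holds resources, so it overrides the default.
    public override void Cleanup() => Disposed = true;
}

public class C : A { } // nothing to clean up; inherits the no-op

public static class Demo
{
    public static void Main()
    {
        var items = new A[] { new B(), new C() };
        foreach (var item in items)
            item.Cleanup(); // safe on every A, no type checks needed
    }
}
```

Calling `Cleanup` on a `C` simply does nothing, so the client loop stays free of `is` checks.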
2

Not sure if it's a code smell, but your approach of checking for the specific derived type and invoking a specific method on it leads to tightly coupled, hard-to-maintain code. So it's best avoided.

Instead of that abstract class and derived classes, compose each class from a set of interfaces:

public interface ICommon
{
    void CommonMethod();
}

public interface ITypeASpecific
{
    void TypeASpecificMethod();
}

public interface ITypeBSpecific
{
    void TypeBSpecificMethod();
}

public class TypeA : ICommon, ITypeASpecific
{
    public void CommonMethod()
    {
        //do something
    }

    public void TypeASpecificMethod()
    {
        //do something
    }
}

public class TypeB : ICommon, ITypeBSpecific
{
    public void CommonMethod()
    {
        //do something
    }

    public void TypeBSpecificMethod()
    {
        //do something
    }
}

Then you can check for specific capabilities without tying your code to specific classes; just the interfaces. This approach also handles the situation where you want a type that implements both ITypeASpecific and ITypeBSpecific:

public class TypeAB : ICommon, ITypeASpecific, ITypeBSpecific
{
    public void CommonMethod()
    {
        //do something
    }

    public void TypeASpecificMethod()
    {
        //do something
    }

    public void TypeBSpecificMethod()
    {
        //do something
    }
}

static void Main()
{
    var objects = new List<ICommon>()
    {
        new TypeA(),
        new TypeA(),
        new TypeB(),
        new TypeB(),
        new TypeAB()
    };

    foreach(var obj in objects)
    {
        obj.CommonMethod();

        if (obj is ITypeASpecific typeAObj)
            typeAObj.TypeASpecificMethod();

        if (obj is ITypeBSpecific typeBObj)
            typeBObj.TypeBSpecificMethod();
    }
}
3
    This does not solve anything in terms of type checking, does it? You are doing the exact same thing, just without inheritance, because you are that guy who is super-allergic to inheritance and thus forbids yourself from using polymorphism. Yes, interfaces are more flexible. But sometimes you want something rigid to support you in order not to fall on your pretty face. Commented Oct 8, 2019 at 14:34
    @MartinMaat David is on the right track however: one can dynamically inspect for some feature (interface) of the object that it can deliver (or not). That can easily be more flexible than inheritance. In Java, <T> Optional<T> lookup(Class<T> interfaceType) for dynamic capability lookup. Commented Oct 8, 2019 at 16:03
    It doesn't solve anything in terms of type checking, but it sticks to the open-closed principle. Adding a new class implementing a particular interface doesn't require any change to the class with the loop. The problem here is not only type checking but downcasting too. Some say that downcasting is justified only if every execution path puts into the downcast variable an object of the type we downcast to. So are we forced to use separate collections to conform to all the mentioned rules? But then if we want to add a new type, we need to add a new collection, so we are breaking the open-closed principle. Commented Oct 9, 2019 at 18:26
2

Is it a good practice to have a list containing objects of derived types and then filtering it by type checking when need to call methods specific for one of the types? From what I read typechecking is some kind of code smell and it indicates that OOP isn't fully used.

It indicates that polymorphism isn't fully used. Polymorphism is a big part of OOP.

Now sure, sometimes you're stuck and you can't do it the way you'd like but please understand how to use it to solve this problem when you can.

If for some reason you simply can't change these derived types, you can still move that type-specific checking away from the client code.

static void Main()
{
    List<VagueType> objects = new List<VagueType>()
    {
        new VagueType( new DerivedTypeA() ),
        new VagueType( new DerivedTypeA() ),
        new VagueType( new DerivedTypeB() ),
        new VagueType( new DerivedTypeB() )
    };

    //Call common method on all objects in a list
    foreach(var obj in objects)
    {
        obj.CommonMethod();
    }

    //Call something, if needed
    foreach(var obj in objects)
    {
        obj.SomeSpecificMethod();
    }
}

public class VagueType
{
    private readonly ParentType derived;

    public VagueType(ParentType derived)
    {
        this.derived = derived;
    }

    public void CommonMethod()
    {
        derived.CommonMethod();
    }

    //Call only specific method for DerivedTypeA objects
    public void SomeSpecificMethod()
    {
        if(derived is DerivedTypeA derivedAObj) //type checking needed
        {
            derivedAObj.TypeASpecificMethod();
        }
    }
}

So wait. Didn't I just use type checking? Yes I did. But I shoved it away from the algorithm that uses it, so we can stop looking at the ugly thing every time we use it.

It's not that type checking is evil. It's a code smell that encourages you to stop and think about your design and the problems it's causing. This runs counter to the instincts of new programmers who tend to pull every ugly detail towards them. Experienced programmers like to shove details away and hide them behind things so we can think about more important things.

6
  • If you make the VagueType wrapper an interface, there's no need to check the type. You just create different implementations for each DerivedType. Not sure about C# but in Java you could do this with lambdas and default methods. Commented Oct 8, 2019 at 16:14
  • @JimmyJames that's the rub of not having duck typing. I can't teach those derived types that they implement a new interface without changing them. I can wrap them or I can derive new ones that implement the new interface. Commented Oct 8, 2019 at 21:40
    I'm probably missing something here, but I'm not sure how duck typing helps. The problem here is that you have two types with different method signatures. In order to make a single handler use both of the mismatched signatures, you need some sort of indirection. You could add a method to one or both objects, but that's not what I would call duck typing. Do I misunderstand your point here? Commented Oct 15, 2019 at 19:37
    @JimmyJames In the examples given, the signatures are: no arg, return void. But I think you're arguing the case that they might not be. In which case, yes, you need indirection. If not, duck typing frees you from needing to somehow teach old classes that they support new interfaces. All they have to do is have the right signature. No indirection needed. Just testing. Commented Oct 15, 2019 at 21:22
    But they have different method names. I thought that was the crux of the problem. My understanding of duck typing is that if a type has the foo method and a bar, then we just assume that it's a foobar duck type. I'm not sure how that works if one object has foo and bar and the other has foo and baz, regardless of the return type. Commented Oct 15, 2019 at 21:27
