  • What does q.v. stand for? Commented Jan 14, 2019 at 20:54
  • @JonH Basically "see also"... the Target hack is an example he is referencing: en.oxforddictionaries.com/definition/q.v. Commented Jan 14, 2019 at 21:43
  • This answer, as it stands, just doesn't make sense. It's infeasible to anticipate each and every way a third-party library might misbehave. If a library function's documentation explicitly assures that the result will always have some properties, then you should be able to rely on the designers having ensured that this property actually holds. It's their responsibility to have a test suite that checks this kind of thing, and to submit a bug fix if a situation is encountered where it doesn't. Checking these properties in your own code violates the DRY principle. Commented Jan 14, 2019 at 23:58
  • @leftaroundabout No, but you should be able to predict all valid things your application can accept and reject the rest (see the sketch after these comments). Commented Jan 15, 2019 at 2:15
  • @leftaroundabout It's not about distrusting everything, it's about distrusting external untrusted sources. This is all about threat modelling. If you haven't done that, your software isn't secure (how can it be, if you never even thought about which actors and threats you want to secure your application against?). For run-of-the-mill business software it's a reasonable default to assume that callers could be malicious, while it's rarely sensible to assume your OS is a threat. Commented Jan 15, 2019 at 9:07
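A minimal sketch of the "accept only what you can predict as valid, reject the rest" idea in Python. The username format and the parse_username helper are hypothetical, chosen only to illustrate allowlist-style validation of external input; they are not from the answer being discussed.

    import re

    # Hypothetical example: accept only input matching a known-good pattern
    # (an allowlist) and reject everything else, instead of trying to
    # enumerate every possible malicious value.
    USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")

    def parse_username(raw: str) -> str:
        """Return the username if it matches the allowlist, else raise."""
        if not USERNAME_PATTERN.fullmatch(raw):
            raise ValueError("invalid username")
        return raw

    if __name__ == "__main__":
        print(parse_username("alice_42"))         # accepted
        try:
            parse_username("alice'; DROP TABLE")  # rejected: outside the allowed set
        except ValueError as exc:
            print(f"rejected: {exc}")

The point of the allowlist approach is that the validation code only has to describe the inputs the application is designed to handle; anything outside that set, including attack patterns nobody anticipated, is rejected by default.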