  • A quite fundamental question. How would you weigh these points: 1) a good feature is only known to be good if you already know it is good (prior knowledge); 2) little changes can affect detection (adversarial attacks in deep learning, though I'm sure we humans are affected too)? Commented Dec 30, 2016 at 22:13
  • @LaurentDuval: I don't want "good" features; I want stable ones, in the sense explained in the post. At the border, little changes will indeed affect detection. What I want is large detection basins. Commented Dec 31, 2016 at 14:02
  • Understood. I was starting from "Good features are those that...". I think a lot happens between dimensions 1 and 2: singular along the normal, regular in the orthogonal direction. Can you restrict the class of objects you are interested in? Commented Dec 31, 2016 at 14:07
  • Have you already elaborated on Canny-style definitions of what an edge can be? Commented Dec 31, 2016 at 14:08
  • @LaurentDuval: no, my idea is to find analytical ways to express robustness a priori, and from there derive which features are interesting. Commented Dec 31, 2016 at 14:50
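The "at the border, little changes affect detection" point above can be made concrete with a toy sketch. This is not from the question or its answers: all names, the image, and the threshold value are illustrative assumptions, and the detector is a bare Sobel-magnitude-plus-threshold stage, not full Canny (no smoothing, non-maximum suppression, or hysteresis). A smooth ramp whose gradient magnitude sits just above the threshold has a tiny detection basin, so a 0.5 grey-level perturbation of one pixel flips the edge label of its neighbours.

```python
import numpy as np

def sobel_edges(img, thresh):
    """Toy edge detector: Sobel gradient magnitude, then a hard threshold.
    Illustrative only -- this is not Canny."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy) > thresh

# A soft horizontal ramp: interior gradient magnitude is ~57.14, i.e.
# just above the threshold, so every edge pixel sits at the border of
# its detection basin.
img = np.tile(np.linspace(0, 50, 8), (8, 1))
edges = sobel_edges(img, thresh=57.0)

# Perturb a single pixel by half a grey level and re-detect.
img2 = img.copy()
img2[4, 4] += 0.5
edges2 = sobel_edges(img2, thresh=57.0)

flipped = int((edges != edges2).sum())
print("pixels whose edge label flipped:", flipped)
```

With a lower threshold (larger detection basin) the same perturbation flips nothing, which is exactly the stability property the comments are asking how to characterise a priori.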