Jul 11, 2018 at 1:57 comment added jpmc26 @BgrWorker It's possible we may agree on some principle, but I think your choice of wording is misguided to the point I can't support it. What I would agree to is that if you have already developed intuition about what good code should look like, then tests can help you detect when your code's organization is getting too complex or too messy, since the extra burden of writing a meaningful test exposes this to you. But if your intuition is not at all well developed, then you're much more likely to write bad tests and bad code in an attempt to blindly adhere to some vague supposed virtues.
Jul 10, 2018 at 17:22 comment added Ant P @DavidHammen that account points to unit tests doing their job perfectly well in this context. Software tests aren't there to tell you that you have the right solution, they're there to tell you that your code behaves as specified.
Jul 9, 2018 at 22:15 comment added jrh Just wanted to mention that in some cases unit tests can help a little, though for best results you may have to think outside of the typical "TDD" box. You could for example add datasets that are known to produce a certain good result and make sure that they still work. Then add some datasets that are known to fail and make sure they still do, so that you know you're not getting false positives. I use this in my current work sometimes, it can help find side effects.
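A minimal sketch of the regression-style approach jrh describes: pin down datasets known to produce an accepted result and datasets known to fail, so later changes can't silently flip either. All names here (`analyze`, the datasets) are invented for illustration, not taken from any real codebase.

```python
def analyze(dataset):
    # Stand-in for the real analysis routine: a simple mean that
    # rejects empty or non-numeric input.
    if not dataset or any(not isinstance(x, (int, float)) for x in dataset):
        raise ValueError("invalid dataset")
    return sum(dataset) / len(dataset)

def test_known_good_datasets_still_pass():
    # Results previously vetted by hand; a change here is a red flag.
    assert analyze([1.0, 2.0, 3.0]) == 2.0
    assert analyze([10.0, 10.0]) == 10.0

def test_known_bad_datasets_still_fail():
    # These must keep failing, so a quiet "fix" elsewhere can't
    # introduce false positives.
    for bad in ([], [1.0, "oops"]):
        try:
            analyze(bad)
        except ValueError:
            continue
        raise AssertionError(f"{bad!r} unexpectedly passed")
```

The point is that neither test claims the analysis is *correct*, only that its behaviour on vetted inputs hasn't drifted.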
Jul 9, 2018 at 10:19 comment added BgrWorker In defense of this answer, good and well defined unit tests will help you shape your code in a more modular and understandable way. This is one of the strong points of TDD, so the answer is correct in specifying it. But if you write ugly code to begin with, and you're a scientist with no specific training or experience in writing effective and understandable tests, you probably won't be able to reap those benefits, and end up with messy tests (which probably won't even be real unit tests) complementing messy code.
Jul 9, 2018 at 7:44 comment added Ant P @James_pic a lot of software is really just glue - what you're saying is really just a matter of good vs. bad testing. If you try to test all of your "glue" code in isolation then you're going to end up with a mess of pointless mocks and unnecessary abstraction, which will just make the problem worse. If I'm using a library that adds numbers and a library that divides numbers, I don't write a test that asserts that my code calls add and then calls divide, I write a test that asserts that my code produces an average.
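A small sketch of the distinction Ant P draws: assert the observable result (the average), not the sequence of library calls. The function names are illustrative stand-ins, not a real API.

```python
def average(numbers):
    # "Glue" over hypothetical add/divide libraries; here it simply
    # delegates to built-ins.
    total = sum(numbers)          # the "library that adds numbers"
    return total / len(numbers)   # the "library that divides numbers"

def test_average_behaviour_not_calls():
    # Assert the outcome, not that add() was called before divide().
    # A mock-based test of call order would pass even if the glue
    # computed the wrong thing, and break on any harmless refactor.
    assert average([2, 4, 6]) == 4
    assert average([5]) == 5
```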
Jul 8, 2018 at 15:12 comment added David Hammen The Principal Investigator did not want his good scientific name sullied by possibly erroneous conclusions made from suspect data. He insisted that I mark ozone measurements as missing when the solar backscatter angle was too low and the measurement was apparently out of whack. I argued against this, but relented (I was a fresh out). Because I argued against doing so, I also had to write unit tests to show that such suspect data points were indeed marked as missing. The unit tests passed, and because of this the instruments failed to detect the ozone hole.
Jul 8, 2018 at 15:08 comment added David Hammen Another example of the worse than useless nature of unit tests in scientific code: NASA's Nimbus 7 satellite was equipped with two experiments to monitor stratospheric ozone levels. Forty years ago, I wrote a good chunk of the code base for those two experiments. That very expensive satellite did not detect the ozone hole thanks to my code. Instead, that discovery was made seven years later by a British scientist on the ground in Antarctica who pointed a cheapish, handheld Dobsonmeter at the sky.
Jul 8, 2018 at 14:55 comment added David Hammen On second thought, the overemphasis on unit tests of scientific code does make this worthy of a downvote.
Jul 8, 2018 at 14:28 comment added David Hammen An integrator suitable for studying short-term transients won't show these very small relativistic effects. Integrators that take the large steps needed to see these small relativistic effects are not suitable for studying short-term transients. This obviously is not a unit test.
Jul 8, 2018 at 14:27 comment added David Hammen NASA is finally thinking of sending humans beyond low Earth orbit. So how to test the relativistically-correct code? A unit test is worthless. A good test is to see whether numerical integration results in Mercury's orbit precessing by 43 arc seconds per century over the much larger precession of Mercury's orbit induced by the other planets. This requires a good deal of infrastructure, and it requires a second order ODE numerical integrator that does a very good job over long time spans.
Jul 8, 2018 at 14:11 comment added David Hammen Not worthy of a downvote, but I do agree strongly with @KonradRudolph regarding the "less than useful" nature of unit tests. For example, I've been making a yearly request to make the gravitational models in a set of space environment simulation models correct from the perspective of general relativity. This request has been turned down for over a decade. There's no point in doing so when the vehicles of concern are in low Earth orbit. (Humans have been stuck in low Earth orbit since the end of Apollo.) The request was finally granted a few months ago, and without me even asking. ...
Jul 7, 2018 at 0:06 comment added George M Reinstate Monica Let me just emphasize that 'version control', while really necessary, should be used in a slightly sophisticated way. It's not enough to just throw everything in, in a big pile. That's not very different from naming your files according to what they do and throwing them on a hard drive. Some understanding and practice of branching is what makes it possible to retrieve the most useful version of the code, not start from scratch, not cream the whole thing for one harebrained idea. Give it a little time, a little practice, if you want real results.
Jul 6, 2018 at 19:01 comment added jpmc26 "Between version control and writing unit tests as you go, your code will naturally become a lot cleaner." This is not true. I can attest to it personally. Neither of these tools stops you from writing crappy code, and in particular, writing crappy tests on top of crappy code just makes it even harder to clean up. Tests are not a magic silver bullet, and talking like they are is a terrible thing to do to any developer still learning (which is everyone). Version control generally never causes damage to the code itself like bad testing does, though.
Jul 6, 2018 at 15:54 comment added James_pic @AntP It's possible that there just isn't that much code that can be meaningfully refactored out into well-defined testable units. A lot of scientific code is essentially taping a bunch of libraries together. These libraries will already be well tested and cleanly structured, meaning the author only has to write "glue", and in my experience, it's damn near impossible to write unit tests for glue that aren't tautological.
Jul 6, 2018 at 13:20 comment added Dan Bryant On a side note, version control also works quite nicely for LaTeX documents, since the format is amenable to text diffing. In this way, you can have a repository for both your papers and the code that supports them. I suggest looking into distributed version control, like Git. There's a bit of a learning curve, but once you understand it, you've got a nice clean way to iterate on your development and you have some interesting options to use a platform like Github, which offers free team accounts for academics.
Jul 6, 2018 at 12:42 comment added Ant P I would add to this answer that unit tests aren't only useful for making sure you don't break things later, they are also (possibly more importantly) a very useful tool for reasoning about the way your code is structured. If your code is a mess, your tests will signal that. If you write the tests before you even write the code, you never write the messy code in the first place.
Jul 6, 2018 at 12:19 comment added Ant P @KonradRudolph the trick in those cases is likely to be clean separation of concerns between parts of your code that have clearly definable behaviour (read this input, compute this value) from the parts of your code that are either genuinely exploratory or are adapting to e.g. some human-readable output or visualisation. The problem here is likely to be that poor separation of concerns leads to blurring those lines, which leads to a perception that unit testing in this context is impossible, which leads you back to the start in a repeating cycle.
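A sketch of the separation of concerns Ant P suggests: keep the part with clearly definable behaviour (a pure computation) apart from the exploratory or presentation part. Everything here (`normalise`, `explore`) is an invented example, not a claim about any real analysis code.

```python
def normalise(values):
    # Pure and deterministic ("compute this value"): easy to unit test.
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero on constant input
    return [(v - lo) / span for v in values]

def explore(values):
    # Exploratory/visual part: not unit tested, but kept thin so the
    # testable core above stays testable on its own.
    for v in normalise(values):
        print("#" * int(v * 20))

# The pure core can be pinned down precisely:
assert normalise([2, 4, 6]) == [0.0, 0.5, 1.0]
```

When the two concerns are blurred together, the whole thing looks untestable; once the pure core is extracted, only the genuinely exploratory remainder escapes testing.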
Jul 6, 2018 at 11:30 comment added Konrad Rudolph I religiously unit test most of my code but I have found unit testing exploratory, scientific code less than useless. The methodology fundamentally doesn’t seem to work here. I don’t know any computational scientist in my field who unit-tests their analysis code. I’m not sure what the reason for this mismatch is but one of the reasons is certainly that, except for trivial units, it’s hard or impossible to establish good test cases.
Jul 6, 2018 at 2:15 history answered Karl Bielefeldt CC BY-SA 4.0