
What makes maintenance necessary is that requirements are a moving target. I like to see code where possible future changes have been anticipated, with thought put into how to handle them if they arise.

I have a measure of maintainability. Suppose a new requirement comes along, or an existing requirement changes. The changes to the source code are implemented, along with fixes for any related bugs, until the implementation is complete and correct. Now run a diff between the code base after the change and the code base before it. (If the change includes documentation updates, include those in the code base too.) From the diff you can get a count N of how many insertions, deletions, and replacements of code were necessary to accomplish the change.
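As a rough sketch (not from the original answer), N can be approximated with a standard diff library; the before/after file contents here are made up for illustration:

```python
import difflib

# Hypothetical before/after snapshots of one source file,
# standing in for the whole code base.
before = """def price(qty):
    return qty * 10
""".splitlines()

after = """def price(qty, discount=0):
    total = qty * 10
    return total * (1 - discount)
""".splitlines()

# Count inserted, deleted, and replaced lines between the two versions.
n = 0
matcher = difflib.SequenceMatcher(a=before, b=after)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag in ("replace", "insert", "delete"):
        n += max(i2 - i1, j2 - j1)

print(n)  # the "N" for this change
```

In practice you would run an ordinary `diff` over the whole repository between the two commits; the point is only that N is a mechanical count, not a judgment call.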

The smaller N is, as a rough average over past and future requirements changes, the more maintainable the code is. The reason is, programmers are noisy channels. They make mistakes. The more changes they have to make, the more mistakes they make, which all become bugs, and every bug is harder to find and fix than it is to make in the first place.

So I'm agreeing with the answers that say to follow Don't Repeat Yourself (DRY) and avoid what's called cookie-cutter code.

I'm also agreeing with the movement toward domain-specific languages (DSLs), provided they reduce N. Sometimes people assume the purpose of a DSL is to be "coder-friendly" by dealing in "high-level abstractions". That doesn't necessarily reduce N. The way to reduce N is to get into a language (which may just be things defined on top of an existing language) that maps more closely onto the concepts of the problem domain.
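A hypothetical illustration (my example, not from the answer) of "things defined on top of an existing language": if a business rule is a one-line declaration in domain terms, a requirement change touches one line, so N stays near 1.

```python
# Hypothetical internal DSL for discount rules. Each domain concept
# ("a discount rule") is one declaration, not a block of control flow.
rules = []

def rule(predicate, discount):
    """Declare a rule: when predicate(order) holds, this discount applies."""
    rules.append((predicate, discount))

# Each business rule is one line; adding or changing one has N around 1.
rule(lambda order: order["qty"] >= 100, 0.10)
rule(lambda order: order["customer"] == "wholesale", 0.15)

def best_discount(order):
    """Pick the largest discount among the rules that match the order."""
    return max((d for p, d in rules if p(order)), default=0.0)

print(best_discount({"qty": 150, "customer": "retail"}))
```

Compare that with the same rules hand-coded as nested if/else: a new rule there means editing control flow in several places, and N climbs accordingly.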

Maintainability doesn't necessarily mean any programmer can just dive right in. The example I use from my own experience is Differential Execution. The price is a significant learning curve. The reward is an order-of-magnitude reduction in source code for user-interface dialogs, especially those with dynamically changing elements. Simple changes have N around 2 or 3; more complex changes have N around 5 or 6. When N is that small, the likelihood of introducing bugs is much reduced, and it gives the impression that the code "just works".
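As a loose illustration of the general idea (my sketch, not Dunlavey's algorithm), the dialog can be described as a function of the data, and successive descriptions diffed so that only the differences become UI updates; all names here are made up:

```python
# Loose sketch: the dialog is a pure function of the model, so adding a
# dynamically appearing element is a 1-2 line change to describe_dialog.
def describe_dialog(model):
    widgets = [("label", "Name:"), ("edit", model["name"])]
    if model["advanced"]:  # dynamically appearing element
        widgets.append(("checkbox", "Enable logging"))
    return widgets

def update_ui(old, new, emit):
    """Diff two dialog descriptions, emitting only the changes."""
    for i in range(max(len(old), len(new))):
        o = old[i] if i < len(old) else None
        n = new[i] if i < len(new) else None
        if o != n:
            emit(("remove", o) if n is None else
                 ("create", n) if o is None else
                 ("change", n))

events = []
prev = describe_dialog({"name": "Ann", "advanced": False})
curr = describe_dialog({"name": "Ann", "advanced": True})
update_ui(prev, curr, events.append)
print(events)
```

The actual technique interleaves this diffing with the dialog procedure itself rather than materializing whole widget lists, but the maintainability payoff is the same: the change to the code is proportional to the change in the requirement.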

At the other extreme, I've seen code where N was typically in the range of 20-30.

Mike Dunlavey


Post Made Community Wiki by Mike Dunlavey