What you're talking about is less a matter of syntax than of structure. You could really only have a when statement like that in a system that executes a finite amount of logic, then executes the when statements, then loops around and executes the logic again, continuing in an infinite loop.
For instance Windows programming is typically "event based". Subscribing to a button's Click event essentially means "do this when clicked". However, what's going on under the hood is a message processing loop. Windows sends a message to the application when the user clicks the button, and the message processing loop in the application runs the appropriate event handler.
If you use events in, for instance, C#, you can do this without a message loop, but the limitation is that you have to declare the event ahead of time, so you can't write an arbitrary when statement that watches for any kind of state. You have to wait for a specific event.
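Here's roughly what that looks like in C# (a minimal sketch; the Button class and Clicked event here are stand-ins for the real Windows Forms plumbing):

using System;

class Button
{
    // The event has to be declared ahead of time; you can only react to
    // events the class author chose to expose, not to arbitrary state.
    public event EventHandler Clicked;

    public void SimulateClick() => Clicked?.Invoke(this, EventArgs.Empty);
}

class Program
{
    static void Main()
    {
        var button = new Button();

        // "Do this when clicked" -- possible only because Clicked was declared.
        button.Clicked += (sender, args) => Console.WriteLine("Button was clicked");

        button.SimulateClick();
    }
}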
To get this behavior in a Von Neumann Architecture you have to run some kind of infinite loop that checks all the conditions every time through, running the matching code when a condition is true. Internally you just get a big list of if/then or switch statements. Most desktop application and web programmers would vomit if they saw such a construct, so it's really only palatable if you wrap it in some kind of syntactic sugar like the Windows event model (even though that's what's going on under the hood).
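Stripped of the sugar, the construct underneath is roughly this (a sketch, with made-up condition and handler names):

using System;

class Program
{
    // Stand-in conditions; in reality each one is whatever state you want to watch.
    static bool Condition1() => DateTime.Now.Second % 2 == 0;
    static bool Condition2() => Console.KeyAvailable;

    static void Main()
    {
        while (true)
        {
            // One explicit check per "when" clause, every time around the loop.
            if (Condition1()) Console.WriteLine("handler for condition 1");
            if (Condition2()) Console.WriteLine("handler for condition 2");
        }
    }
}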
On the other hand, if you look at the field of embedded firmware development, real-time executives, or industrial controllers, this model of programming is very common. For instance, if you have a real-time program, you may want to express:
outputA = input1 && input2
The code is straightforward to understand (because it's declarative). However, to make it work you have to execute it in a tight loop. You re-evaluate outputA every time through the loop. A lot of desktop or web programmers wouldn't like this because it's inefficient. To them, the only time you should re-evaluate outputA is when input1 or input2 changes. They would rather see something more like you're describing:
when input1 changes evaluateOutputA()
when input2 changes evaluateOutputA()

evaluateOutputA()
    outputA = input1 && input2
Now if this is what you want (and personally I don't prefer this idea), and your goal is efficiency, then you still have to ask yourself what the processor is doing under the hood. Obviously there's still some kind of loop running that compares the input states to the previous input states every time, and executes the appropriate code whenever one changes. So really it's less efficient and it's harder to read and harder to maintain.
On the other hand, if the work that you have to do when input1 changes is significant, then your when clause might make sense. In PLCs this type of instruction is called a "rising edge detection". It saves the state of input1 from the last time through the loop, compares it to the value this time, and executes the logic if the last state was false and this state is true.
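A sketch of that edge detection in C#, just to make the mechanism concrete (ReadInput1 and the "significant work" are placeholders):

using System;

class Program
{
    static bool ReadInput1() => Console.KeyAvailable;   // placeholder for a real input read

    static void Main()
    {
        bool lastInput1 = false;

        while (true)
        {
            bool input1 = ReadInput1();

            // Rising edge: it was false on the last scan and is true on this one.
            if (input1 && !lastInput1)
            {
                Console.WriteLine("input1 rising edge -- run the significant work once");
            }

            lastInput1 = input1;   // remember this scan's state for the next pass
        }
    }
}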
If you don't have a Von Neumann Architecture, then the game changes. For instance if you're programming an FPGA in VHDL, then when you write:
outputA = input1 && input2
(... or whatever the appropriate VHDL syntax would be) then the FPGA actually gets wired up such that input1 and input2 are wired to the input of an AND gate, and the output of the AND gate is wired to outputA. So, not only is the code easy to understand, it's also executed in parallel with all the other logic, and it's efficient.
When you're talking about an industrial controller like a PLC or PAC, programmed in one of the five IEC 61131-3 languages, the typical case is this kind of arrangement:
- Read inputs and store in memory
- Execute main program
- Write outputs from memory to actual outputs
- Go to step 1
This is built into the architecture of the system, so it's expected that you'll just write:
outputA = input1 && input2
... and it will be executed in a continuous loop.
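Emulated in C# (with invented names for the hardware reads and writes), that scan cycle is roughly:

using System;

class Program
{
    // Process image: inputs are copied into memory once per scan,
    // and outputs are written back out once per scan.
    static bool input1, input2, outputA;

    static bool ReadPhysicalInput(int channel) => false;          // placeholder hardware read
    static void WritePhysicalOutput(int channel, bool value) { }  // placeholder hardware write

    static void Main()
    {
        while (true)
        {
            // 1. Read inputs and store in memory
            input1 = ReadPhysicalInput(1);
            input2 = ReadPhysicalInput(2);

            // 2. Execute main program
            outputA = input1 && input2;

            // 3. Write outputs from memory to actual outputs
            WritePhysicalOutput(1, outputA);

            // 4. Go to step 1
        }
    }
}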
There are also interrupt routines in these machines. These are more like hardware-level support for the when operator that you're talking about. The hardware interrupt is a means of executing some code on an external event. For instance, when a network card says that it has data waiting, the processor normally has to read that data immediately or you'll run out of buffer space. However, for the number of times you need to hook a real hardware interrupt, I doubt including a language keyword for it is worthwhile. You'd be limited to CPU input pins, and it looks like you want to test internal program state.
So, in a traditional language (without a tight loop that runs infinitely) you have to ask the question, "When does the evaluation code run?"
If you write:
when A do launchNukes()
...and assuming A is an arbitrary boolean expression, how do you know when to re-evaluate that expression? A naive implementation would mean you have to re-evaluate it after every single memory write. You might think that you can narrow it down, but consider this:
when systemTime > actionTime do launchNukes()
Notice that systemTime is always changing (every time you read it, you'll get a different number). This means that the conditional part of every one of your when clauses has to be re-evaluated continuously. That's practically impossible (and just consider for a second what happens if your conditional expression has side effects!).
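To see why, imagine the naive "when engine" the runtime would have to run. This is only a sketch in C# (reusing the nuke example from above), but it shows that every registered condition ends up being polled forever:

using System;
using System.Collections.Generic;

class Program
{
    // Each "when" clause is just a predicate plus an action. The runtime has no
    // way to know which memory writes (or clock ticks) might change a predicate,
    // so it has to keep re-evaluating all of them.
    static readonly List<(Func<bool> condition, Action action)> whenClauses = new();

    static void Main()
    {
        DateTime actionTime = DateTime.UtcNow.AddSeconds(5);

        whenClauses.Add((() => DateTime.UtcNow > actionTime,
                         () => Console.WriteLine("launchNukes()")));

        while (true)
        {
            foreach (var (condition, action) in whenClauses)
            {
                // If condition() had side effects, they'd happen on every pass,
                // not just when the clause fires.
                if (condition()) action();
            }
        }
    }
}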
Conclusion
You can only have a when statement (like you're describing) in an architecture built around an infinite loop that runs the main program, then executes the when statements whose conditions went from false to true on that pass through the loop. While this architecture is common in embedded and industrial devices, it's not common in general purpose programming languages.