OK, so here's a little break from the usual - programming languages differ not only in their syntax, but also in the vernacular they provide for expressing programming ideas, right? To that end, there are still some semantic atoms out there that aren't in general use (yet). Here's a paper about an evaluation system used in a research language from the 1970s that permitted an interesting kind of backtracking during evaluation.
Essentially, it builds on the concept of generators (like the ones offered by Python), which can deliver any number of values before they run out. Icon explicitly calls this succeeding and failing; Python signals the same exhaustion by raising StopIteration, which a for-loop simply treats as the end of iteration - pretty reasonable either way.
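To see the analogy, here's a tiny Python generator (my own example, not from the paper): each yield is a "success", and exhaustion plays the role of Icon's "failure":

```python
def find_vowels(s):
    """Icon-style generator: 'succeeds' once per vowel position in s,
    then 'fails' by running out of values."""
    for i, ch in enumerate(s):
        if ch in "aeiou":
            yield i

g = find_vowels("icon")
print(next(g))        # 0 - 'i' succeeds
print(next(g))        # 2 - 'o' succeeds
print(next(g, None))  # None - no vowels left, so the generator "fails"
```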
If you chain generator and expression calls together with &, Icon will try to retrieve a value from the first thing in the chain, then go on to evaluate the rest. Only if each link in the chain succeeds does the overall expression succeed; a failure at any step causes the evaluator to backtrack to the previous link and ask it for its next value. And you can make "temporary" assignments within the chain, whose values revert to their earlier values as evaluation backtracks back up through it.
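To make that concrete, here's a minimal Python sketch of the idea. The names (`upto`, `chain`) and the bindings-dict representation are my own invention, not Icon's actual machinery, but the nested iteration reproduces the backtracking: each goal sees the bindings so far, and a binding is undone whenever its goal is asked to backtrack.

```python
def upto(n):
    """Icon-style generator: succeeds with 1, 2, ..., n, then fails."""
    yield from range(1, n + 1)

def chain(*goals):
    """Emulate Icon's `&` conjunction. Each goal is a (name, fn) pair,
    where fn maps the bindings so far to a generator of candidate values.
    Nested iteration gives us backtracking: when a later goal fails, we
    fall back to the previous goal's loop and try its next value."""
    def run(i, env):
        if i == len(goals):
            yield dict(env)            # every goal succeeded: one solution
            return
        name, goal = goals[i]
        for value in goal(env):
            env[name] = value          # tentative binding
            yield from run(i + 1, env)
            del env[name]              # undo the binding on backtrack
    yield from run(0, {})

# Roughly like Icon's  (i := 1 to 3) & (j := 1 to 3) & (i + j = 4):
solutions = list(chain(
    ("i", lambda env: upto(3)),
    ("j", lambda env: upto(3)),
    ("ok", lambda env: iter([True]) if env["i"] + env["j"] == 4 else iter([])),
))
print([(s["i"], s["j"]) for s in solutions])  # [(1, 3), (2, 2), (3, 1)]
```

The third "goal" here is just a test: it succeeds with a single dummy value when the constraint holds, and fails immediately otherwise, which is exactly what drives the backtracking.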
This is really a pretty cool notion, but it makes me start asking questions. First: what other "semantic primitives" are permitted by programming languages, and how can they be categorized in terms of ease of comprehension? How far can you go, designing a language, before people just don't get it?
Second: it would be cool to categorize this kind of semantic primitive and see how they move between languages. If a given algorithm is expressed using such a primitive, how easy is it to "recast" the concepts into other idioms? This kind of thing is also related to the notion - often seen in Python discussions - of "idiomatic" programming, that is, programming that makes use of the community-condoned semantic primitives to achieve elegance and evidence of community membership, of "getting it".
There's a sliding scale of complexity here. Programming languages are, when you get down to it, just another human medium of expression - they're just specialized for the expression of algorithms and procedures. Are they as good as they can be? How easily can software "understand" the same things humans do?