I came to the same conclusion from another direction:

If you think in terms of maintainability under mental load (i.e., debugging on little sleep with a deadline, or whatever), then idioms trade space in code for reduced mental load, so long as they are appropriately applied.

In other words, write code with the awareness that every idiomatic structure is one chunk, and every major divergence from idiom is another chunk. Anything that looks like an idiomatic use but actually relies on some minor detail to change the behavior non-trivially (i.e., obfuscation) counts as several chunks, since at the very least the programmer must first identify the idiom and then identify the actual behavior. Applied with that awareness, idioms can reduce the size of the code measured in chunks, and thus reduce mental load. Chunk count corresponds to engineer-time much better than line count or character count does.
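
A minimal sketch of what I mean, in Python (the snippets and names are my own, purely illustrative). The first function is a textbook filter-and-accumulate idiom and reads as a single chunk; the second looks like the same idiom, but a minor detail (mutating the list mid-iteration) changes the behavior non-trivially, so the reader must first recognize the idiom and then unlearn it:

    # Idiomatic: reads as one chunk ("sum the even numbers").
    def sum_evens(numbers):
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total += n
        return total

    # Looks like the same idiom, but removing elements while iterating
    # shifts the loop's internal index, silently skipping elements.
    # The reader must identify the idiom, notice the mutation, then
    # re-derive the actual behavior: several chunks, not one.
    def sum_evens_obfuscated(numbers):
        total = 0
        for n in numbers:
            if n % 2 == 0:
                total += n
            else:
                numbers.remove(n)  # mutates the iteration target
        return total

For [1, 2, 3, 4] the first returns 6; the second returns 0, because each removal skips the even number that follows it.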

When people complain about boilerplate, they are usually complaining about the misuse of idioms (whether enforced by the language, by style guides, or by a programmer's ignorance). A Java program written in 'good' Java style, solving a problem not well suited to that set of abstractions, can easily contain more trivial classes, beans, and interfaces than an equivalent program in a better-suited language contains lines of code. In other words, the chunk count actually increases when the solution is forced into an inappropriate set of idioms, because in addition to solving the actual problem, the program also solves the additional non-trivial problem of idiom conversion.
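
A toy sketch of the same effect, kept in one language for comparability (Python, hypothetical names): the first class transplants the Java bean idiom, so every accessor is an extra chunk the reader must verify does nothing surprising; the second carries the same information in the language's own idiom, as one chunk:

    from dataclasses import dataclass

    # Bean idiom forced into Python: five methods, none of which
    # relate to the actual problem being solved.
    class PointBean:
        def __init__(self, x, y):
            self._x = x
            self._y = y

        def get_x(self):
            return self._x

        def set_x(self, x):
            self._x = x

        def get_y(self):
            return self._y

        def set_y(self, y):
            self._y = y

    # The native idiom: one chunk, same information.
    @dataclass
    class Point:
        x: float
        y: float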

When I am not developing interactively or writing throw-away code on a deadline, I avoid features that let me compress many mental chunks into a small number of characters (particularly in languages like Python, where such features are extremely powerful). I know that, no matter what I do, a bug in such a heavy line will be many times harder to fix, and the behavior many times harder to reason about, as soon as I've GC'd my mental model of it and no longer hold a detailed map of its execution in working memory.
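
A hypothetical example of the kind of heavy line I mean (the data and names are invented): the one-liner folds filtering, grouping, and accumulation into a single expression, while the expanded form takes more lines but leaves each chunk separately inspectable in a debugger:

    from collections import defaultdict

    orders = [("alice", 30), ("bob", 20), ("alice", 25), ("bob", -5)]

    # Dense: filter + group + sum in one expression. Cheap to write,
    # expensive to step through once the mental model is gone.
    totals = {name: sum(v for n, v in orders if n == name and v > 0)
              for name, _ in orders}

    # Expanded: the same totals for this data, but the filter, the
    # grouping, and the accumulation are each a separate,
    # breakpointable statement.
    totals2 = defaultdict(int)
    for name, value in orders:
        if value > 0:
            totals2[name] += value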

I have a coworker who favors such constructs and finds idiomatic code hard to understand. However, he thinks about code in terms of how the interpreter or compiler works (he learns languages by reading their implementations), so his chunk size is determined by the implementation's AST. In other words, he has an unusual way of thinking about and reading code that is unconcerned with programmer intent. I would recommend that anyone who does not model programs in this way embrace the force multiplier of careful idiom use.