
I always shrugged off code metrics (from LoC to coverage) as a distraction from getting actual work done. But since doing more code review I've started to lack a framework for properly explaining why a particular piece of code smells. I sympathize with the way the author cautiously approaches any quantitative metric and treats them more like heuristics. I agree that both Halstead Complexity and Cognitive Complexity are useless as absolute values, but they can be brought up in a conversation about a potential refactoring for readability.

What I didn't find is any mention of the context you need when reading a particular function. For example, while programming in Scala I was burnt more than once by one particular anti-pattern.

Suppose you have a collection of items, each with some numerical property, and you want a simple sum of those numbers. Think of shopping cart items with VAT on them, or portfolio positions each with a PnL number. Scala with monads and type inference makes it easy and subjectively elegant to write e.g.

  val totalVAT = items.map(_.vat).sum
But if `items` were a `Set[]`, and some of the items happened to carry the same tax, `map` would return a Set of numbers, the duplicates would collapse into one element, and you would get a wrong sum in the end.
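
A minimal, self-contained sketch of the pitfall (the Item case class and its fields are made up for illustration):

  case class Item(name: String, vat: BigDecimal)

  val items: Set[Item] =
    Set(Item("book", BigDecimal("2.00")), Item("pen", BigDecimal("2.00")))

  // map on a Set[Item] yields a Set[BigDecimal]: the two equal
  // VAT values collapse into a single element before summing.
  val wrong = items.map(_.vat).sum        // 2.00

  // Going through a Seq preserves the duplicates.
  val right = items.toSeq.map(_.vat).sum  // 4.00

An iterator-based sum, items.iterator.map(_.vat).sum, also keeps the duplicates and avoids building the intermediate Seq.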

You could keep appending to the list of such gotchas until the OutOfMemoryError. But it's such a beautiful and powerful language. Sigh.



> But it's such a beautiful and powerful language. Sigh.

Don't give up. Half of the time when a good language has problems, it just means that the bad languages don't have those problems yet.

You don't need global type inference and monads to run into your problem. Dynamic languages exist, and even the static ones usually have some kind of 'var x =' local type inference. And their Set collections probably have a map function too.



