Hey HN community!
Recently I've noticed a disconnect between GitHub stars and actual usability in some top-trending (#1) projects. No names, just a pattern I've observed: stars don't seem sufficient for gauging project quality. For instance, a new project that picked up thousands of stars last week doesn't actually work for many users, judging by its issue tracker and community channels.
When you evaluate open-source projects, what metrics matter to you? Do forks (suggesting active use and adaptation), contributor count (indicating a healthy development community), or open issues (reflecting responsiveness and ongoing development) influence your decisions?
Keen to hear which metrics you find most reliable when deciding whether to adopt a project.
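If you want to pull these numbers programmatically, GitHub's REST API exposes all of them on the repo endpoint. Here's a minimal offline sketch against a canned payload; the field names are GitHub's real ones, but the numbers and the per-1000-stars ratios are my own made-up heuristic, not anything official:

```python
import json

# Trimmed example of the JSON returned by GitHub's REST API endpoint
# GET https://api.github.com/repos/{owner}/{repo}
# (field names are real; the numbers are invented for illustration).
sample = json.loads("""{
  "stargazers_count": 12000,
  "forks_count": 340,
  "open_issues_count": 85,
  "subscribers_count": 150,
  "pushed_at": "2024-05-01T12:00:00Z"
}""")

def health_signals(repo: dict) -> dict:
    """Derive a few star-independent ratios from a repo payload."""
    stars = max(repo["stargazers_count"], 1)  # avoid division by zero
    return {
        # Forks per 1000 stars: rough proxy for people actually building on it.
        "forks_per_kstar": round(1000 * repo["forks_count"] / stars, 1),
        # Watchers per 1000 stars: sustained interest vs. drive-by starring.
        "watchers_per_kstar": round(1000 * repo["subscribers_count"] / stars, 1),
        # Note: GitHub's open_issues_count also includes open pull requests.
        "open_issues": repo["open_issues_count"],
    }

print(health_signals(sample))
# → {'forks_per_kstar': 28.3, 'watchers_per_kstar': 12.5, 'open_issues': 85}
```

Contributor count isn't in that payload; you'd page through `/repos/{owner}/{repo}/contributors` for it. None of these ratios are meaningful on their own, but they're harder to game than raw stars.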
More often than not I see these questions asked in the context of comparing open-source solutions to B2B products that come with integration and continuous-support contracts; you can't really "evaluate" open source against expectations that high. Getting things to work well takes elbow grease, and you can't expect free labor to materialize and solve every problem for you. Go into most open-source projects assuming it will take some dedicated research on your part to get things working properly.