
> This maybe didn't exist at the time this blog post was written, but it is not super difficult nowadays, though it does take some time. You can use services like connectedpapers.com which will build out graphs of references and tell you at a glance which papers are cited more. You can find the more reliable stuff, i.e. the "node papers".

True, but the aim isn't really to find which ones are cited the most, although that does help. Those tools give you a macro understanding, but they are very prone to an initial seed bias: it is difficult to get out of a closed sub-section of the field. This is especially the case with technical papers, which often fail to address surrounding issues. Those issues might still be technical, just not within the grasp of that particular sub-group of authors.
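
To make the seed-bias point concrete, here is a minimal toy sketch (hypothetical paper IDs and a hardcoded reference map, not any real API): a breadth-first expansion from one seed paper only ever sees the sub-graph reachable from that seed, so an entire separate cluster of the field can stay invisible.

    from collections import deque

    # Hypothetical citation map: paper -> papers it cites. In practice this
    # would come from a citation database; all IDs here are made up.
    CITES = {
        "seed_A": ["p1", "p2"],
        "p1": ["p3"],
        "p2": ["p3", "p4"],
        "p3": [],
        "p4": [],
        "seed_B": ["p5"],  # a separate cluster, unreachable from seed_A
        "p5": ["p6"],
        "p6": [],
    }

    def expand(seed, max_papers=50):
        """Breadth-first expansion of the citation graph from a single seed."""
        seen, queue = {seed}, deque([seed])
        while queue and len(seen) < max_papers:
            for ref in CITES.get(queue.popleft(), []):
                if ref not in seen:
                    seen.add(ref)
                    queue.append(ref)
        return seen

    print(expand("seed_A"))  # never reaches seed_B's cluster (p5, p6)

Whatever ranking you apply afterwards is computed over that biased sample, which is why the choice of seed matters so much.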

In the end, like you said, it is very time consuming. You do need to go through each one individually, build an intuition for what to look for, and learn how to get out of those "cycles" to reach a deeper understanding. And you really are better off reading them yourself.

> The review paper is the traditional way. It's usually okay, but very biased towards the author's background.
>
> If it's very "fresh off the press" stuff then you judge it based on the journal's reputation and hope the reviewers did their jobs. You will have more garbage to wade through. To me, recent is generally bad...

Guidelines like PRISMA, or the various self-bias assessments, are generally good indicators that the author cared. Having sections like these helps you build the aforementioned intuition for what else to look through, since the source itself acknowledges its own bias (your own assessment may be biased too, so some ground truth is good). A really thorough description of their methods for gathering the information also helps (databases, queries, and themes they spent time on).

Agreed, recent is generally bad; you need to allow some time for things to have a chance to get looked at.



> True, but the aim isn't really to find which ones are cited the most, although that does help.

Yeah it turns out that, much like with websites, PageRank is a fantastic tool for ranking quality research papers until the researchers realize that's how they're being ranked.

Goodhart strikes again.
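
For the curious, here is a minimal sketch of that idea with networkx (the paper names are made up for illustration; this requires the networkx package): run PageRank over a citation graph whose edges point from the citing paper to the cited paper, so papers cited by other well-cited papers float to the top, which is exactly the signal that breaks down once authors start optimizing for it.

    import networkx as nx

    # Toy citation graph: an edge u -> v means "paper u cites paper v".
    # Paper names are hypothetical.
    G = nx.DiGraph([
        ("paper_A", "classic_1"),
        ("paper_B", "classic_1"),
        ("paper_C", "classic_1"),
        ("paper_C", "paper_B"),
        ("paper_D", "paper_C"),
    ])

    # PageRank rewards papers cited by other well-cited papers,
    # not just raw citation counts.
    scores = nx.pagerank(G, alpha=0.85)
    for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{paper}: {score:.3f}")

In this toy graph "classic_1" comes out on top, which is the point: the metric works well right up until people write papers to game it.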


Thing is, they only help to an extent. There are many golden nuggets out in the wild waiting to be seen; most people don't realize this and end up writing or thinking along the same lines as others already have. You are limiting the potential of your review by doing this, and finding those golden nuggets is your true goal. You need to get out of the local maximum, my brother. Your goal is to give these golden nuggets the sunlight they need.



