Hacker News

This is how I was interviewed when I was a scientist/engineer, and how I was interviewed at gov labs when I switched to programming. I still refreshed material for my interviews, but they focused on the actual job areas. I was really shocked that this was not common in the software world, and it feels weird that, as an AI researcher, people ask me about leetcode rather than about mathematical formulas, limitations, and analysis of architectures, or my own research. PhD scientist interviews (at labs and universities) are essentially a short form of a dissertation defense, with a focus on Q&A. That has always seemed successful to me, and I figured leetcode persisted because of 1) momentum and 2) there being so many applicants that an arbitrary filter has no realistic effect on outcomes other than reducing the number of candidates (due to the difficulty of measuring merit). (Similar to university admissions.) But I think we all have to admit that meritocracy is not realistic and act accordingly. It's fine to have arbitrary filters if we recognize them as arbitrary, but not if we go around touting superiority for passing them. But I guess that's a corollary to Goodhart's Law.


Because those fields are currently too limited, and that would only work for researchers who are exactly in the field the company wants talent for.

Researchers' fields/interests are far too narrow for companies to find enough candidates. Machine learning departments range from maybe a dozen individuals on the low end (small regional banks, ...) to the low thousands (FANG). That adds up to, let's say, 40,000 jobs worldwide.

Plus there's the non-cheating cheating. The majority of conference presentations (including at ICLR/NIPS/CVPR) are still companies describing how they "innovated" by letting an intern apply 10-year-old techniques, in an established library (i.e. not PyTorch, but an "integrated" solution, sometimes going as far as an Oracle tool), to their own proprietary data (in medical, social sciences, sometimes chemistry). This then yields a "paper", goes into the conference proceedings, and they make sure it delivers dozens, sometimes hundreds, of citations for everyone involved.

Don't get me wrong: delivering a major paper at those conferences is an incredible accomplishment, beyond me, for example. But there are maybe 20-30 people a year who "really"/honestly do that, against over 5000 total presentations at those conferences. And there are 10,000 or more candidates needed to fill positions at companies.

And then the question is: who would you rather hire? A math PhD, or frankly even a CS master's with no relevant machine learning papers, or that intern?



