The problem with sample code is that I have no idea if you actually wrote it. Maybe you copied it from Google, maybe your friend "helped" you. I have no idea. When I'm first interviewing you, I have no idea how honest you are. The only way I know for sure that you wrote the code is if I watch you write it in front of me.

I really like Fizzbuzz because a good programmer, whether they've seen it before or not, can do it in about two minutes, and then at least I can believe that their code sample is theirs.
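For reference, the whole exercise is roughly this (a minimal sketch in Python, but any language the candidate knows is fine):

    # FizzBuzz: print 1..100, but "Fizz" for multiples of 3,
    # "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
    for n in range(1, 101):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)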




I once skipped the "simple programming exercise" part of the interview process for someone who was senior, and severely regretted it when it turned out he struggled with basic programming tasks.

He had a nice code sample (pre-github era).


Yes. I've interviewed "senior software engineers with 10+ years of C++ experience" that could not tell me the difference between a char and a char pointer.


I interviewed a self-declared "biometrics expert" once.

When I see "expert" on a CV I have learned to become cynical.

I asked him about his expertise and he spoke in platitudes about it - at a sort of BBC News kind of level. I'm certainly not an expert, but I know enough to be able to ask useful questions about it.

He couldn't tell me anything at all about biometrics. I continued to probe.

Turns out that by "biometrics expert" he quite unashamedly meant he had, at the request of a manager, bought a USB fingerprint reader from PC World and installed it so that his manager didn't have to use a password any more.


>When I see "expert" on a CV I have learned to become cynical.

One has to play this game on marketing documents to be considered. For some reason "pretty good, definitely still learning" doesn't click with many HR people. You can't really blame candidates for trying to get hired by presenting themselves in an authoritative context.

I don't consider anyone an actual expert in something unless they have hard, indisputable credentials, like a long history of commits to the core project (for expertise in specific software/languages/frameworks), etc.


I completely agree that it's a sales vs. HR thing, and that the onus is on the candidate to make themselves appear as the best product on the market.

But something about "expert" really grates on me. I think it's that it is a self-anointed title in nearly all cases. There are legitimate experts in lots of different areas - if you happened to invent IDS, for example, you go right ahead and call yourself an expert. If you happen to have run a bunch of Snort rules across a prod environment for a year or two, you might be "experienced", go on, stretch it to "highly experienced" for all I care, but if you stretch it to "expert" then paint me cynical.


How is that even possible? Are you sure they weren't just having (to be crude) a brain fart?

Some of these stories about super experienced ("10+ years") developers not being able to answer the most basic of basic syntax or "algorithm" questions seem far-fetched to me.


Consider that most "valid" implementations of fizzbuzz are wholly dependent on awareness of the target language's modulo operator. Self-taught programmers can easily overlook that, and if a specific language is requested, it's easy to not know the syntax even if the concept is understood.

Really I don't think those sort of on-the-spot tests/trivia questions are representative by themselves. They may be a useful part of a larger investigation.

Two things to stop code sample plagiarism: choose a unique code challenge and, if you're really worried about it, give the candidate a laptop, tell them to bang something simple out right there, and come back in 30-60 minutes to check.

Honestly, though, I've rarely depended on either code samples or code trivia to determine if someone is a good hire. If you sit and talk about dev with a person, you can tell if they're on their game or not 90% of the time. The sample/on-the-spot tests can help weed out that extra 10%.


I've also heard of a story where an "Expert in C" (verbatim from resume) thought that the * in the sample source code was a typo.

I'm not sure how this is possible either, but the world works in mysterious ways.


It's possible if the person lied on their resume about actually having 10+ years of experience.


We're moving towards a code sample as a screening method, but then one of the interview sessions is an intensive review of the supplied code, including tradeoffs made versus other possibilities, etc. If they understand the code well enough to pass that session, personally it doesn't matter to me whether they actually wrote it or not - they clearly could have.


In 2002, a company I interviewed with did just that. They tore my code apart, asking me to justify myself, and I felt like I completely bombed it.

In the end they said "Okay, you pass." When I asked why, the answer was "While we don't agree with all the choices, it's basically well-written and does something interesting. We just wanted to know if you were the one who wrote it."


Yeah, we make it clear to reviewers that the point isn't to inject their own opinions as to how it should have been written (and certainly not "your curly brace is in the wrong spot"!) but rather to determine:

- did they likely write the code, for obvious reasons
- are they aware of the various tradeoffs and choices they made, i.e. we might not agree but can they make a reasonable argument for them? this demonstrates breadth of knowledge
- how are their communication skills?


A lot of developers, myself included, do not perform well under a microscope. They almost can't even type when someone is watching them, making stupid mistakes and blanking on simple concepts. Luckily, the vast majority of programmers don't have to code under this kind of pressure on a daily basis. I find the most effective way to get a good sense of how someone can code is to give them homework to do that they can bring to the interview and discuss. That way they can take their time, put their best foot forward, and it won't weed out introverts or people with test or social anxiety.


I think I usually noticed when someone felt uneasy, so I left the room for a few minutes. Test anxiety can be really bad for some people, I know. But if it's so bad that they can't explain to me how to check whether "n mod m == 0", then I think there are usually deeper underlying problems in their programming ability. And if it really was just anxiety, I'm sorry, but I'd rather have a false negative than hire them.

I tried homework; it didn't really work that well for me. People would google and copy-paste stuff and invest way too much time, and in the end they couldn't explain what they did, so it was just a waste of time for both parties. The other problem is that I usually let people work with whatever language/frameworks they're familiar with; I ended up not being familiar with some C# stuff they used and couldn't really judge their code. Fizzbuzz is simple enough that you can see whether it looks right in pretty much any language, and it's done in 5 minutes, even if that means accepting a few false positives.

Oh, and we had a few people where I noticed that they were really nervous, and I thought maybe that's why they failed FizzBuzz, so I invited them in for half a day and gave them a toy project to implement. Usually something really simple like: fetch this RSS feed, parse it, save it somewhere, make sure that fetching it multiple times doesn't save duplicate entries, and spit out the titles and URLs to stdout or a webpage or anything. A relatively real-world exercise, I think; nobody was constantly watching them, it was just a "get it done" job, and they had internet, their own IDE, their language of choice and everything (a few people complained that they couldn't implement FizzBuzz without an IDE). And all of them failed. Hard. So I ended up not doing that anymore and treat a failed FizzBuzz as a K.O. criterion.
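For a sense of scale, a bare-bones take on that kind of exercise might look something like this in Python (standard library only; the feed URL and database filename are just placeholders):

    # Fetch an RSS feed, store entries without duplicates, print titles and URLs.
    import sqlite3
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://news.ycombinator.com/rss"  # placeholder feed
    DB_PATH = "feed.db"                            # placeholder storage

    def fetch_items(url):
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        # Standard RSS 2.0 layout: <rss><channel><item> with <title> and <link>.
        for item in root.iter("item"):
            yield item.findtext("title", default=""), item.findtext("link", default="")

    def main():
        conn = sqlite3.connect(DB_PATH)
        # The PRIMARY KEY on link is what makes repeated fetches idempotent.
        conn.execute("CREATE TABLE IF NOT EXISTS entries (link TEXT PRIMARY KEY, title TEXT)")
        for title, link in fetch_items(FEED_URL):
            # INSERT OR IGNORE silently skips links that are already stored.
            conn.execute("INSERT OR IGNORE INTO entries (link, title) VALUES (?, ?)", (link, title))
        conn.commit()
        for link, title in conn.execute("SELECT link, title FROM entries"):
            print(title, "-", link)
        conn.close()

    if __name__ == "__main__":
        main()

Nothing exotic: the only real wrinkle is the dedupe requirement, and a unique key on the link covers it.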


This is precisely what we do when hiring at our web dev/game/app agency. We have a somewhat casual meeting to first determine if the candidate is the type of person we'd want to work with every day, and then we give them a challenge to take home with them. We provide a direct line to one of our developers who fills a role similar to the one we are hiring for, and we encourage them to ask questions as they are completing the challenge.

It's a very telling process. We are a very collaborative team and expect people to ask questions and grow together. When candidates ask questions that would be easily answered by a quick Google search, or take far longer to complete the challenge than it should take someone who even sort of knows what they are doing, those are major red flags. At the end we review their code and see if their solutions are well thought out and up to basic standards.

I should mention the challenge is typically extremely simple... Some candidates don't make it a priority and make excuses for days or weeks on end about why they aren't finished. These don't tend to be the type of people we like to hire.


It's easy to tell if they wrote it. Just start asking them questions about it. How does this work? Why did you make this design decision? What happens in this case?

I would be delighted if an interviewer asked me such questions about my code.


Some interviewees talk up a big game. And some interviewers are too forgiving. "He fudged blah, but he probably meant factory pattern. Hired!"

Fizz buzz you can't skate around. Either you know how to implement something, or you don't.


On the other hand, when I need a new job, I am going to simply spend a week with an interview algorithms book and cram on all the little riddles and exercises that people seem to want.

I'd actually have to think for a minute on how to do fizz buzz right now, simply because it is so far removed from the actual engineering workflow.


> simply because it is so far removed from the actual engineering workflow.

Err, what? Fizzbuzz is NOT a riddle. It's intentionally meant to be a stupid simple test, and everyone has written stupid simple code at some point. Even as a "senior engineer" you'll have to write dumb business logic code, even if it's only occasional.

If you really struggle with fizzbuzz it would almost assuredly be a sign that you would really struggle to write quality software in a timely manner. It's also a sign that you really aren't familiar with even the basic control flow of your chosen language, which is similarly worrying.

The only part of fizzbuzz that's "removed from the actual engineering workflow" is MAYBE the modulus operation, and I'm willing to bet most interviewers won't give a shit if you goof on the syntax a little and write "%" when you should've written "%%".

And even then, a perfectly acceptable fizzbuzz can be written without using mod if you're not familiar with that basic arithmetic operation.
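For instance, a sketch in Python with simple countdown counters standing in for the mod check:

    # FizzBuzz without the modulo operator: count down to each multiple instead.
    three, five = 3, 5
    for n in range(1, 101):
        three -= 1
        five -= 1
        word = ""
        if three == 0:
            word += "Fizz"
            three = 3
        if five == 0:
            word += "Buzz"
            five = 5
        print(word or n)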


> I'd actually have to think for a minute on how to do fizz buzz right now, simply because it is so far removed from the actual engineering workflow.

This type of attitude is highly suspicious.

If you're not the kind of person who thinks fizz buzz is trivial, then there are actual red flags.


Fizzbuzz may seem natural to people with an inclination for numbers or formal schooling that forcibly crams lots of math you'll never use down your throat, but some people haven't been taught about modulus operations and some people may not be very good at division and would take a while to try to reinvent mod. Neither of those things are necessarily really big problems if that person is going to be writing business logic (though you might want to have someone check any code they write which involves money).

The idea of "fizzbuzz" is to show basic understanding of language control structures and flow. Better, more universal tests that actually use language constructs that are encountered in daily operation could definitely be devised to demonstrate the same fundamental concepts.


You don't have to leave it at just reviewing online code, but it's generally a 1st level filter. If someone has already written moderately advanced stuff online, have them write something at that level. If they can't do that, they wouldn't be able to do fizz buzz either.



