
Obviously, not all details are available, but the wording in the email suggests that the parent comment is anything but naive:

> This included making direct code changes to the Tesla Manufacturing Operating System under false usernames and exporting large amounts of highly sensitive Tesla data to unknown third parties.

This sounds like something out of the 1990s, that dark and romantic era of version control when we thought CVS was pretty cool actually and we didn't know what key-based authentication and 2FA were.

There are volunteer-run projects that don't have this problem.

Edit: to be clear, I presume no one is debating the fact that someone with high enough credentials can push code to production. The questions that the email raises are:

1. Why can anyone, regardless of credentials, push mission-critical code without review (or, alternatively, if the changes did go through review, why did the review process not catch multiple malicious changes?)

2. Why can someone compromise several high-level credentials without anyone figuring it out (the changes were made, apparently, under "false usernames")?



Some manager asks IT multiple times over the course of a few weeks to create an account for a contractor, then give them permissions to access production type machines.

Or a contractor that was fired had their credentials appropriated by this manager, perhaps by that manager removing them from a "delete these accounts" list.

Those are a couple of mundane ways of getting a false username to a production machine. This is even easier when there is a lot of flux at the company with many people coming and going, a lot of account management happening etc.

It could have been that the accounts were local to specific machines and not managed by the company as a whole.


> Some manager asks IT multiple times over the course of a few weeks to create an account for a contractor, then give them permissions to access production type machines.

And -- keeping in mind that production type machines operate machinery that can kill -- this sounds okay to you?

Not to mention this:

> Or a contractor that was fired had their credentials appropriated by this manager, perhaps by that manager removing them from a "delete these accounts" list.

...keeping in mind that production type machines operate machinery that can kill, does it sound OK to you that anyone can get access to an account that they don't own and control it?

This particular case would be enough to have PCI certification come into question (if not for it to be revoked), and that's just about money, not life-and-death stuff.


Someone has to be responsible for managing people and organising access to the appropriate machines for them to do their job; if it isn't their manager, then who is?


You can manage people and organize access without actually having the ability to gain access to their credentials. In fact, that's how it's supposed to work in safety-critical environments.


This is how it works in normal software companies too. I never see the credentials for my employees.


My point is, as a manager one can request that their subordinates get credentials to access systems. Therefore as a manager you could create a fictitious person (or use one that's recently left the company), and have them be given credentials to access those systems. Then you could use that fictional identity to do whatever nefarious things you want to do.

Then again it could be just as simple to create an alternate fictitious identity without going through IT but just by accessing the systems you have permission to access anyway.


In a normal company, you could absolutely not create a fictitious account that way, or re-use the credentials of someone who just left. But more important, there is a very, very long way from having created a fictitious person to being able to push stuff to production in their name.

The former restriction is maybe difficult enough to efficiently implement in an organization that it's excusable (we have a scheme for it at $work, but it unfortunately means that sometimes people show up at work and the paperwork isn't ready yet and some of the accounts they need aren't yet ready).

The latter, on the other hand, is security 101 and not implementing it on the production floor is just irresponsible. I really hope it's not what happened.


So what are the odds then that they created a new user account on some local machine and used that to make the changes?


If we're talking about changes to the software that's used to manufacture vehicles that drive on public roads, I sure as hell hope the odds are zero.


I hope so too, but then again we constantly read stories where serious industrial equipment and critical infrastructure has their computer systems opened up to the wide Internet because someone thought they would like to control it from a crappy app on their phone. Etc.


We are talking about factory floor equipment, the kind that's designed to run air-gapped and where you find lots of old unpatched Windows 2012 installs because the machine was certified with that and patching would require recertification.

And I'm not joking - recently I was asked whether something (that was designed for a clustered Linux environment) could run on Windows XP, because that's what was on the machine they wanted it to run on.


> 1. Why can anyone, regardless of credentials, push mission-critical code without review (or, alternatively, if the changes did go through review, why did the review process not catch multiple malicious changes?)

Why do you suppose the unauthorized party was following the company's development practices? Maybe it was from the sysadmin side, somebody who worked on the toolchain used for reviewing and pushing things to production. So he was able to sidestep the normal review process. This can happen; what is important is that such things are discovered.


> So he was able to sidestep the normal review process.

He should not have been able to sidestep the normal review process. That's the problem in the first place. Even if you're from the sysadmin side. It should not be possible to do it.

You may think that looks exaggerated, but I've worked in two places where we implemented such a process, both of them far more boring than Tesla and, I suspect, with far less money to burn on infrastructure.

> This can happen, what is important is that such things are discovered.

No, what is important when working with mission-critical code is that such things are mitigated. Discovering such a problem in production code is already a problem, not a solution.


I'm interested to know how you plan to keep someone with `wheel` access from doing anything on a server that they maintain.

No, seriously.


You don't keep someone with wheel access from doing anything on the server. You:

1. Sign every review.

2. Use the review signatures + the manufacturer's keys to sign reproducible builds of the production image (i.e. you cryptographically certify that "this image is authorized, and it includes this list of commits, which have gone through these reviews").

3. Use a secure boot scheme of your choice to ensure that only signed images can be installed on production servers.

4. Keep anyone with 'wheel' access away from the image signing keys, and anyone who can generate images away from 'wheel' access.

This way, you make sure that no one who has 'wheel' access can install a sabotaged image, that any image that can be installed has gone through an auditable trail of reviews, and reduce the attack surface that a malicious developer has control over to stuff that requires root access (which is still a lot of surface, but is harder to sneak past a review).

Root access to production servers does not need to mean that you can install arbitrary code on them, and with the right systems engineering, you can ensure that it does not trivially result in arbitrary code being run on production equipment.
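
To make that concrete, here's a rough sketch of what the deploy-side check in step 3 could look like, in Python with the 'cryptography' package. The key path and manifest format are made up for illustration - this is obviously not Tesla's actual pipeline, just the general shape of the idea:

    # Hypothetical deploy-side check: refuse to install an image unless its
    # manifest is signed by the release key and the image hash matches.
    import hashlib
    import json
    import sys

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    # Raw 32-byte Ed25519 release public key, provisioned out-of-band so that
    # build hosts and 'wheel' users never touch the private half. (Path is made up.)
    RELEASE_PUBKEY_PATH = "/etc/deploy/release.pub"

    def verify_image(manifest_path, signature_path, image_path):
        with open(RELEASE_PUBKEY_PATH, "rb") as f:
            pubkey = Ed25519PublicKey.from_public_bytes(f.read())
        with open(manifest_path, "rb") as f:
            manifest_bytes = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()

        # 1. The manifest must carry a valid signature from the release key;
        #    verify() raises InvalidSignature otherwise.
        pubkey.verify(signature, manifest_bytes)

        # 2. The image on disk must match the hash the signed manifest vouches for.
        manifest = json.loads(manifest_bytes)
        with open(image_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != manifest["image_sha256"]:
            raise ValueError("image does not match signed manifest")

        # 3. The manifest's commit list is the auditable trail of reviewed changes.
        return manifest["reviewed_commits"]

    if __name__ == "__main__":
        try:
            commits = verify_image(*sys.argv[1:4])
            print("verified image built from reviewed commits:", commits)
        except (InvalidSignature, ValueError, KeyError) as exc:
            sys.exit("refusing to install: %r" % exc)

The point being that the machine doing the installing only ever needs the public key, so compromising a build box or a root account on the floor doesn't by itself give you the ability to mint installable images.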

Edit: this is all in the context of "questions that Tesla's answer raises". For all I know, the answer might be that they hired some brilliant genius who figured out how to sneak by whatever secure boot scheme they're using. The point is -- the post that sparked all this is not naive. This is real stuff. Companies that are concerned about it can ensure that unauthorized commits are so difficult to get into production that a disgruntled employee would rather just quit than go through with it.


What are you saying? That because perfect security is impossible, we should give up and do nothing?

It's possible a sysadmin with low-level access can exploit that and a variety of zero-day exploits and escalations of privilege in the layers above to systematically compromise the boot images, steal or falsify credentials and signing keys, and circumvent the safeguards and alarm systems which should be in place to prevent malicious modifications of the source code and the compiled binaries, while hiding his actions from his co-workers, sure whatever.

And if that's what happened to Tesla, wow, sucks to be them, that's amazing.

But if there are no safeguards, no review process, no alarm bells to go off and any damn person can submit malicious code effortlessly and they were basically working off the honor system... I'm going to blame the victim a little bit.


Only allow the server to run signed code and make sure that no one with wheel access has access to the signing key.


Well, the email specifically mentioned the employee had used other user accounts.

Depending on what those "other user accounts" had access to, it could go in many different directions. :)


1. This is possible in pretty much any environment, especially one as complicated as manufacturing automation.

2. If you already have privileged accounts, you can escalate in pretty much any environment. And they obviously did figure it out.


1. It may have gone through the review process, but if there is massive pressure to ship (which there verifiably is at Tesla) then the review part of the process will be the first to degrade quality-wise. Inexcusable, but realistic.


Sounds like he gained the ability to create users, so he probably used that to get around any automated code review process as well.


Hmmm, it sounded to me more like the person had gained access to other employees' credentials, e.g. usernames/passwords or similar.


Every company I've worked at had a responsible person who was the only one able to push code into production. If you allow anyone to do that you are an idiot.


It sounds like they might use git and are not doing commit signing or something of the sort.
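
If they do use git, even a plain server-side pre-receive hook can refuse pushes that contain unsigned commits. A rough sketch in Python - the policy here is purely illustrative (we know nothing about Tesla's setup) and it assumes the developers' GPG keys are already in the server's keyring:

    #!/usr/bin/env python3
    # Hypothetical pre-receive hook: reject any push that adds commits without
    # a valid GPG signature.
    import subprocess
    import sys

    ZERO = "0" * 40  # placeholder SHA git uses for branch creation/deletion

    def commit_is_signed(sha):
        # `git verify-commit` exits non-zero when the commit has no valid signature.
        return subprocess.run(
            ["git", "verify-commit", sha],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ).returncode == 0

    def main():
        for line in sys.stdin:
            old, new, ref = line.split()
            if new == ZERO:
                continue  # branch deletion, nothing new to check
            # List only the commits this push would introduce.
            rev_args = [new, "--not", "--all"] if old == ZERO else ["%s..%s" % (old, new)]
            shas = subprocess.run(
                ["git", "rev-list", *rev_args],
                capture_output=True, text=True, check=True,
            ).stdout.split()
            for sha in shas:
                if not commit_is_signed(sha):
                    print("rejected: commit %s on %s is not signed" % (sha, ref),
                          file=sys.stderr)
                    return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Signed commits alone don't give you reviews, of course, but they at least make "pushed code under someone else's username" a lot harder to pull off quietly.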



