
I'd be curious about the banned Rust features. Surely Rust has a lot fewer foot guns, but it isn't as if there aren't any.

Rust has been better than C++ about marking where there's a foot gun and, where practical, just nerfing it. core::mem::uninitialized() is an example. In 2016, if you'd said "Initializing my Doodad is too expensive, what do I do?" you might have been pointed at the (unsafe) core::mem::uninitialized(), which "initializes" any variable, such as your Doodad, but er, doesn't actually initialize it.

But that's a foot gun. There are niche cases where this crazy stunt actually is correct (ZSTs are one), but in general the compiler was in effect licensed to cause absolute mayhem, because this isn't a Doodad at all, which isn't what you wanted. So, they did three things:

1. Made a new type wrapper for this purpose: a MaybeUninit<Doodad> might not be a Doodad yet, so we can initialize it later, and until then the compiler won't set everything on fire, but it's the same shape as the Doodad (see the sketch after this list).

2. Marked core::mem::uninitialized deprecated. If you use it now the compiler warns you that you shouldn't do that.

3. De-fanged it by, despite its name, scrawling the bit pattern 0x01 over all the memory. The compiler can see this has some particular value, and for common types it's even valid: a bool will be true, a NonZeroU32 will be 1, and so on. This is slow and probably not what you intended, but you were warned that calling this function was a bad idea, so...
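
For illustration, here is a minimal sketch of the replacement pattern. The Doodad type is made up for the example; only the MaybeUninit API itself is real:

    use core::mem::MaybeUninit;

    // Hypothetical Doodad type, just for this example.
    #[derive(Debug)]
    struct Doodad { id: u32 }

    fn main() {
        // Not a Doodad yet: correctly sized and aligned memory, nothing more.
        let mut slot: MaybeUninit<Doodad> = MaybeUninit::uninit();

        // Write a real value before anyone reads it.
        slot.write(Doodad { id: 42 });

        // SAFETY: `slot` was fully initialized by the write above.
        let doodad: Doodad = unsafe { slot.assume_init() };
        println!("{doodad:?}");
    }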


Have you heard of `#![forbid(unsafe_code)]`?

Isn't that effectively enforcement of a banned feature?

Rust's compiler has six of what it calls "lint levels", two of which can't be overridden by compiler flags. These handle all of its diagnostics and are also commonly used by linters like Clippy.

Allow and Expect are levels where it's OK that a diagnostic happened, but with Expect, if the expected diagnostic was not emitted, we get another diagnostic telling us that our expectation wasn't fulfilled.

Warn and Force-warn are levels where we get a warning but compilation results in an executable anyway. Force-warn is a level where you can't tell the compiler not to emit the warning.

Deny and Forbid are levels where the diagnostic is reported and compilation fails, so we do not get an executable. Forbid, like Force-warn, cannot be overridden with compiler flags.
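
As a rough sketch of how those levels are spelled in source (the lint names are real rustc/Clippy lints, but the selection is arbitrary; Force-warn has no attribute form, it's the --force-warn compiler flag):

    //! Example crate demonstrating lint levels.
    #![forbid(unsafe_code)]   // error; cannot be re-allowed anywhere below
    #![deny(missing_docs)]    // error, but an inner #[allow] could override it
    #![warn(clippy::todo)]    // warning only; the build still succeeds

    // Expect (stable since Rust 1.81): if the lint does NOT fire,
    // the expectation itself produces a diagnostic.
    #[expect(dead_code, reason = "kept around for a later refactor")]
    fn unused_helper() {}

    fn main() {}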


#![deny(clippy::unwrap_used)]

I doubt unsafe would be banned outright. I was thinking more of things like glob imports.

I don't think ridicule is an effective threat for people with no shame to begin with.

Well, this is explicitly public ridicule. The penalty isn't just feeling shamed. It's reputational harm, immortalized via Google.

One of the theorized reasons for junk AI submissions is reputation boosting. So maybe this will help.

And I think it will help with people who just bought into the AI hype and are proceeding without much thought. Cluelessness can look a lot like shamelessness at first.


I think it makes sense, both for this, and for curl.

Presumably people want this for some kind of prestige, so they can put it on their CV (contributed to ghostty/submitted security issue to curl).

If we change that equation so they think "wait, if I do this, then when employers Google me they'll see a blog post saying I'm incompetent", the calculation shifts from neutral/positive (neutral if rejected, positive if their slop gets accepted) to negative/positive.

Seems like it's addressing the incentives to me.


"The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have."

And this is one half of why I think

"Bad AI drivers will be [..] ridiculed in public."

isn't a good clause. The other is that ridiculing others, no matter what, is just not decent behavior. Putting it as a rule in your policy document only makes it worse.


> The other is that ridiculing others, no matter what, is just not decent behavior.

Shaming people for violating valid social norms is absolutely decent behaviour. It is the primary mechanism we have to establish social norms. When people do bad things that are harmful to the rest of society, shaming them is society's first-level corrective response to get them to stop doing bad things. If people continue to violate norms, then society's higher levels of corrective behaviour can involve things like establishing laws and fining or imprisoning people, but you don't want to start with that level of response. Although putting these LLM spammers in jail does sound awfully enticing to me in a petty way, it's probably not the most constructive way to handle the problem.

The fact that shamelessness is taking over in some cultures is another problem altogether, and I don't know how you deal with that. Certain cultures have completely abdicated the ability to influence people's behaviour socially without resorting to heavy-handed intervention, and on the internet, this becomes everyone in the world's problem. I guess the answer is probably cultivation of spaces with strict moderation to bar shameless people from participating. The problem could be mitigated to some degree if a GitHub-like entity outright banned these people from their platform so they could not continue to harass open-source maintainers, but there is no platform like that. It unfortunately takes a lot of unrewarding work to maintain a curated social environment on the internet.


In a functioning society the primary mechanism to deal with violation of social norms is (temporary or permanent) social exclusion and in consequence the loss of future cooperative benefits.

To demand public humiliation doesn’t just put you on the same level as our medieval ancestors, who responded to violations of social norms with the pillory - it’s actually even worse: the contemporary internet pillory never forgets.


You think exile is a better first step than shame? That's certainly a take. On the internet, that does manifest as my suggested way of dealing with people where shame doesn't work, a curated space where offenders are banned -- but I would still advocate for attempting lesser corrective behaviour first before exclusion. Moreover, exclusion only works if you have a means to viably exclude people. Shame is something peers can do; exclusion requires authority.

Shame is also not the same thing as "public humiliation". They are publicly humiliating themselves. Pointing out that what they publicly chose to do themselves is bad is in no way the same as coercing them into being humiliated, which is what "public humiliation as a medieval punishment" entails. For example, the medieval practice of dragging a woman through the streets nude in order to humiliate her is indeed abhorrent, but you can hardly complain if you march through the streets nude of your own volition, against other people's desires, and are then publicly shamed for it.


No society can function without enforced rules. Most people do the pro-social thing most of the time. But for the rest, society must create negative experiences that help train people to do the right thing.

What negative experience do you think should instead be created for people breaking these rules?


Temporary or permanent social exclusion, and consequently the loss of future cooperative benefits.

A permanent public internet pillory isn’t just useless against the worst offenders, who are shameless anyway. It’s also permanently damaging to those who are still learning societal norms.

The Ghostty AI policy lacks any nuance in this regard. No consideration for the age or experience of the offender. No consideration for how serious the offense actually was.


Drive-by PRs don't come from people interested in participating in the community in question. They have infinite places to juke their stats.

I see plenty of nuance beyond the bold print. They clearly say they love to help junior developers. Your assumption that they will apply this without thought is, well, your assumption. I'd rather see what they actually do instead of getting wrapped up in your fantasies.


Thanks to Social Media bubbles, there's no social exclusion possible anymore. Shameless people just go online find each other and reinforce each others' shamelessness. I bet there's a Facebook group for people who don't return their shopping carts.

Getting to live by the rules of decency is a privilege now denied us. I can accept that but I don't have to like it or like the people who would abuse my trust for their personal gain.

Tit for tat


It is well supported that TFT with a delayed mirroring component, and Generous Tit for Tat, where you sometimes still cooperate after a defection, are pretty successful.

What is written in the Ghostty AI policy lacks any nuance or generosity. It's more like a Grim Trigger strategy than Tit for Tat.
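
For anyone unfamiliar with the strategies being compared, here is a minimal sketch. The strategy logic is the standard textbook version; the 10% forgiveness rate and all the names are illustrative assumptions:

    #[derive(Clone, Copy, PartialEq, Debug)]
    enum Move { Cooperate, Defect }

    // Tit for Tat: cooperate first, then mirror the opponent's last move.
    fn tit_for_tat(opponent_last: Option<Move>) -> Move {
        opponent_last.unwrap_or(Move::Cooperate)
    }

    // Generous Tit for Tat: like TFT, but forgive a defection some of the
    // time (the 10% rate here is an arbitrary illustrative choice).
    fn generous_tft(opponent_last: Option<Move>, roll: f64) -> Move {
        match opponent_last {
            Some(Move::Defect) if roll < 0.1 => Move::Cooperate,
            other => tit_for_tat(other),
        }
    }

    // Grim Trigger: cooperate until the first defection, then defect forever.
    fn grim_trigger(opponent_ever_defected: bool) -> Move {
        if opponent_ever_defected { Move::Defect } else { Move::Cooperate }
    }

    fn main() {
        assert_eq!(tit_for_tat(Some(Move::Defect)), Move::Defect);
        assert_eq!(generous_tft(Some(Move::Defect), 0.05), Move::Cooperate);
        assert_eq!(grim_trigger(true), Move::Defect);
        println!("Grim Trigger never forgives; GTFT sometimes does.");
    }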


You can't have 1,000,000 abusers and be nuanced and generous to all of them all the time. At some point you either choose to knowingly enable the abuse or you draw a line in the sand, drop the hammer, send a message, whatever you want to call the process of setting boundaries in anger. Getting a hammer dropped on them isn't going to feel fair to the individuals it falls on, but it's also unrealistic to expect that a mob-like group can trample with impunity because of the fear of being rude or unjust to an individual member of that mob.

It is an understanding of these dynamics that led us to our current system of law: punitive justice, but forgiveness through pardons.


The exception that immediately sprang to my mind is "the Dude" from The Big Lebowski. Maybe other Coen brothers’ "heroes" also fit the bill, but I’m not so familiar with the rest of their œuvre.

Probably the best pre-AI take on the isometric pixel art NYC is the poster from the art collective eboy. In the early 2000s their art was featured in MoMA (though I don't remember the NYC poster specifically).

https://www.eboy.com/products/new-york-colouring-poster


This is so good, just ordered one!

It existed for laptops 20 years ago (or a little less). You got sent stickers to put on your laptop and had to send weekly pictures with your laptop and people in it; then you got paid.

"The team also used Blender heavily for layout and previs, converting splat sequences into lightweight proxy caches for scene planning."

"The 501(c)(6) differs from the more familiar 501(c)(3) designation in that we are not a charity. The 501(c)(3) is explicitly designed for charitable organizations, and confers the additional benefit of donations being tax-deductible. Over time, though, the definition of a 501(c)(3) has become extremely distorted, especially in the software space, since companies were able to convince the IRS that making open-source software is a charitable/scientific activity. The result is that large companies were able to fund their own development by creating a “charity”, open-sourcing some of their core technology, and then building their extremely lucrative closed-source software on top. That way they get to deduct the core tech expenses from their taxes! What a deal!"

I get that, but I don't understand why it supports a 501(c)(6) in this case[1].

Just because others have abused it doesn’t mean you should give up on it. Even if it's only about sending the right signal, that still matters.

Or is this about brutal honesty, and are they bluntly saying: we're not a charity, so don't expect us to act like one in the first place?

If it is that, then why would anyone support them apart from their sponsoring organizations?

EDIT: Reading the whole thing carefully, I think they are going for an exclusive club. I genuinely wish them well, but to me it looks like a quite quixotic endeavour.

[1] There are many cases where a 501(c)(6) makes sense. I'm strictly arguing the "Handmade Software Foundation" case here. Otherwise it gets complicated quickly.


The point is that as a 501(c)(6) we are directly allowed to act in the interests of the software industry, without having to invent a tortured explanation for why benefiting a very lucrative industry is Charity, Actually.

The hope is that people will sponsor us because we directly boost the creation and publishing of high-quality software, and give some measure of benefits to our paying members, which is typical 501(c)(6) stuff.


My impression is that recently ChatGPT tries to avoid going out to the Internet for research as long as it can. I have to explicitly tell it to pull info from the web or verify its answers on the web.

Could it be that they are trying to save traffic?


My non-existent marketing instinct would tell me that they are trying to keep you inside the app to convince you that ChatGPT is the internet, the same way some people wouldn't know there's life outside Facebook.

My grumpy instinct tells me they know that they're poisoning the internet and they have given up on trying to weed out the fake websites from the real ones.


I agree with your point, and superficially OP is a prime example.

Not to excuse the guy, but I think that, looking deeper, the situation with geohot is more involved. He grew up in a lower-middle-class household and was lucky to be a smart kid in a time when being a nerd could be a ticket out.

I guess not unlike many of us here on HN.

Unlike many of us, his explorations in the corporate world were all short stints. If I’ve kept tabs correctly, he never stayed longer than a year. Sometimes only for weeks.

Apart from that, I often take the pattern you noticed more as confession, penance, and a "tell your children not to walk my way" kind of message. Maybe I read this stuff too generously.


Sure, self awareness is important. When you tell your kids not to walk your way, you take accountability. You say that what you did was bad, and you are accountable for it. You also acknowledge that what you did brought you to where you are, but given the chance you would take a different way. It’s not bad to have moral principles after you’ve done what you fight against, as long as you do it with accountability and self awareness.

OP's post had neither.


Then he should know better the line he’s selling.

“Opt out of capitalism” doesn’t work when you’re trying to feed your family. He offers no alternative, speaks from a place of safety with no acknowledgment that the people he’s addressing don’t have the same safety net as he does.[0]

He’s not wrong. We are all fucked. But if it were as simple as “not participating” (whatever that means), then we wouldn’t be.

[0]: To be fair, he does address others at tech companies; maybe he assumes that everyone working in big tech has a safety net, which is perhaps not as unreasonable as I first thought.

