Hacker News | LudwigNagasena's comments

What are the exact features that require it to be a new language with new syntax?

Reactive DOM updates – When you change state, the compiler tracks dependencies and generates efficient update code. In WebCC C++, you manually manage every DOM operation and call flush().

JSX-like view syntax – Embedding HTML with expressions, conditionals (<if>), and loops (<for>) requires parser support. Doing this with C++ macros would be unmaintainable.

Scoped CSS – The compiler rewrites selectors and injects scope attributes automatically. In WebCC, you write all styling imperatively in C++.

Component lifecycle – init{}, mount{}, tick{}, view{} blocks integrate with the reactive system. WebCC requires manual event loop setup and state management.

Efficient array rendering – Array loops track elements by key, so adding/removing/reordering items only updates the affected DOM nodes. The compiler generates the diffing and patching logic automatically (a generic sketch of keyed diffing follows this list).

Fine-grained reactivity – The compiler analyzes which DOM nodes depend on which state variables, generating minimal update code that only touches affected elements.

From a DX perspective: Coi lets you write <button onclick={increment}>{count}</button> with automatic reactivity. WebCC is a low-level toolkit – Coi is a high-level language that compiles to it, handling the reactive updates and DOM boilerplate automatically.

These features require a new language because they need compiler-level integration – reactive tracking, CSS scoping, JSX-like templates, and efficient array updates can't be retrofitted into C++ without creating an unmaintainable mess of macros and preprocessors. A component-based declarative language is fundamentally better suited for building UIs than imperative C++.
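
A language-agnostic sketch of the keyed diffing idea from the array-rendering point above (this is not Coi's actual generated code, just the general technique, in Python for brevity):

    # Hypothetical, minimal keyed diff: given the old and new key orderings,
    # work out which nodes to remove, create, or move instead of
    # re-rendering the whole list.
    def diff_keys(old_keys, new_keys):
        old_set, new_set = set(old_keys), set(new_keys)
        removed = [k for k in old_keys if k not in new_set]
        added = [k for k in new_keys if k not in old_set]
        # Naive move detection: keys kept in both lists but at different
        # relative positions. Real implementations minimize moves further,
        # e.g. with a longest-increasing-subsequence pass.
        kept_old = [k for k in old_keys if k in new_set]
        kept_new = [k for k in new_keys if k in old_set]
        moved = [k for k, k2 in zip(kept_old, kept_new) if k != k2]
        return removed, added, moved

    print(diff_keys(["a", "b", "c"], ["c", "a", "b", "d"]))
    # ([], ['d'], ['a', 'b', 'c']) -- only 'd' is created; existing nodes are reused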


TL;DR: There is no secret sauce; it's the same set of techniques you've seen in most PostgreSQL scaling guides. Those techniques do work.

The article adds punctuation to its rendering of the original text. That confused me too.

I'll note that this decision isn't coming from the newspaper article's writer; it's coming from any common transliteration of the manuscript that you'll find. But it's clearly a transliteration decision made because the people doing this assume it is an interjection, and they're using modern punctuation rules accordingly.

> But it's clearly a transliteration decision made because the people doing this assume it is an interjection, and they're using modern punctuation rules accordingly.

I think that's what I missed.


It clarifies exactly what that means. It doesn't say that the information has to pass the test of time, only that it is not a place for original reporting, unsourced gossip, etc.

That's a distinction without a difference, in the end you still have an arbitrary bash command that you have to validate.

And it is simply easier to whitelist directories than individual commands. Unix utilities weren't created with fine-grained capabilities and permissions in mind. Whenever you add a new script or utility to a whitelist, you have to actively think about whether any new combination may lead to privilege escalation or unintended effects.


> That's a distinction without a difference, in the end you still have an arbitrary bash command that you have to validate.

No, you don't. You have a command generated by auditable, conventional code (in the agent wrapper) rather than by a neural network.


That command will still have to take some input from the neural network though? And then we're back to the Bobby Tables scenario.

No, that argument makes no sense. SQL injection doesn't happen because of where the input comes from; it happens because of how the input is handled. We can avoid Bobby Tables scenarios while receiving input that influences SQL queries from humans, never mind neural networks. We do it by controlling the system that transforms the input into a query (e.g. by using properly parameterized queries).
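
A minimal sketch of that point using Python's sqlite3 (the table and variable names are just for illustration): the payload is bound as data, so it makes no difference whether a human or a neural network produced it.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT)")

    # Untrusted input -- the classic Bobby Tables payload.
    name = "Robert'); DROP TABLE students;--"

    # Parameterized query: the driver binds `name` as a value, never as SQL text,
    # so the payload is stored as a literal string instead of being executed.
    conn.execute("INSERT INTO students (name) VALUES (?)", (name,))

    print(conn.execute("SELECT name FROM students").fetchall())
    # [("Robert'); DROP TABLE students;--",)]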

Right, in DBs it's proper param binding + prepared statements.

I see what you're saying, makes sense.

FWIW, in analytics there is also an RBAC layer, like "a BI tool acting on behalf of user X shall never make edits to tables Y and Z".


If you are a professional in a sphere like engineering, then getting, e.g., 10k views on your videos is very remarkable and acts as indirect proof of acclaim. But when it is the whole metric, it just overvalues public professions, where 10k views is in itself nothing remarkable. That's the core issue as far as I understand.

(But even for professionals, it's a very gameable metric. There is a whole industry that helps people get material published and appearances arranged for O-1 applications.)


Proof of interest, not acclaim. And online interest is heavily skewed toward the narrow activities of entertainment and education; professional community communication happens, but in far smaller numbers than the other two.

It’s not hard to set up a router/proxy for Claude Code to use something else.


Pearson correlation = cosine of the angle between centered random variables. Finite-variance centered random variables form a Hilbert space, so it's not a coincidence. Standard deviation is the length of the random variable as a vector in that space.
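
A quick numerical check of the sample version of that correspondence (NumPy; the particular variables are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=1000)
    y = 0.5 * x + rng.normal(size=1000)

    # Center the samples: these are the "centered random variables" as vectors.
    xc, yc = x - x.mean(), y - y.mean()

    # Cosine of the angle between the centered vectors...
    cosine = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

    # ...equals the Pearson correlation coefficient.
    print(cosine, np.corrcoef(x, y)[0, 1])  # identical up to floating point

    # And the standard deviation is the vector's length (scaled by sqrt(n)).
    print(np.linalg.norm(xc) / np.sqrt(len(x)), x.std())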


> Such a huge amount of ML can be framed through the lens of kernel methods

And none of them are a reinvention of kernel methods. There is such a huge gap between the Nadaraya–Watson idea and a working attention model that calling it a reinvention is quite a reach.

One might as well say that neural networks trained with gradient descent are a reinvention of numerical methods for function approximation.
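
For reference, the Nadaraya–Watson idea in question is just a kernel-weighted (softmax-like) average over training targets; a minimal NumPy sketch with a Gaussian kernel and an arbitrary bandwidth:

    import numpy as np

    def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.3):
        # Kernel weights between each query point and every training point.
        logits = -((x_query[:, None] - x_train[None, :]) ** 2) / (2 * bandwidth**2)
        weights = np.exp(logits)
        weights /= weights.sum(axis=1, keepdims=True)  # normalize, softmax-style
        return weights @ y_train                       # weighted average of targets

    x_train = np.linspace(0, 2 * np.pi, 50)
    y_train = np.sin(x_train) + 0.1 * np.random.default_rng(0).normal(size=50)
    print(nadaraya_watson(np.array([np.pi / 2]), x_train, y_train))  # close to sin(pi/2) = 1

The gap is everything a working attention model adds on top of this: learned query/key/value projections, multiple heads, and the surrounding trained architecture.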


> One might as well say that neural networks trained with gradient descent are a reinvention of numerical methods for function approximation.

I don't know anyone who would disagree with that statement, and this is the standard framing I've encountered in nearly all neural network literature and courses. If you read any of the classic gradient-based papers, they fundamentally assume this position. Just take a quick read of "A Theoretical Framework for Back-Propagation (LeCun, 1988)" [0]; here's a quote from the abstract:

> We present a mathematical framework for studying back-propagation based on the Lagrangian formalism. In this framework, inspired by optimal control theory, back-propagation is formulated as an optimization problem with nonlinear constraints.

There's no way you can read that and not recognize that you're reading a paper on numerical methods for function approximation.
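
In its barest form, that framing is just this (a toy least-squares fit by gradient descent; the target function and numbers are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 2.0 * x - 0.5 + 0.1 * rng.normal(size=200)  # unknown function plus noise

    # Approximate y ~= w*x + b by gradient descent on the mean squared error:
    # an optimization problem, i.e. numerical function approximation.
    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(2000):
        err = w * x + b - y
        w -= lr * 2 * np.mean(err * x)  # d(MSE)/dw
        b -= lr * 2 * np.mean(err)      # d(MSE)/db

    print(w, b)  # close to 2.0 and -0.5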

The issue is that Vaswani et al. never mention this relationship.

0. http://yann.lecun.com/exdb/publis/pdf/lecun-88.pdf


If you mention every mathematical relationship one can think of in your paper, it’s going to get rejected for being way over the page limit lol.


The OLS estimator is the minimum-variance linear unbiased estimator even without the assumption of a Gaussian distribution (the Gauss–Markov theorem).
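
A quick Monte Carlo sanity check of the "no Gaussianity needed" part, with deliberately non-Gaussian errors (this only illustrates unbiasedness; the minimum-variance claim is the Gauss–Markov theorem itself):

    import numpy as np

    rng = np.random.default_rng(0)
    n, trials = 100, 2000
    X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
    beta_true = np.array([1.0, 3.0])

    estimates = []
    for _ in range(trials):
        noise = rng.uniform(-1, 1, n)  # uniform errors: zero mean, finite variance, not Gaussian
        y = X @ beta_true + noise
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS
        estimates.append(beta_hat)

    print(np.mean(estimates, axis=0))  # ~[1.0, 3.0]: unbiased without Gaussian errors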


Yes, and if I remember correctly, you get the Gaussian because it's the maximum-entropy (fewest additional assumptions about the shape) continuous distribution for a given variance.


And given a mean.
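
Spelled out, the claim is that the Gaussian solves the constrained maximization of differential entropy for a fixed mean and variance:

    \max_{p} \; -\int p(x)\,\ln p(x)\,dx
    \quad \text{s.t.} \quad
    \int p(x)\,dx = 1, \qquad
    \int x\,p(x)\,dx = \mu, \qquad
    \int (x-\mu)^2\,p(x)\,dx = \sigma^2,

    % whose solution (via Lagrange multipliers) is
    p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).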

