> In Odin all variables are automatically zero initialized. Not just integers and floats. But all structs as well. Their memory is filled with zeroes when those variables are created.
> This makes ZII extra powerful! There is little risk of variables accidentally being uninitialized.
The cure is worse than the problem. I don't want to 'safely' propagate my incorrect value throughout the program.
If we're in the business of making new languages, why not compile-time error for reading memory that hasn't been written? Even a runtime crash would be preferable.
Being initialized to zero is at least repeatable, so if you forget to initialize something you'll notice it immediately in testing. The worst part about uninitialized variables is that they frequently are zero and things seem to work until you change something else that previously happened to use the same memory.
> The worst part about uninitialized variables is that they frequently are zero and things seem to work until you change something else that previously happened to use the same memory.
This is not the whole story. You're making it sound like uninitialized variables _have_ a value but you can't be sure which one. This is not the case. Uninitialized variables don't have a value at all! [1] has a good example that shows how the intuition of "has a value but we don't know which" is wrong:
    use std::mem;

    // For any actual u8 value, at least one of these comparisons is true.
    fn always_returns_true(x: u8) -> bool {
        x < 120 || x == 120 || x > 120
    }

    fn main() {
        // UB: x is claimed to be initialized, but it has no value at all.
        let x: u8 = unsafe { mem::MaybeUninit::uninit().assume_init() };
        assert!(always_returns_true(x));
    }
If you assume an uninitialized variable has a value (but you don't know which), this program should run to completion without issue. But this is not the case. From the compiler's point of view, x doesn't have a value at all, and so it may choose to unconditionally return false. This is weird, but it's the way things are.
It's a Rust example, but the same can happen in C/C++. In [2], the compiler turned a sanitization routine in Chromium into a no-op because they had accidentally introduced UB.
[1]: https://www.ralfj.de/blog/2019/07/14/uninit.html
[2]: https://issuetracker.google.com/issues/42402087?pli=1
The unsafe part is supposed to tell you that any assumptions you might make might not hold true.
> You're making it sound like uninitialized variables _have_ a value but you can't be sure which one.
Because that's a valid conceptualization you could have for a specific language. Your approach and the other person's approach are both valid but different, and as I said in another comment, they come with different compromises.
If you are thinking like some C programmers, then `int x;` can either have a value which is just not known at compile time, or you can think of it as having a special value of "undefined". The compiler could work with either definition; it just happens that most C compilers nowadays, and Rust at least, use the definition you speak of, for better or for worse.
> C programmers, then `int x;` can either have a value which is just not known at compile time
I am pretty sure that in C, reading an uninitialized variable is "undefined behavior", and the program is pretty much allowed, and even expected, to crash — for example, if the variable turned out to be on an unallocated page of stack memory.
So literally the variable does not have a value at all, as that part of the address space is not mapped to physical memory.
Interestingly enough, C++26 introduces "erroneous behavior" and uses it for uninitialized variables, rather than undefined behavior.
It is "undefined behaviour" in C (which is an overloaded term which I will not discuss why I hate it in this comment). But my point was that is how many people conceptualize it, and for many things people do expect it to be one of the possible values, just not knowable ahead of time.
However, I was using that "C programmers" bit to explain the conceptualization aspect, and how it also applies to other languages. Not every language, even systems languages, have the same concepts as C, especially the same construction as "UB".
You're assuming that's the style of programming others want to program in. Some people want the "ZII" approach. Your approach is a trade-off with costs which many others would not want to make. So it's not "preferable", it's a different compromise.
That's clearly correct, as e.g. Go uses this style and there are lots of happy Go users.
I want to push back on the idea that it's a "trade-off", though -- what are the actual advantages of the ZII approach?
If it's just more convenient because you don't have to initialize everything manually, you can get that with the strict approach too, as it's easy to opt-in to the ZII style by giving your types default initializers. But importantly, the strict approach will catch cases where there isn't a sensible default and force you to fix them.
Is it runtime efficiency? It seems to me (but maybe not to everyone) that initialization time is unlikely to be significant, and if you make the ZII style opt-in, you can still get efficiency savings when you really need them.
The explicit initialization approach seems strictly better to me.
> It seems to me... that initialization time is unlikely to be significant
The thing is, initialization cost is a lot more than you think it is, especially when it's done on a per-object level rather than a "group" level.
This is kind of the point of trying to make the zero value useful: it's trivially initialized. In languages that are much stricter in their approach, initialization is done at that per-object level, which means the cost goes from being anywhere between free (VirtualAlloc/mmap has to produce zeroed memory anyway) and trivially linear (e.g. a memset), to a lot of nested hierarchies of initialization (e.g. a for-loop with a constructor call for each value).
It's non-obvious why the "strict approach" would be worse, but it's more about how people actually program rather than a hypothetical approach to things.
So of course each style is about trade-offs. There are no solutions, only trade-offs. And different styles will have different trade-offs, even if they are not immediately obvious and require a bit of experience.
A good little video on this is from Casey Muratori, "Smart-Pointers, RAII, ZII? Becoming an N+2 programmer": https://www.youtube.com/watch?v=xt1KNDmOYqA
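To make the group-versus-per-object point concrete, here is a small illustrative Odin sketch (not from the thread; the type and procedure names are made up, and it assumes make() hands back zeroed memory, which is Odin's default):

    package main

    import "core:fmt"

    Particle :: struct {
        pos:      [2]f32,
        velocity: [2]f32,
        lifetime: f32,
    }

    // Per-object "constructor"-style initialization (illustrative only).
    particle_init :: proc(p: ^Particle) {
        p.pos      = [2]f32{0, 0}
        p.velocity = [2]f32{0, 0}
        p.lifetime = 0
    }

    main :: proc() {
        // Group-level: make() returns memory that is already zeroed,
        // so the whole slice is initialized in one cheap step.
        particles := make([]Particle, 1024)
        defer delete(particles)

        // Per-object: the same end state, but as a per-element call.
        for i in 0..<len(particles) {
            particle_init(&particles[i])
        }

        fmt.println(len(particles), particles[0])
    }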
That's what I was trying to get at by talking about making ZII opt-in. If you're using a big chunk of memory — say a matrix, or an array of matrices — it's a win if you can zero-initialize it cheaply or for free, sure. In JS, for example, you'd allocate an ArrayBuffer and use it immediately (via a TypedArray or DataView).
But still, in other parts of the program, ZII is bad! That local or global variable pointing at an ArrayBuffer should definitely not be zero-initialized. Who wants a null pointer, or a pointer to random memory of unknown size? Much better to ensure that a) you actually construct a new TypedArray, and b) you don't use it until it's constructed.
I guess if you see the vast majority of your action happening inside big arrays of structs, pervasive ZII might make sense. But I see most of the action happening in local and temporary variables, where ZII is bad and explicit initialization is what you want.
Moving from JavaScript to TypeScript, to some extent you can get the best of both worlds. TS will do a very good (though not perfect) job of forcing you to initialize everything correctly, but you can still use TypedArray and DataView and take advantage of zero-initialization when you want to.
ZII for local variables reminds me of the Smalltalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know Smalltalk, but in Obj-C, to the best of my knowledge, most serious programmers think messages to nil are a bad idea and a source of bugs.
Maybe this is another aspect where the games programming mindset is skewing things (besides the emphasis on low-level performance). In games, avoiding crashes is super important and you're probably willing to compromise on correctness in some cases. In most non-games applications, correctness is super important, and crashing early if something goes wrong is actually preferable.
> the Smalltalk / Obj-C thing where you could send messages to nil and they're silently ignored.
Messages sent to nil (the Smalltalk UndefinedObject instance) are not silently ignored — they go through #doesNotUnderstand:.
Sometimes that runtime message lookup has been used to extend behavior —
1986 "Encapsulators: A New Software Paradigm in Smalltalk-80"
https://dl.acm.org/doi/pdf/10.1145/28697.28731
Making it opt-in means making the hierarchical approach the default. Whatever you make "opt-in", you are by default discouraging its use. And what you are suggesting as the default is not what I wanted from Odin (I am the creator, by the way).
I normally say "try to make the zero value useful" and not "ZII" (which was a mostly jokey term Casey Muratori came up with to reflect against RAII) because then it is clear that there are cases when it is not possible to do ZII. ZII is NOT a _maxim_ but what you should default to and then do something else where necessary. This is my point, and I can probably tell you even more examples of where "ZII is bad" than you could think of, but this is what is a problem describing the problem to people: they take it as a maxim not a default.
And regarding pointers, I'm in the camp that nil pointers are the most trivial type of invalid pointer to catch, empirically speaking. Yes, they cause problems, but because of how modern systems are structured with virtual memory, they are empirically trivial to catch and deal with. Yes, you could design the type system of a language to make nil pointers not be a thing unless you explicitly opt into them, but then that has another trade-off which may or may not be a good thing depending on the application.
The Objective-C thing is just a poorly implemented system for handling `nil`. It should have been more consistent but wasn't. That's it.
I'd argue "correctness" is important in games too, but the conception of "correctness" is very different there. It's not about provability but testability, which are both valid forms of "correctness" but very different.
And in some non-game applications, crashing early is also a very bad thing, and for some games, crashing early is desired over corrupted saves or other things. It's all about which trade-offs you can afford, and I would not try to generalize too much.
Yeah, that's fair, clearly this sort of thing is why we have multiple languages in the first place!
I don't think I'll ever abandon the idea that making code "correct by construction" is a good goal. It might not always be achievable or practical but I strongly feel it's always something to aim for. For me, silent zero initialization compromises that because there isn't always a safe default.
I think nil pointers are like NaNs in arithmetic. When a nil or a NaN crops up, it's too late to do anything useful with it, you generally have to work backwards in the debugger to figure out where the real problem started. I'd much rather be notified of problems immediately, and if that's at compile time, even better.
In the real world, sure, I don't code review every single arithmetic operation to see if it might overflow or divide by zero. But when the compiler can spot potential problem areas and force me to check them, that's really useful.
That would require having constructors, which is not something Odin will ever have, nor should it. However, you can just initialize with a constant or a variable, or use a procedure to initialize with. Odin is a C alternative after all, so it's a fully imperative procedural language.
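A minimal sketch of that procedure-as-initializer pattern (the names are made up; it assumes Odin's default parameter values):

    package main

    import "core:fmt"

    Player :: struct {
        health: int,
        pos:    [2]f32,
    }

    // No constructors: just a procedure that returns an initialized value.
    player_make :: proc(health: int = 100) -> Player {
        return Player{health = health}
    }

    main :: proc() {
        p := player_make()   // explicit initialization via a procedure
        q: Player            // or just take the (useful) zero value
        fmt.println(p, q)
    }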
Not sure if anyone has mentioned it, but you can additionally disable ZII for any variable by using "---" as the value in your declaration, which is useful when writing high-performance code. Here is an example:
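(A minimal sketch; the buffer size and names are just for illustration.)

    package main

    import "core:fmt"

    main :: proc() {
        a: int               // zero-initialized, a == 0
        b: [4096]u8          // zero-initialized, every byte is 0
        c: [4096]u8 = ---    // explicitly uninitialized: the zeroing is skipped
        c[0] = 1             // fine, as long as you write before you read
        fmt.println(a, b[0], c[0])
    }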
I always find this opinion intriguing, where it's apparently fine that globals are initialized to zero, but you are INSANE to suggest it's the default for locals. What kind of programs are y'all writing?
Clearly the lack of zeroing in C was a trade-off at the time. Just like UB on signed overflow. And now people seem to consider them "obvious correct designs".
I'd prefer proper analysis for globals too, but that is substantially harder.
"Improperly using a variable before it is initialized" is a very common class of bug, and an easy programming error to make. Zero-initializing everything does not solve it! It just converts the bugs from ones where random stack frame trash is used in lieu of the proper value into ones where zeroes are used. If you wanted a zero value, it's fine, but quite possibly you wanted something else instead and missed it because of complex initialization logic or something.
What I want is a compiler that slaps me when I forget to initialize a proper value, not one that quietly picks a magic value it thinks I might have meant.
I agree that zero-initializing doesn't really help avoid incorrect values (which is what the author focuses on) but at least you don't have UB. This is the main selling point IMO.
Then why not just require explicit initialization? If "performance" is your answer, then adding an optimization to the compiler that detects zero initialization and skips the writes when the allocator already guarantees zeroed memory would be a solution, and a much safer alternative. Replacing one implicit behavior with another is hardly a huge success...
"Often enough" is what's introducing the risk for bugs here.
I "often enough" drive around with my car without crashing. But for the rare case that I might, I'm wearing a seatbelt and have an airbag. Instead of saying "well I better be careful" or running a static analyzer on my trip planning that guarantees I won't crash. We do that when lives are on the line, why not apply those lessons to other areas where people have been making the same mistakes for decades?
Please, can we stop assuming every single software has actual lives on the line? These comment threads always devolve into implicit advertisement of Rust/Ada and other super strict languages because “what about safety?!”
It is impossible to post about a language on this forum before the pearl clutching starts if the compiler is a bit lenient instead of triple checking every single expression and making you sign a release of liability.
Sometimes, ergonomics and ease-of-programming win over extreme safety. You'll find that billion dollar businesses have been built on zero-as-default (like in Go), and often people reaching for it or Go are just writing small personal apps, not a cruise missile navigation system.
I'm actually with you on the ease of use. I don't see this as the opposite to safety. To me, making it harder for me to make mistakes means it's easier to use. That is, easier to use right and harder to use wrong. I'm not a Rust or Ada advocate. I'm just saying that making it harder to make the same mistakes people have been doing for decades would be a good thing. That would contribute to ease-of-use in my book since there are fewer things you need to think about that could possibly go wrong.
Or are you saying that a certain level of bugs is fine and we are at that level? Are you fine with the quality of all the software out there? Then yes, this discussion is probably not for you.
> Are you fine with the quality of all the software out there?
This is the kind of generalisation I'm ranting against.
It is not constructive to extrapolate a discussion about a single, perhaps niche, programming language into advice applicable to "all the software out there". But you probably knew that already.
TL;DR: I disagree, and I will say upfront that my views on software are extreme. I think quality is a glaring issue in most software.
There is a lot of subpar software out there, and the rest is largely decent-but-not-great. If it's security I want, that's commonly lacking, and hugely so. If it's performance I want, that's commonly lacking[0]. If it's documentation...you get the idea. We should have rigor by default, and if that means software is produced slower, I frankly don't see the problem with that. (Although commercial viability has gone out the window unless big players comply.) Exceptions will be carved out depending on the scope of the program. It's much harder to add in rigor post hoc. The end goal is quality.
The other issue is that a program's scope is indeed broader than controlling lives, and yet there are many bad outcomes. If I just get my passwords stolen, or my computer crashes daily, or my messaging app takes a bit too long to load every time, what is the harm? Of course those are wildly different outcomes, but I think at least the first and second are obviously quality issues, and I think the third is also important.
Why is the third important? When software is such an integral part of users' lives, minor issues cause faults that prompt workarounds or inefficiencies. [1] discusses a similar line of thought. I know I personally avoid doing some actions commonly (e.g. checking LinkedIn) because they involve pain points around waiting for my browser to load and whatnot, nothing major but something that's always present. Software ("automation") in theory makes all things that the user implicitly desires to be non-pain points for the user.
An interesting blend of issues is system dialog password prompts, which users will generally try to either avoid or address on autopilot, which tends to reduce security. Or take system update restarts, which induce not updating frequently. Or take what is perhaps my favorite invective: blaming Electron apps. One Electron app can be inconvenient. Multiple Electron apps can be absurd. I feel like I shouldn't have to justify calling out Electron on HN, but I do, but I won't here.
And take unintended uses: if I need to set down an injured person across two chairs, I sure hope a chair doesn't break or something. Sure, that's not the intended use case of a chair, but I don't think it's unreasonable that a well-made chair would not fail to live up to my expectations. I wouldn't put an elephant on the chair either way, because intuitively I don't expect that much. Even then, users may expect more out of software than is reasonable, but that should be remedied and not overlooked.
Do not mistake having users for having a quality product.
You seem to use eager evaluation of usability whereas in practice most people only need lazy evaluation. We use risk assessment of going from point A to point B, two concrete points. You seem to use risk assessment equivalent to JavaScript's array.flat(Infinity).
It is undefined behavior in C. In many languages it is defined behavior; for instance in Go, dereferencing a nil pointer explicitly panics, which is a well-defined operation. It may, of course, crash your program, and the whole topic of 'should pointers even be able to be nil?' is a valid separate other question, but given that they exist, the operation of dereferencing a nil pointer is not undefined behavior in Go.
To many people reading this, this may be a "duh", but I find it worth pointing out, because there are still some programmers who believe that C is somehow the "default" or "real" language of a computer and that everything about C is true of other languages, but that is not the case. Undefined behavior in C is undefined in C, specifically. Try to avoid taking ideas about UB out of C and, to the extent that they are related (which slowly but surely decreases over time), C++. It's the language, not the hardware, that defines UB.
A compiler doesn't have to accept all possible programs. If it can't prove that a variable is initialized before being read, then it can simply require that you explicitly initialize it.
Not accepting many C programs, maybe. It's pretty easy to create a language where declaration is initialization of some sort, as evidenced by the large number of languages in common use where, one way or another, that's already the case.
This isn't some whacko far out idea. Most languages already today don't have any way (modulo "unsafe", or some super-carefully declared and defined method that is not the normal operation of the language) of reading uninitialized memory. It's only the residual C-likes bringing up the rear where this is even a question.
(I wouldn't count Odin's "explicitly label this as not getting initialized"; I'm talking about defaults being sharp and pointy. If a programmer explicitly asks for the sharp and pointy, then it's a valid choice to give it to them.)
Curiously, C# does both. It uses compile-time checks to stop you from accessing an uninitialized local and from exiting a struct constructor without initializing all fields; and yet, the CLR (the VM C# compiles to) zero-initializes everything anyway.
This is a pain. I recently switched from Java (and its whole Optional/null mess) to C#. I was initially impressed by its nullable checks, but then I discovered 'default'. Now I gotta check that Guids aren't 0000...? It makes me miss the Java situation.
Only if you go out of your way to author a method with a (Guid someGuid = default) argument. I've never seen it happen with Guids; if someone gives you default(Guid), they did it on purpose. It's no different from explicitly setting an integer-typed UserID property to `0`.
If supplying Guid is optional, you just make it Guid?.
To be fair, I don't think offering default(T) by default (ha) is the best choice for structs. In F#, you have to explicitly do `Unchecked.defaultof` and otherwise it will just not let you have your way - it is watertight. I much prefer this approach even if it can be less convenient at times.
No, that's just the memory model of the CLI and the choice made by C#. By default, it emits the localsinit flag for methods, which indicates that all local variables must be zero-initialized first. On top of that, you can't really access uninitialized memory in C# and F# anyway unless you use unsafe. It's a memory safety choice indeed, but it has nothing to do with P/Invoke.
These are not very good arguments and Casey Muratori is hugely biased against RAII and C++ techniques for some reason, probably familiarity with C.
He thinks that every RAII variable is a failure point and that you only have to think about ownership if you are using RAII, so it incurs mental overhead.
The reality is that you have to understand the lifetime and ownership of your allocations no matter what. If the language does nothing for you the allocation will still have a lifetime and a place where the memory is deallocated.
He also talks about combining multiple allocations into a single allocation that then gets split into multiple pointers, but that could easily be done in C++.
When I first heard about Odin, I thought, why another C replacement?! What's wrong with rust or zig? Then, after looking into it, I had a very similar experience to the author. Someone made a language just for me! It's for people who prefer C over C++ (or write C with a C++ compiler). It has the things that a C programmer has to implement themselves like tagged unions, slices, dynamic arrays, maps, and custom allocators. While providing quality of life features like distinct typing, multiple return values, and generics. It just hits that sweet spot. Now, I'm spoiled.
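For a rough taste of a few of those features (an illustrative sketch, identifiers made up):

    package main

    import "core:fmt"

    Meters :: distinct f32                // distinct typing

    Value :: union {                      // tagged union
        int,
        string,
    }

    // Multiple return values.
    divide :: proc(a, b: int) -> (int, bool) {
        if b == 0 do return 0, false
        return a / b, true
    }

    main :: proc() {
        d: Meters = 3.5
        v: Value = "hello"

        switch x in v {                   // switch on the union's tag
        case int:    fmt.println("int:", x)
        case string: fmt.println("string:", x)
        case:        fmt.println("empty")
        }

        q, ok := divide(10, 2)
        fmt.println(d, q, ok)

        nums: [dynamic]int                // zero value is immediately usable
        defer delete(nums)
        append(&nums, 1, 2, 3)            // allocates with context.allocator
        fmt.println(nums)
    }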
May I ask what specifically you dislike about Rust (and Zig)? All the features you mentioned are also present in these languages. Do you care about a safety vs. simplicity of the language, or something else entirely?
It's indeed some kind of sweet spot. It has those things from C I liked. And it made my favorite workflows from C into "first class citizens". Not everyone likes those workflows, but for people like me it's pretty ideal.
By far one of the best languages I have ever used professionally and as a hobbyist, which is why I donate every month to keep the project alive.
I am dropping the link here: those who can should donate, and even if you don't use Odin, you should consider supporting this and other similar endeavors so they can't stop the signal and it keeps going: https://github.com/sponsors/odin-lang
You can do lots of the same things in C too, as the author mentions, without too much pain. See for example [1] and [2] on arena allocators (which can be used exactly like the temporary allocator mentioned in the post) and on accepting that the C standard library is fundamentally broken.
From what I can tell, the only significant difference between C and Odin mentioned in the post is that Odin zero-initializes everything whereas C doesn't. This is a fundamental limitation of C but you can alleviate the pain a bit by writing better primitives for yourself. I.e., you write your own allocators and other fundamental APIs and make them zero-initialize everything.
So one of the big issues with C is really just that the standard library is terrible (or, rather, terribly dated) and that there is no drop-in replacement (like in Odin or Rust where the standard library seems well-designed). I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
The author literally says that they used to do that in C. And I've done a lot of those things in C too; it just doesn't mean that C has good defaults or good ergonomics for many of the tasks other languages have been designed to be good at.
I would not agree that the ergonomics are so much better in Odin that switching to another language is worth giving up the advantages of a much larger ecosystem. For a hobby project this may not matter at all, of course.
Odin has very good FFI with its `foreign import` system, so you can still use libraries written in C, Objective-C, or any other language. And Odin already supports tools like asan, tsan, etc. too. So what, in practice, are the things you are giving up by using Odin instead of C?
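For what it's worth, a minimal sketch of what calling into libc looks like, written from memory (the "system:c" collection name and the attribute syntax are my best recollection, not verified against a compiler):

    package main

    // Bind to the system C library and call one of its functions directly.
    foreign import libc "system:c"

    @(default_calling_convention="c")
    foreign libc {
        puts :: proc(s: cstring) -> i32 ---
    }

    main :: proc() {
        puts("hello from libc, via Odin's foreign import")
    }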
I am not a C programmer, but I have been wondering this for a long time: People have been complaining about the standard library for literal decades now. Seemingly, most people/companies write their own abstractions on top of it to ease the pain and limit exposure to the horrors lurking below.
Why has nobody come along and created an alternative standard library yet? I know this would break lots of things, but it’s not like you couldn’t transition a big ecosystem over a few decades. In the same time, entire new languages have appeared, so why is it that the C world seems to stay in a world of pain willingly?
Again, mind you, I’m watching from the outside, really just curious.
> Why has nobody come along and created an alternative standard library yet?
Probably, IMO, because not enough people would agree on any particular secondary standard such that one would gain enough attention and traction[1] to be remotely considered standard. Everyone who already has their own alternatives (or just wrappers around the current stdlib) will most likely keep using them, unless by happenstance the new secondary standard agrees (by definition, a standard needs to be at least somewhat opinionated) closely with their local work.
Also, maintaining a standard, and a public implementation of it, could be a faffy and thankless task. I certainly wouldn't volunteer for that!
[Though I am also an outsider on the matter, so my thoughts/opinions don't have any particular significance, and an insider might come along and tell us that I'm barking up the wrong tree]
--------
[1] This sort of thing can happen, but is rare. jQuery became an unofficial standard for DOM manipulation and related matters for quite a long time, to give one example - but the gulf between the standard standard (and its bad common implementations) at the time and what libraries like jQuery offered was much larger than the benefits a secondary C stdlib standard might give.
> Why has nobody come along and created an alternative standard library yet?
Everybody has created their own standard library. Mine has been honed over a decade, why would I use somebody else's? And since it is designed for my use cases and taste, why would anyone use mine?
Because to be _standard_, it would have to come with the compiler toolchain. And if it's scattered around on the internet, people will not use it.
I tried to create my own alternative about a decade ago which eventually influenced my other endeavours.
But another big reason is that people use C and its stdlib because that's what it is. Even if it is bad, it's the "standard" and trivially available. Most code relies on it, even code that has its own standard library alternative.
> Why has nobody come along and created an alternative standard library yet?
Because people are so terribly opinionated that the only common denominator is that the existing thing is bad. For every detail that somebody will argue a modern version should have, there will be somebody else arguing the exact opposite. Both will be highly opinionated and for each of them there is probably some scenario in which they are right.
So, the inability of the community to agree on what "good" even means, plus the extreme heterogeneity of the use cases for C, is probably the answer to your question.
> I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
I suppose glib comes the closest to this? At least the closest that actually sees fairly common usage.
I never used it myself though, as most of my C has been fairly small programs and I never wanted to bother people with the extra dependency.
As long as programmers view a program as a mechanism that manipulates bytes in flat memory, we will be stuck in a world where this kind of topic seems like a success. In that world, an object puts some structure above those memory bytes and obviously an allocator sounds like a great feature. But you'll always have those bytes in the back of your mind and will never be able to abstract things without the bytes in memory leaking through your abstractions. The author even gives an example for a pretty simple scenario in which this is painful, and that's SOA. As long as your data abstraction is fundamentally still a glorified blob of raw bytes in memory, you'll be stuck there.
Instead, data needs to be viewed more abstractly. Yes, it will eventually manifest as bytes in some memory cell, but how that is laid out and moved around is not the concern of you, the programmer, as a user of data types. Looking at some object attributes foo.a or foo.b is just that - the abstract access of some data. Whether a and b are adjacent in memory, or even on the same machine, or even backed by data cells in some physical memory bank, should be immaterial. Yes, in some very specific (!) cases, optimizing for speed makes it necessary to care about locality, but for those cases the language or library needs to provide mechanisms to specify those requirements, and then it will lay things out accordingly. But it's not helpful if we all keep writing in some kind of glorified assembly language. It's 2025 and "data type" needs to mean something more abstract than "those bytes in this order laid out in memory like this", unless we are writing hand-optimized assembly code, which most of us never do.
Well, the DOD people keep finding that caring about the cache is more helpful regarding performance than the casual programmer might think. Even compiler people are thinking about ditching the classical AST for something DOD-based. I admin HPC systems as a day job, and I rarely see programmers aware of modern CPU design and how to structure your data such that it actually performs. I get that you'd like to add more abstractions to make programming easier, but I worry that this only adds to the (already rampant) inefficiency of most programs. The architecture is NOT irrelevant. And with every abstraction you put in, you increase the distance the programmer has from knowing how the architecture works. Maybe that's fine for Python and other high level stuff, but it is not a good idea IMO when dealing with programs with longer runtimes...
> caring about the cache is more helpful regarding performance than the casual programmer might think.
Cache is easily the most important consideration if you intend to go fast. The instructions are meaningless if they or their dependencies cannot physically reach the CPU in time.
The latency difference between L1/L2 and other layers of memory is quite abrupt. Keeping workloads in cache is often as simple as managing your own threads and tightly controlling when they yield to the operating system. Most languages provide some ability to align with this, even the high level ones.
IMO, DOD shows that you don’t have to sacrifice developer ergonomics for performance.
ECS is vastly superior as an abstraction to pretty much everything that we had before in games. Tightly coupled inheritance chains of the 90s/2000s were minefields of bugs.
Of course perhaps not every type of app will have the same kind of goldilocks architecture, but I also doubt anyone will stumble into something like that unless they’re prioritizing it, like game programmers did.
I won't get into it too much but virtually no one needs ECS, and if you have to ask how to do it, it's not for you. There are much better ways to organize a game for most people than the highly generic relational-database-like structure that is ECS. ECS does make sense in certain contexts but most people do not need it.
But I agree that DOD in practice is not a compromise between performance and ergonomics, and Odin kind of shows how that is possible.
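For instance, Odin's #soa arrays keep the array-of-structs syntax while storing each field as its own array under the hood (a sketch from memory; the struct and names are made up):

    package main

    import "core:fmt"

    Entity :: struct {
        pos:    [2]f32,
        health: int,
    }

    main :: proc() {
        // Stored as one array of pos and one array of health internally,
        // but indexed and accessed like a normal array of structs.
        entities: #soa[1024]Entity

        entities[0].health = 10
        entities[0].pos    = [2]f32{1, 2}

        fmt.println(entities[0].health, entities[0].pos)
    }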
That's great! Let the compiler figure out the optimal data layout then! Of course the architecture is relevant. But does everybody need to consider L2 and L3 sizes all the time? Optimizing this is for machines, with very rare exceptions. Expecting every programmer to do optimal data placement by hand is similar to expecting every programmer to call malloc and free in the right order and the correct number of times. And we know how reliable that turned out.
The compiler cannot know the _purpose_ of your program, and thus cannot "figure out the optimal data layout". It's metaphysically not possible, let alone technically.
Not everybody needs to worry about L2 or L3 most of the time, but if you are using a systems-level programming language where it might be of a concern to you at some point, it's extremely useful to be able to have that control.
> expecting every programmer to call malloc and free in the right order
The point of custom allocators is to not need to do the `malloc`/`free` style of memory allocation, and thus reduce the problems which that causes. And even if you do still need that style, Odin and many other languages offer features such as `defer` or even the memory tracking allocator to help you find the problems. Just like what was said in the article.
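For reference, the tracking-allocator pattern mentioned there looks roughly like this (recalled from Odin's documentation; the exact field names on the tracking entries are my best recollection):

    package main

    import "core:fmt"
    import "core:mem"

    main :: proc() {
        // Wrap the default allocator so every allocation is recorded.
        track: mem.Tracking_Allocator
        mem.tracking_allocator_init(&track, context.allocator)
        context.allocator = mem.tracking_allocator(&track)

        defer {
            // Anything still in the map at scope exit was never freed.
            for _, entry in track.allocation_map {
                fmt.printf("leaked %v bytes @ %v\n", entry.size, entry.location)
            }
            mem.tracking_allocator_destroy(&track)
        }

        xs := make([dynamic]int)
        append(&xs, 1, 2, 3)
        // no delete(xs): the tracking allocator reports it when main returns
    }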
I am reluctant to believe compiler optimisations can do everything. Kind of reminds me of the time when people thought auto parallelisation would be a plausible thing. It never really happened, at least not in a predictably efficient way.
> That's great! Let the compiler figure out the optimal data layout then!
GHC, which is without a doubt the smartest compiler you can get your grubby mitts on, is still an extremely stupid git that can't be trusted to do basic optimizations. Which is exactly why it exposes so many special intrinsic functions. The "sufficiently smart compiler" myth was thoroughly discounted over 20 years ago.
> As long as programmers view a program as a mechanism that manipulates bytes in flat memory...
> Yes, it will eventually manifest in memory as bytes in some memory cell...
So people view a program the way the computer actually deals with it? And the way they need to optimize for, since they are writing programs for that machine?
So what is an example of the abstraction that you are talking about? Is there a language that already exists that is closer to what you want? Otherwise you are talking vaguely and abstractly, and it doesn't really help anyone understand your point of view.
Real world example. You go sit in your ICE car. You press the gas pedal and the car starts moving. And that's your mental model. Depressing pedal = car moves. You do not think "depress pedal" = "more gasoline to the engine" = "stronger combustion" = "higher rpm" = "higher speed". But that's the level those C and C-like language discussions are always on. The consequence of you using this abstraction in your car is that switching to a hybrid or, lately, an EV is seamless for most people. Depress pedal, vehicle moves faster. Whether there is a battery involved or some hydrogen magic or an ICE is insubstantial. Most of the time. Exceptions are race track drivers. But even those drop off their kids at school, during which they don't really care what's under the hood as long as "depress pedal" = "vehicle moves faster".
This may be true, but it's also false. Many regular drivers have an understanding of how the machine they're driving works. Mechanical sympathy is one of the most important things I've ever learnt. It applies to software as well. Knowing how the data structures are laid out in memory, knowing how the cache works, knowing how the compiler messes with the loops and the variables. These aren't necessarily vital information, and good compilers mean that you can comfortably ignore much of these things, but this knowledge definitely makes you a better developer. Same as knowing how the fuel injection system or the aspiration of your ICE will make you a better driver.
I'm totally with you that it's useful knowledge. One of the main differences between a Youtube/bootcamp trained programmer and a university-CS-educated software engineer, though either "side" has outliers too.
But there is a fine line between having general understanding of the details of what's going on inside your system and using that knowledge to do very much premature optimizations and getting stuck in a corner that is hard to get out of. Large parts of our industry are in such a corner.
It's fun to nerd out about memory allocators, but that's not contributing to overall improvements of software engineering as a craft which is still too much ad hoc hacking and hoping for the best.
Actually I do, and I include the inertia and momentum of every piece of the drive-train as well, and the current location of the center of gravity. I'm thinking about all of these things through the next predicted 5 seconds or so at any given time. It comes naturally and subconsciously. To say nothing of how you really aren't going to be driving a standard transmission without that mental model.
Your analogy is appropriate for your standard American whose only experience with driving a car is the 20 minute commute to work in an automatic, and thus more like a hobbyist programmer or a sysadmin than someone whose actual craft is programming. Do you really think truckers don't know in their gut what their fuel burn rate is based on how far they've depressed the pedal?
Uhhhhh that's kind of how I think about the gas pedal though. There's some lag. The engine might stall a bit if you try to accelerate uphill in a wrong way. There's ideal RPM range. Etc.
And you were perhaps asking about programming languages. Python does not model objects as bytes in physical memory. Functional languages normally don't. That all has consequences, some of which the "close to the metal" folks don't like. But throwing the "but performance" argument at anything that moves us beyond the 80s is really getting old.
Thank you for telling me you have no idea why people want or need to use a systems-level programming language.
And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means your reading comprehension isn't very high.
In your analogy, it's still extremely oversimplified, because what about a manual car, which is all I have ever driven? I don't have just acceleration and brake, but also a clutch. I also have many other things to deal with too. It's nowhere near as simple as you are making out, which kind of makes your analogy useless.
> Thank you for telling me you have no idea why people want or need to use a systems-level programming language.
> And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means your reading comprehension isn't very high.
Really? Two insults packaged into two paragraphs? Was that really necessary? It's possible to discuss technical disagreements without insulting others.
I'm doing systems-level programming every day, some of it involves C. It provides me with the perspective from which I'm expressing my views. There are other views, thankfully, and a discussion allows to highlight the differences and perhaps provide everybody with a learning opportunity. That's what I'm here for.
Obviously I saw that you asked for a language and I replied to that. I separated the concrete answer to avoid getting things mixed up with the more general point.
In the kinds of applications that require a systems language you need to know the object layouts, it isn’t avoidable. Algorithm selection over those objects is dependent on the physical object layout and hardware architecture based on the use case. The compiler doesn’t do any of this and largely can’t because it doesn’t understand what you are trying to do. It has nothing to do with “hand-optimized assembly code”.
You are making a classic “sufficiently smart compiler” argument. These types of problems can’t be automagically solved without strong general AI inside the compiler. See also: SIMD, auto-parallelization, etc. We don’t have strong general AI, never mind inside the compiler.
Until we have such a compiler, you will be dependent on people caring a lot about physical data layout to make your software scalable and efficient.
Ideally, the same language would allow programmers to see things at different abstraction levels, no? Because when you are stuck with bytes and allocators and doing everything else manually, it's tedious and you develop hand arthritis in your 30s. But when you have only abstractions, and the performance is unacceptable because no magic happened, then it's not great either.
While I agree with you to some extent - working with a higher-level language where you _don't_ have that kind of visibility is its own kind of liberating - Odin is very specifically not that kind of language, and is designed for people who want or need to operate in a machine-sympathetic fashion. I don't think that's necessary all the time, but some form of it does need to exist.
> Instead, data needs to be viewed more abstractly.
There is no instead here. This is not a choice that has to be made once and for all and there is no correct way to view things.
Languages exist if you want to have a very abstract view of the data you are manipulating and they come with toolchains and compilers that will turn that into low level representation.
That doesn’t preclude the interest of languages which expose this low level architecture.
and we should probably look at alcoholic liver disease as an expression of capitalism.
data is bytes. period. your suggestion rests on someone else seeing how it is the case and dealing with it to provide you with ways of abstraction you want. but there is an infinity of possible abstractions – while virtual memory model is a single solid ground anyone can rest upon. you’re modeling your problems on a machine – have some respect for it.
in other words – most abstractions are a front-end to operations on bytes. it’s ok to have various designs, but making lower layers inaccessible is just sad.
i say it's the opposite – it's 2025, we should stop stroking the imaginaries of the 80s and return to the actual. just invest in making it as ergonomic and nimble as possible.
i find it hard to understand why some programmers are so intent on hiding from the space they inhabit.
Odin was made for me, also. It has been 4 years and I’m still discovering little features that give me the control and confidence I wish I’d had writing C++.
I returned to the language after a stint of work in other tech and to my utter amazement, the parametric polymorphism that was added to the language felt “right” and did not ruin the comprehensibility of the core library.
Loads of libc allocate. The trivial ones being malloc/calloc/free/strdup/etc, but many other things within it will also allocate like qsort. And that means you cannot change how those things allocate either.
malloc/calloc/free is the allocator, so it makes no sense to pass it an allocator to it. qsort does not allocate. I think strdup is the only other function that allocates and it is a fairly new convenience function that would not be as convenient if you had to pass an allocator.
You're saying you like Odin because it provided this feature in stdlib but how hard would it be if C provided this? And if C provided this, you'd stay with C? So this is a failure of the C community to not evolve and improve?
I've been messing around with Odin and Raylib for a few weeks. I've been interested in trying Raylib for a long time; it has a huge list of language bindings. I chose Odin for different reasons than I think many would. Perhaps superficial reasons.
I'm a game-play programmer and not really into memory management or complex math. I like things to be quick and easy to implement. My games are small. I have no need for custom allocators or SOA. All I want is a few thousand sprites at ~120fps. I normally just work in the browser with JS. I use Odin like it's a scripting language.
I really like the dumb stuff like... no semicolons at the end of lines, no parentheses around conditionals, the case statement doesn't need breaks, no need to write var or let, the basic iterators are nice. Having a built in vector 2 is really nice. Compiling my tiny programs is about as fast as refreshing a browser page.
I also really like C-style procedural programming rather than object oriented code, but when you work in a language that most people use as OO, or whose standard library is OO, your program will end up with mixed paradigms.
It's only been a few weeks, but I like Odin. It's like a statically typed and compiled scripting language.
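For anyone curious, a small sketch of the niceties mentioned above (illustrative only, not from the comment):

    package main

    import "core:fmt"

    main :: proc() {
        pos := [2]f32{10, 20}           // a built-in fixed vector type
        vel := [2]f32{1, -2}
        pos += vel                      // array arithmetic is element-wise
        fmt.println(pos.x, pos.y)       // .x/.y access on small fixed arrays

        for i in 0..<3 {                // no parentheses, no semicolons
            if i == 1 do continue       // single-statement form with 'do'
            fmt.println("frame", i)
        }

        state := 2
        switch state {                  // cases break by default
        case 1: fmt.println("menu")
        case 2: fmt.println("playing")
        case:   fmt.println("unknown")
        }
    }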
I don't mean to promote it too much, because version 0.5 of the Nature programming language is not ready yet, but Nature (https://github.com/nature-lang/nature) basically meets your expectations, except for the use of var to declare variables. Probably because I also really like simplicity.
Here's an example of how I use the nature and raylib bindings.
I like this aspect about Odin. It doesn't try to fundamentally solve any new problems. Instead it does many things right. So it becomes hard to say "this is why you should use Odin". It's more like, try it for yourself and see if you like it :)
The author is excited that they can do all the things in Odin that they can do in C.
So it strikes me that a new language may be the wrong approach to addressing C's issues. Can they truly not be addressed with C itself?
E.g., here's a list of some commonly mentioned issues:
* standard library is godawful, and composed almost entirely of foot guns. New languages fix this by providing new standard libraries. But that can be done just as well with C.
* lack of help with safety. The solutions people put forward generally involve some combination of static analysis disallowing potentially unsafe operations, runtime checks, and provided implementations of mechanisms around potentially unsafe operations (like allocators, and slices). Is there any reason these cannot be done with C? (In fact, I know they all have been done.)
* lack of various modern conveniences. I think there's two aspects of this. One is aesthetics -- people can feel that C code is inelegant or ugly. Since that's purely a matter of personal taste, we have to set that aside. The other is that C can often be pretty verbose. Although the syntax is terse, its low-level nature means that, in practice, you can end up writing a relatively large number of lines of code to do fairly simple things. C alternatives tend to provide syntax conveniences that streamline common & preferred patterns. But it strikes me that an advanced enough autocomplete would provide the same convenience (albeit without the terseness). We happen to have entered the age of advanced autocomplete.
Building a new language, along with the ecosystem to support it, is a lot of fun. But it also seems like a very inefficient way to address C's issues, because you have to recreate so much (including all the things about C that aren't broken), and you have to reach some critical mass of adoption/usage to become relevant and sustainable. And to be frank, it's also a pretty ineffective way to address C's issues because it doesn't actually do anything to help all the existing C code. Very few projects are in a position to be rewritten. Much better would be to have a fine-grained set of solutions that code bases could adopt incrementally, according to need and opportunity.
Of course, I realize all this has been happening with C all along. I'm just pointing out that that seems like the right approach, while these C alternatives, fun and exciting as they are (as far as these things go), are probably just sound and fury that will ultimately fade away. (In fact, it might be worse if some catch on... C and all the C code bases will still be there, we'll just have more fragmentation.)
The most powerful objection to the proposition that C can be fixed is the many, many attempts lying by the side of the road.
There seems to be some sort of force in the programming language landscape that prevents a language that is too similar to another language from being able to succeed. And I don't just mean something like "Python versus Ruby", although IMHO even that was a bit of a fluke due to geography, but the general inability to create a variant of C that everybody uses.
The other problem is you still end up pushed in the direction of a new language anyhow. Let's say you create C-New and it "fixes pointers" so now they're safe. I don't care how you do that. But obviously that involves some sort of writing into C-New new guarantees that pointers take. But if you're conceiving of this as "still basically C", such that you can just call into C code, when you pass your C-New pointer into C-Old, you can no longer make those guarantees. You still basically have to treat C-Old as a remote call, just like Python or Go or Lua, and put it at arm's length.
The extent to which you can "fix C" without creating this constraint is fairly limited. It's a very well defined language at this point with extremely strong opinions.
As for "C alternatives", actually, the era of C alternatives has passed. C++, Java, Objective-C, C#, many takes on the problem, none perhaps nailing the totality of the C problem space but the union of them all pretty much does. The era we have finally, at long last, it's about time we entered is the era of programming languages that aren't even reactions to C anymore, but are just their own thing.
The process of bringing up an ecosystem that isn't C is now well-trod. It's risky, certainly, but it's been done a dozen times over. It's often the only practical way forward.
I'm the creator of the Odin programming language and I originally tried to approach it by fixing C. And my conclusion was that C could not be fixed.
I made my own standard library to replace libc. But addressing the lack of safety is hard to do when you don't have a decent enough type system. C's lack of a proper array type is a good example of this.
Before making Odin, I tried making my own C compiler with some extensions, specifically adding proper arrays (slices) with bounds checking, and adding `defer`. This did help things a lot, but it wasn't enough. C still had fundamentally broken semantics in so many places that just "fixing" the problems of C in C was not enough.
I didn't want to make Odin initially, but it was the conclusion I had after trying to fix something that cannot be fixed.
I feel like Odin is the closest to "normal C", especially in its simplicity, which is often undervalued. If C was easily fixable it probably would've been done already anyway...
> In Odin all variables are automatically zero initialized. Not just integers and floats. But all structs as well. Their memory is filled with zeroes when those variables are created.
> This makes ZII extra powerful! There is little risk of variables accidentally being uninitialized.
The cure is worse than the problem. I don't want to 'safely' propagate my incorrect value throughout the program.
If we're in the business of making new languages, why not compile-time error for reading memory that hasn't been written? Even a runtime crash would be preferable.
Being initialized to zero is at least repeatable, so if you forget to initialize something you'll notice it immediately in testing. The worst part about uninitialized variables is that they frequently are zero and things seem to work until you change something else that previously happened to use the same memory.
> The worst part about uninitialized variables is that they frequently are zero and things seem to work until you change something else that previously happened to use the same memory.
This is not the whole story. You're making it sound like uninitialized variables _have_ a value but you can't be sure which one. This is not the case. Uninitialized variables don't have a value at all! [1] has a good example that shows how the intuition of "has a value but we don't know which" is wrong:
If you assume an uninitialized variable has a value (but you don't know which) this program should run to completion without issue. But this is not the case. From the compiler's point of view, x doesn't have a value at all and so it may choose to unconditionally return false. This is weird but it's the way things are.It's a Rust example but the same can happen in C/C++. In [2], the compiler turned a sanitization routine in Chromium into a no-op because they had accidentally introduced UB.
[1]: https://www.ralfj.de/blog/2019/07/14/uninit.html
[2]: https://issuetracker.google.com/issues/42402087?pli=1
The unsafe part is supposed to tell you that any assumptions you might make might not hold true.
> You're making it sound like uninitialized variables _have_ a value but you can't be sure which one.
Because that's a valid conceptualization you could have for a specific language. Your approach and the other person's approach are both valid but different, and as I said in another comment, they come with different compromises.
If you are thinking like some C programmers, then `int x;` can either have a value which is just not known at compile time, or you can think of it having a specialized value of "undefined". The compiler could work with either definition, it just happens that most compilers nowadays do for C and Rust at least use the definition you speak of, for better or for worse.
> C programmers, then `int x;` can either have a value which is just not known at compile time
I am pretty sure that in C, when a program reads uninitialized variable, it is an "undefined behavior", and it is pretty much allowed to be expected to crash — for example, if the variable turned out to be on an unallocated page of stack memory.
So literally the variable does not have a value at all, as that part of address space is not mapped to physical memory.
Interestingly enough, C++26 introduces "erroneous behavior" and uses it for uninitialized variables, rather than undefined behavior.
It is "undefined behaviour" in C (which is an overloaded term which I will not discuss why I hate it in this comment). But my point was that is how many people conceptualize it, and for many things people do expect it to be one of the possible values, just not knowable ahead of time.
However, I was using that "C programmers" bit to explain the conceptualization aspect, and how it also applies to other languages. Not every language, even systems languages, have the same concepts as C, especially the same construction as "UB".
It is undefined in C for automatic variables whose address was not taken (and in this case a compiler should be able to warn).
As someone who recently wondered what kinds of things might happen, im actually very glad for GPs clarification.
You're assuming that's the style of programming others want to program in. Some people want the "ZII" approach. Your approach is a trade-off with costs which many others would not want to make. So it's not "preferable", it's a different compromise.
That's clearly correct, as e.g. Go uses this style and there are lots of happy Go users.
I want to push back on the idea that it's a "trade-off", though -- what are the actual advantages of the ZII approach?
If it's just more convenient because you don't have to initialize everything manually, you can get that with the strict approach too, as it's easy to opt-in to the ZII style by giving your types default initializers. But importantly, the strict approach will catch cases where there isn't a sensible default and force you to fix them.
Is it runtime efficiency? It seems to me (but maybe not to everyone) that initialization time is unlikely to be significant, and if you make the ZII style opt-in, you can still get efficiency savings when you really need them.
The explicit initialization approach seems strictly better to me.
> It seems to me... that initialization time is unlikely to be significant
The thing is, initialization cost is a lot more than you think it is, especially when it's done on a per-object level rather than a "group" level.
This is kind of the point of trying to make the zero value useful: it's trivially initialized. In languages that are much stricter in their approach, initialization is done at that per-object level, which moves the cost from somewhere between free (VirtualAlloc/mmap already has to produce zeroed memory) and trivially linear (e.g. a memset) to a nested hierarchy of initialization (e.g. a for-loop running a constructor for each value).
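To make the "group level" point concrete, here is a minimal Odin sketch (the Particle struct is made up): one allocation hands back already-zeroed memory, so every element starts in a valid empty state without any per-element constructor running.

    package main

    import "core:fmt"

    Particle :: struct {
        pos, vel: [2]f32,
        life:     f32,
    }

    main :: proc() {
        // One allocation; the memory comes back zeroed, so all 10_000
        // particles are already in their "empty but valid" state.
        particles := make([]Particle, 10_000)
        defer delete(particles)

        fmt.println(particles[0]) // all fields print as zeroes
    }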
It's non-obvious why the "strict approach" would be worse, but it's more about how people actually program rather than a hypothetical approach to things.
So of course each style is about trade-offs. There are no solutions, only trade-offs. And different styles will have different trade-offs, even if they are not immediately obvious and require a bit of experience.
A good little video on this is from Casey Muratori, "Smart-Pointers, RAII, ZII? Becoming an N+2 programmer": https://www.youtube.com/watch?v=xt1KNDmOYqA
That's what I was trying to get at by talking about making ZII opt-in. If you're using a big chunk of memory — say a matrix, or an array of matrices — it's a win if you can zero-initialize it cheaply or for free, sure. In JS, for example, you'd allocate an ArrayBuffer and use it immediately (via a TypedArray or DataView).
But still, in other parts of the program, ZII is bad! That local or global variable pointing at an ArrayBuffer should definitely not be zero-initialized. Who wants a null pointer, or a pointer to random memory of unknown size? Much better to ensure that a) you actually construct a new TypedArray, and b) you don't use it until it's constructed.
I guess if you see the vast majority of your action happening inside big arrays of structs, pervasive ZII might make sense. But I see most of the action happening in local and temporary variables, where ZII is bad and explicit initialization is what you want.
Moving from JavaScript to TypeScript, to some extent you can get the best of both worlds. TS will do a very good (though not perfect) job of forcing you to initialize everything correctly, but you can still use TypedArray and DataView and take advantage of zero-initialization when you want to.
ZII for local variables reminds me of the SmallTalk / Obj-C thing where you could send messages to nil and they're silently ignored. I don't really know SmallTalk, but in Obj-C, to the best of my knowledge most serious programmers think messages to nil are a bad idea and a source of bugs.
Maybe this is another aspect where the games programming mindset is skewing things (besides the emphasis on low-level performance). In games, avoiding crashes is super important and you're probably willing to compromise on correctness in some cases. In most non-games applications, correctness is super important, and crashing early if something goes wrong is actually preferable.
> the SmallTalk / Obj-C thing where you could send messages to nil and they're silently ignored.
Messages sent to the Smalltalk UndefinedObject instance are not silently ignored — #doesNotUnderstand.
Sometimes that run time message lookup has been used to extend behavior —
1986 "Encapsulators: A New Software Paradigm in Smalltalk-80"
https://dl.acm.org/doi/pdf/10.1145/28697.28731
Making it opt-in means making the hierarchical approach the default. Whatever you make "opt-in", you are by default discouraging its use. And what you are suggesting as the default is not what I wanted from Odin (I am the creator, by the way).
I normally say "try to make the zero value useful" and not "ZII" (which was a mostly jokey term Casey Muratori came up with to reflect against RAII) because then it is clear that there are cases when it is not possible to do ZII. ZII is NOT a _maxim_ but what you should default to and then do something else where necessary. This is my point, and I can probably tell you even more examples of where "ZII is bad" than you could think of, but this is the problem with describing the idea to people: they take it as a maxim, not a default.
And regarding pointers, I'm in the camp that nil-pointers are the most trivial type of invalid pointer to catch, empirically speaking. Yes they cause problems, but because of how modern systems are structured with virtual memory, they are empirically trivial to catch and deal with. Yes you could design the type system of a language to make nil-pointers not be a thing unless you explicitly opt into them, but then that has another trade-off which may or may not be a good thing depending on the application.
The Objective-C thing is just a poorly implemented system for handling `nil`. It should have been more consistent but wasn't. That's it.
I'd argue "correctness" is important in games too, but the conception of "correctness" is very different there. It's not about provability but testability, which are both valid forms of "correctness" but very different.
And in some non-game applications, crashing early is also a very bad thing, and for some games, crashing early is desired over corrupted saves or other things. It's all about which trade-offs you can afford, and I would not try to generalize too much.
Yeah, that's fair, clearly this sort of thing is why we have multiple languages in the first place!
I don't think I'll ever abandon the idea that making code "correct by construction" is a good goal. It might not always be achievable or practical but I strongly feel it's always something to aim for. For me, silent zero initialization compromises that because there isn't always a safe default.
I think nil pointers are like NaNs in arithmetic. When a nil or a NaN crops up, it's too late to do anything useful with it, you generally have to work backwards in the debugger to figure out where the real problem started. I'd much rather be notified of problems immediately, and if that's at compile time, even better.
In the real world, sure, I don't code review every single arithmetic operation to see if it might overflow or divide by zero. But when the compiler can spot potential problem areas and force me to check them, that's really useful.
If you don't want to make it "opt-in" would it at least make sense to make it "opt-out"? Does Odin have a way for specific types to omit a zero value?
That would require having constructors, which is not something Odin will ever have nor should it. However you can just initialize with a constant or variable or just use a procedure to initialize with. Odin is a C alternative after all, so it's a fully imperative procedural language.
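For example, a hypothetical initializer is just a plain procedure returning the value (the Texture type and the values here are made up):

    package main

    import "core:fmt"

    Texture :: struct {
        id:            u32,
        width, height: int,
    }

    // Ordinary procedure used as the initializer; no constructor machinery.
    make_texture :: proc(width, height: int) -> Texture {
        return Texture{
            id     = 1, // would come from a graphics API in real code
            width  = width,
            height = height,
        }
    }

    main :: proc() {
        t := make_texture(256, 256)
        fmt.println(t)
    }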
Not sure if anyone has mentioned it, but you can also disable ZII for any variable by giving it the value "---" in its declaration, which is useful when writing high-performance code. Here is an example:
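(A minimal sketch; the variable names below are made up.)

    package main

    main :: proc() {
        counts: [1024]int        // zero-initialized as usual
        scratch: [1024]int = --- // explicitly left uninitialized; no zeroing is emitted

        _ = counts
        _ = scratch
    }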
Yes, this is mentioned explicitly in the article.
I always find this opinion intriguing, where it's apparently fine that globals are initialized to zero, but you are INSANE to suggest it's the default for locals. What kind of programs are y'all writing?
Clearly the lack of zeroing in C was a trade-off at the time. Just like UB on signed overflow. And now people seem to consider them "obvious correct designs".
I'd prefer proper analysis for globals too, but that is substantially harder.
"Improperly using a variable before it is initialized" is a very common class of bug, and an easy programming error to make. Zero-initializing everything does not solve it! It just converts the bugs from ones where random stack frame trash is used in lieu of the proper value into ones where zeroes are used. If you wanted a zero value, it's fine, but quite possibly you wanted something else instead and missed it because of complex initialization logic or something.
What I want is a compiler that slaps me when I forget to initialize a proper value, not one that quietly picks a magic value it thinks I might have meant.
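To make that class of bug concrete, a small hypothetical sketch (the timeout value and the names are invented):

    package main

    import "core:fmt"

    main :: proc() {
        // Intended: load the configured timeout. The load was forgotten.
        timeout_ms: int // silently 0 instead of, say, 5000

        // The failure is now "every request times out immediately" rather
        // than "garbage value", but it is still the same missed initialization.
        fmt.println("timeout:", timeout_ms)
    }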
I agree that zero-initializing doesn't really help avoid incorrect values (which is what the author focuses on) but at least you don't have UB. This is the main selling point IMO.
Then why not just require explicit initialization? If "performance" is your answer, the compiler could instead detect explicit zero initialization and skip the writes whenever the allocator already guarantees zeroed memory. A much safer alternative. Replacing one implicit behavior with another is hardly a huge success...
I'd guess it was because 0 init is desired often enough that this is a convenient implicit default?
"Often enough" is what's introducing the risk for bugs here.
I "often enough" drive around with my car without crashing. But for the rare case that I might, I'm wearing a seatbelt and have an airbag. Instead of saying "well I better be careful" or running a static analyzer on my trip planning that guarantees I won't crash. We do that when lives are on the line, why not apply those lessons to other areas where people have been making the same mistakes for decades?
For the same reason you wear a seatbelt and not a 7-point crash harness.
Please, can we stop assuming every single software has actual lives on the line? These comment threads always devolve into implicit advertisement of Rust/Ada and other super strict languages because “what about safety?!”
It is impossible to post about a language on this forum without the pearl clutching starting if the compiler is a bit lenient instead of triple-checking every single expression and making you sign a release of liability.
Sometimes, ergonomics and ease of programming win over extreme safety. You'll find that billion-dollar businesses have been built on zero-as-default (like in Go), and often the people reaching for it or Go are just writing small personal apps, not a cruise missile navigation system.
It gets really tiring.
/rant
I'm actually with you on the ease of use. I don't see this as the opposite to safety. To me, making it harder for me to make mistakes means it's easier to use. That is, easier to use right and harder to use wrong. I'm not a Rust or Ada advocate. I'm just saying that making it harder to make the same mistakes people have been doing for decades would be a good thing. That would contribute to ease-of-use in my book since there are fewer things you need to think about that could possibly go wrong.
Or are you saying that a certain level of bugs is fine and we are at that level? Are you fine with the quality of all the software out there? Then yes, this discussion is probably not for you.
> Are you fine with the quality of all the software out there?
This is the kind of generalisation I'm ranting against.
It is not constructive to extrapolate a discussion about a single, perhaps niche, programming language into advice applicable to "all the software out there". But you probably knew that already.
TL;DR: I disagree, and I will say upfront that my views on software are extreme. I think quality is a glaring issue in most software.
There is a lot of subpar software out there, and the rest is largely decent-but-not-great. If it's security I want, that's commonly lacking, and hugely so. If it's performance I want, that's commonly lacking[0]. If it's documentation...you get the idea. We should have rigor by default, and if that means software is produced slower, I frankly don't see the problem with that. (Although commercial viability has gone out the window unless big players comply.) Exceptions will be carved out depending on the scope of the program. It's much harder to add in rigor post hoc. The end goal is quality.
The other issue is that a program's scope is indeed broader than controlling lives, and yet there are many bad outcomes. If I just get my passwords stolen, or my computer crashes daily, or my messaging app takes a bit too long to load every time, what is the harm? Of course those are wildly different outcomes, but I think at least the first and second are obviously quality issues, and I think the third is also important. Why is the third important? When software is such an integral part of users' lives, minor issues cause faults that prompt workarounds or inefficiencies. [1] discusses a similar line of thought. I know I personally avoid doing some actions commonly (e.g. check LinkedIn) because they involve pain points around waiting for my browser to load and whatnot, nothing major but something that's always present. Software ("automation") in theory makes all things that the user implicitly desires to be non-pain points for the user.
An interesting blend of issues is system dialog password prompts, which users will generally try to either avoid or address on autopilot, which tends to reduce security. Or take system update restarts, which induce not updating frequently. Or take what is perhaps my favorite invective: blaming Electron apps. One Electron app can be inconvenient. Multiple Electron apps can be absurd. I feel like I shouldn't have to justify calling out Electron on HN, but I do, but I won't here.
And take unintended uses: if I need to set down an injured person across two chairs, I sure hope a chair doesn't break or something. Sure, that's not the intended use case of a chair, but I don't think it's unreasonable that a well-made chair would not fail to live up to my expectations. I wouldn't put an elephant on the chair either way, because intuitively I don't expect that much. Even then, users may expect more out of software than is reasonable, but that should be remedied and not overlooked.
Do not mistake having users for having a quality product.
[0] https://news.ycombinator.com/item?id=43971464 [1] https://blog.regehr.org/archives/861
You seem to use eager evaluation of usability whereas in practice most people only need lazy evaluation. We use risk assessment of going from point A to point B, two concrete points. You seem to use risk assessment equivalent to JavaScript's array.flat(Infinity).
If you zero initialize a pointer and then dereference it as if it were properly initialized, isn't that UB?
It is undefined behavior in C. In many languages it is defined behavior; for instance in Go, dereferencing a nil pointer explicitly panics, which is a well-defined operation. It may, of course, crash your program, and the whole topic of 'should pointers even be able to be nil?' is a valid separate other question, but given that they exist, the operation of dereferencing a nil pointer is not undefined behavior in Go.
To many people reading this this may be a "duh" but I find it is worth pointing out, because there are still some programmers who believe that C is somehow the "default" or "real" language of a computer and that everything about C is true of other languages, but that is not the case. Undefined behavior in C is undefined in C, specifically. Try to avoid taking ideas about UB out of C, and to the extent that they are related (which slowly but surely decreases over time), C++. It's the language, not the hardware that is defining UB.
> why not compile-time error for reading memory that hasn't been written
https://en.wikipedia.org/wiki/Rice%27s_theorem?useskin=vecto...
A compiler doesn't have to accept all possible programs. If it can't prove that a variable is initialized before being read, then it can simply require that you explicitly initialize it.
Sure, but then not accepting many programs would be the answer to parent's question "why not"
Not accepting many C programs, maybe. It's pretty easy to create a language where declaration is initialization of some sort, as evidenced by the large number of languages in common use where, one way or another, that's already the case.
This isn't some whacko far out idea. Most languages already today don't have any way (modulo "unsafe", or some super-carefully declared and defined method that is not the normal operation of the language) of reading uninitialized memory. It's only the residual C-likes bringing up the rear where this is even a question.
(I wouldn't count Odin's "explicitly label this as not getting initialized"; I'm talking about defaults being sharp and pointy. If a programmer explicitly asks for the sharp and pointy, then it's a valid choice to give it to them.)
You're talking about Mojo there. Even memory allocated with UnsafePointer must be explicitly initialised before it can be written to or read from.
> why not compile-time error for reading memory that hasn't been written?
so... like Rust?
Curiously, C# does both. It uses compile-time checks to stop you from accessing an uninitialized local and from exiting a struct constructor without initializing all fields; and yet, the CLR (the VM C# compiles to) zero-initializes everything anyway.
This is a pain. I recently switched from Java (and its whole Optional/null mess) to C#. I was initially impressed by its nullable checks, but then I discovered 'default'. Now I gotta check that Guids aren't 0000...? It makes me miss the Java situation.
You don't need the "default" keyword to run into that. A simple "new Guid()" gives you all-zeroes (try it!). Nice and foot-gunny.
Only if you go out of your way to author a method with (Guid someGuid = default) argument. I've never seen it happen with Guids, if someone gives you default(Guid) - they did it on purpose, it's no different to explicitly setting `0` to an integer-typed UserID property.
If supplying Guid is optional, you just make it Guid?.
To be fair, I don't think offering default(T) by default (ha) is the best choice for structs. In F#, you have to explicitly do `Unchecked.defaultOf` and otherwise it will just not let you have your way - it is watertight. I much prefer this approach even if it can be less convenient at times.
That’s likely because p/invoke is quite common.
No, that's just the memory model of CLI and the choice made by C#. By default, it emits the localsinit flag for methods, which indicates that all local variables must be zero-initialized first. On top of that, you can't really access uninitialized memory in C# and F# anyway unless you use unsafe. It's a memory safety choice indeed but it has nothing to do with P/Invoke.
The main motivation to use unsafe is p/invoke.
Without unsafe, zero init is not needed.
> The main motivation to use unsafe is p/invoke.
This is opposite to the way unsafe (either syntax or known unsafe APIs) is used today.
Explicit use of unsafe is used for things like avoiding allocation, sure.
All use of p/invoke is also unsafe though, even if the keyword isn’t used. And it’s much more common to wrap a C library than to write a buffer pool.
Here's Casey Muratori on his habit of moving to ZII: https://www.youtube.com/watch?v=xt1KNDmOYqA
Much better outcomes and failure modes than RAII. IIRC, Odin mentions game programming as one of its use cases.
These are not very good arguments and Casey Muratori is hugely biased against RAII and C++ techniques for some reason, probably familiarity with C.
He thinks that every RAII variable is a failure point and that you only have to think about ownership if you are using RAII, so it incurs mental overhead.
The reality is that you have to understand the lifetime and ownership of your allocations no matter what. If the language does nothing for you the allocation will still have a lifetime and a place where the memory is deallocated.
He also talks about combining multiple allocations into a single allocation that then gets split into multiple pointers, but that could easily be done in C++.
When I first heard about Odin, I thought, why another C replacement?! What's wrong with Rust or Zig? Then, after looking into it, I had a very similar experience to the author. Someone made a language just for me! It's for people who prefer C over C++ (or write C with a C++ compiler). It has the things that a C programmer has to implement themselves, like tagged unions, slices, dynamic arrays, maps, and custom allocators, while providing quality-of-life features like distinct typing, multiple return values, and generics. It just hits that sweet spot. Now, I'm spoiled.
May I ask what specifically you dislike about Rust (and Zig)? All the features you mentioned are also present in these languages. Do you care about a safety vs. simplicity of the language, or something else entirely?
It's indeed some kind of sweet spot. It has those things from C I liked. And it made my favorite workflows from C into "first class citizens". Not everyone likes those workflows, but for people like me it's pretty ideal.
Yep. It’s my favorite C-replacement. It compiles fast. It has all of the pieces and abstractions I care about and none of the cruft I don’t.
By far one of the best languages I have ever used professionally and as a hobbyist, which is why I donate every month to keep the project alive.
I am dropping the link here: those who can should donate, and even if you don't use it, you should consider supporting this and other similar endeavors so they can't stop the signal and the project keeps going: https://github.com/sponsors/odin-lang
You can do lots of the same things in C too, as the author mentions, without too much pain. See for example [1] and [2] on arena allocators (which can be used exactly as the temporary allocator mentioned in the post) and on accepting that the C standard library is fundamentally broken.
From what I can tell, the only significant difference between C and Odin mentioned in the post is that Odin zero-initializes everything whereas C doesn't. This is a fundamental limitation of C but you can alleviate the pain a bit by writing better primitives for yourself. I.e., you write your own allocators and other fundamental APIs and make them zero-initialize everything.
So one of the big issues with C is really just that the standard library is terrible (or, rather, terribly dated) and that there is no drop-in replacement (like in Odin or Rust where the standard library seems well-designed). I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
[1]: https://www.rfleury.com/p/untangling-lifetimes-the-arena-all...
[2]: https://nullprogram.com/blog/2023/10/08/
The author literally says that they used to do that in C. And I've done a lot of those things in C too; it just doesn't mean that C has good defaults or good ergonomics for many of the tasks other languages have been designed to be good at.
I would not agree that the ergonomics are so much better in Odin that switching to another language is worth giving up the advantages of a much larger ecosystem. For a hobby project this may not matter at all, of course.
Odin has very good FFI with its `foreign import` system, so you can still use libraries written in C, Objective-C, or any other language. And Odin already supports tools like asan, tsan, etc. too. So what, in practice, are the things you are giving up by using Odin instead of C?
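As a rough sketch of what the `foreign import` side looks like (this assumes a Unix-like system where libc can be linked as "system:c"; binding atoi is just for illustration):

    package main

    import "core:c"
    import "core:fmt"

    foreign import libc "system:c"

    @(default_calling_convention="c")
    foreign libc {
        atoi :: proc(s: cstring) -> c.int ---
    }

    main :: proc() {
        fmt.println(atoi("42")) // 42
    }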
I am not a C programmer, but I have been wondering this for a long time: People have been complaining about the standard library for literal decades now. Seemingly, most people/companies write their own abstractions on top of it to ease the pain and limit exposure to the horrors lurking below.
Why has nobody come along and created an alternative standard library yet? I know this would break lots of things, but it’s not like you couldn’t transition a big ecosystem over a few decades. In the same time, entire new languages have appeared, so why is it that the C world seems to stay in a world of pain willingly?
Again, mind you, I’m watching from the outside, really just curious.
> Why has nobody come along and created an alternative standard library yet?
Probably, IMO, because not enough people would agree on any particular secondary standard such that one would gain enough attention and traction[1] to be remotely considered standard. Everyone who already has their own alternatives (or just wrappers around the current stdlib) will most likely keep using them unless by happenstance the new secondary standard agrees (by definition, a standard needs to be at least somewhat opinionated) closely with their local work.
Also, maintaining a standard, and a public implementation of it, could be a faffy and thankless task. I certainly wouldn't volunteer for that!
[Though I am also an outsider on the matter, so my thoughts/opinions don't have any particular significance, and an insider might come along and tell us that I'm barking up the wrong tree]
--------
[1] This sort of thing can happen, but is rare. jquery became an unofficial standard for DOM manipulation and related matters for quite a long time, to give one example - but the gulf between the standard standard (and its bad common implementations) at the time and what libraries like jquery offered was much larger than the benefits a secondary C stdlib standard might give.
> Why has nobody come along and created an alternative standard library yet?
Everybody has created their own standard library. Mine has been honed over a decade, why would I use somebody else's? And since it is designed for my use cases and taste, why would anyone use mine?
Because to be _standard_, it would have to come with the compiler toolchain. And if it's scattered around on the internet, people will not use it.
I tried to create my own alternative about a decade ago which eventually influenced my other endeavours.
But another big reason is that people use C and its stdlib because that's what it is. Even if it is bad, it's the "standard" and trivially available. Most code relies on it, even code that has its own standard library alternative.
> Why has nobody come along and created an alternative standard library yet?
Because people are so terribly opinionated that the only common denominator is that the existing thing is bad. For every detail that somebody will argue a modern version should have, there will be somebody else arguing the exact opposite. Both will be highly opinionated and for each of them there is probably some scenario in which they are right.
So, the inability of the community to agree on what "good" even means, plus the extreme heterogeneity of the use cases for C, is probably the answer to your question.
> I think if someone came along and wrote a new C library that incorporates these design trends for low-level languages, a lot of people would be pretty happy.
I suppose glib comes the closest to this? At least the closest that actually sees fairly common usage.
I never used it myself though, as most of my C has been fairly small programs and I never wanted to bother people with the extra dependency.
As long as programmers view a program as a mechanism that manipulates bytes in flat memory, we will be stuck in a world where this kind of topic seems like a success. In that world, an object puts some structure above those memory bytes and obviously an allocator sounds like a great feature. But you'll always have those bytes in the back of your mind and will never be able to abstract things without the bytes in memory leaking through your abstractions. The author even gives an example for a pretty simple scenario in which this is painful, and that's SOA. As long as your data abstraction is fundamentally still a glorified blob of raw bytes in memory, you'll be stuck there.
Instead, data needs to be viewed more abstractly. Yes, it will eventually manifest in memory as bytes in some memory cell, but how that is laid out and moved around is not the concern of you, the programmer, as a user of data types. Looking at some object attributes foo.a or foo.b is just that - the abstract access of some data. Whether a and b are adjacent in memory should be immaterial, as should whether they are even on the same machine, or backed by data cells in some physical memory bank. Yes, in some very specific (!) cases, optimizing for speed makes it necessary to care about locality, but for those cases the language or library needs to provide mechanisms to specify those requirements and will then lay things out accordingly. But it's not helpful if we all keep writing in some kind of glorified assembly language. It's 2025 and "data type" needs to mean something more abstract than "these bytes in this order laid out in memory like this", unless we are writing hand-optimized assembly code, which most of us never do.
Well, the DOD people keep finding that caring about the cache is more helpful regarding performance than the casual programmer might think. Even compiler people are thinking about ditching the classical AST for something DOD-based. I admin HPC systems as a day job, and I rarely see programmers who are aware of modern CPU design and how to structure their data such that it actually performs. I get that you'd like to add more abstractions to make programming easier, but I worry that this only adds to the (already rampant) inefficiency of most programs. The architecture is NOT irrelevant. And with every abstraction you put in, you increase the distance the programmer has from knowing how the architecture works. Maybe that's fine for Python and other high-level stuff, but it is not a good idea IMO when dealing with programs with longer runtimes...
> caring about the cache is more helpful regarding performance than the casual programmer might think.
Cache is easily the most important consideration if you intend to go fast. The instructions are meaningless if they or their dependencies cannot physically reach the CPU in time.
The latency difference between L1/L2 and other layers of memory is quite abrupt. Keeping workloads in cache is often as simple as managing your own threads and tightly controlling when they yield to the operating system. Most languages provide some ability to align with this, even the high level ones.
IMO, DOD shows that you don’t have to sacrifice developer ergonomics for performance.
ECS is vastly superior as an abstraction to pretty much everything we had before in games. Tightly coupled inheritance chains of the 90s/2000s were minefields of bugs.
Of course perhaps not every type of app will have the same kind of goldilocks architecture, but I also doubt anyone will stumble into something like that unless they’re prioritizing it, like game programmers did.
I won't get into it too much but virtually no one needs ECS, and if you have to ask how to do it, it's not for you. There are much better ways to organize a game for most people than the highly generic relational-database-like structure that is ECS. ECS does make sense in certain contexts but most people do not need it.
But I agree that DOD in practice is not a compromise between performance and ergonomics, and Odin kind of shows how that is possible.
That's great! Let the compiler figure out the optimal data layout then! Of course the architecture is relevant. But does everybody need to consider L2 and L3 sizes all the time? Optimizing this is for machines, with very rare exceptions. Expecting every programmer to do optimal data placement by hand is similar to expecting every programmer to call malloc and free in the right order and the correct number of times. And we know how reliable that turned out.
The compiler cannot know the _purpose_ of your program, and thus cannot "figure out the optimal data layout". It's metaphysically not possible, let alone technically.
Not everybody needs to worry about L2 or L3 most of the time, but if you are using a systems-level programming language where it might be of a concern to you at some point, it's extremely useful to be able to have that control.
> expecting every programmer to call malloc and free in the right order
The point of custom allocators is to not need to do the `malloc`/`free` style of memory allocation, and thus reduce the problems which that causes. And even if you do still need that style, Odin and many other languages offer features such as `defer` or even the memory tracking allocator to help you find the problems. Just like what was said in the article.
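A minimal sketch of that workflow using the tracking-allocator pattern from core:mem (the deliberately leaked slice is just for illustration):

    package main

    import "core:fmt"
    import "core:mem"

    main :: proc() {
        track: mem.Tracking_Allocator
        mem.tracking_allocator_init(&track, context.allocator)
        context.allocator = mem.tracking_allocator(&track)
        defer {
            // Report anything that was never freed, then tear down the tracker.
            for _, entry in track.allocation_map {
                fmt.printf("%v leaked %v bytes\n", entry.location, entry.size)
            }
            mem.tracking_allocator_destroy(&track)
        }

        data := make([]int, 16) // never freed on purpose: reported at exit
        _ = data
    }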
I am reluctant to believe compiler optimisations can do everything. Kind of reminds me of the time when people thought auto parallelisation would be a plausible thing. It never really happened, at least not in a predictably efficient way.
> That's great! Let the compiler figure out the optimal data layout then!
GHC, which is without a doubt the smartest compiler you can get your grubby mitts on, is still an extremely stupid git that can't be trusted to do basic optimizations. Which is exactly why it exposes so many special intrinsic functions. The "sufficiently smart compiler" myth was thoroughly discounted over 20 years ago.
> As long as programmers view a program as a mechanism that manipulates bytes in flat memory...
> Yes, it will eventually manifest in memory as bytes in some memory cell...
So people view a program how the computer actually deals with it? And how they need to optimize for since they are writing programs for that machine?
So what is an example of you abstraction that you are talking about? Is there a language that already exists that is closer to what you want? Otherwise you are talking vaguely and abstractly and it doesn't really help anyone understand your point of view.
Real world example. You go sit in your ICE car. You press the gas pedal and the car starts moving. And that's your mental model. Depressing pedal = car moves. You do not think "depress pedal" = "more gasoline to the engine" = "stronger combustion" = "higher rpm" = "higher speed". But that's the level those C and C-like language discussions are always on. The consequence of you using this abstraction in your car is that switching to a hybrid or lately an EV is seamless for most people. Depress pedal, vehicle moves faster. Whether there is a battery involved or some hydrogen magic or an ICE is insubstantial. Most of the time. Exceptions are race track drivers. But even those drop off their kids at school, during which they don't really care what's under the hood as long as "depress pedal" = "vehicle moves faster".
This may be true, but it's also false. Many regular drivers have an understanding of how the machine they're driving works. Mechanical sympathy is one of the most important things I've ever learnt. It applies to software as well. Knowing how the data structures are laid out in memory, knowing how the cache works, knowing how the compiler messes with the loops and the variables. These aren't necessarily vital information, and good compilers mean that you can comfortably ignore much of these things, but this knowledge definitely makes you a better developer. Same as knowing how the fuel injection system or the aspiration of your ICE will make you a better driver.
I'm totally with you that it's useful knowledge. It's one of the main differences between a YouTube/bootcamp-trained programmer and a university-CS-educated software engineer, though either "side" has outliers too.
But there is a fine line between having general understanding of the details of what's going on inside your system and using that knowledge to do very much premature optimizations and getting stuck in a corner that is hard to get out of. Large parts of our industry are in such a corner.
It's fun to nerd out about memory allocators, but that's not contributing to overall improvements of software engineering as a craft which is still too much ad hoc hacking and hoping for the best.
The perfect analogy, because sometimes people want to drive a manual car, and sometimes people aren't American and it's the default.
PRESS PEDAL CAR STOPS
DIDNT SHIFT UP
> You do not think
Actually I do, and I include the inertia and momentum of every piece of the drive-train as well, and the current location of the center of gravity. I'm thinking about all of these things through the next predicted 5 seconds or so at any given time. It comes naturally and subconsciously. To say nothing of how you really aren't going to be driving a standard transmission without that mental model.
Your analogy is appropriate for your standard American whose only experience with driving a car is the 20 minute commute to work in an automatic, and thus more like a hobbyist programmer or a sysadmin than someone whose actual craft is programming. Do you really think truckers don't know in their gut what their fuel burn rate is based on how far they've depressed the pedal?
Uhhhhh that's kind of how I think about the gas pedal though. There's some lag. The engine might stall a bit if you try to accelerate uphill in a wrong way. There's ideal RPM range. Etc.
And you were perhaps asking about programming languages. Python does not model objects as bytes in physical memory. Functional languages normally don't. That all has consequences, some of which the "close to the metal" folks don't like. But throwing the "but performance" argument at anyhing that moves us beyond the 80s is really getting old.
Thank you for telling me you have no idea why people want or need to use a systems-level programming language.
And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means your reading comprehension isn't very high.
Your analogy is still extremely oversimplified: what about a manual car, which is all I have ever driven? I don't have just an accelerator and a brake, but also a clutch, and many other things to deal with too. It's nowhere near as simple as you are making out, which kind of makes your analogy useless.
> Thank you for telling me you have no idea why people want or need to use a systems-level programming language.
> And yes, I explicitly asked for a language: "Is there a language that already exists that is closer to what you want?", which means you reading comprehension isn't very high.
Really? Two insults packaged into two paragraphs? Was that really necessary? It's possible to discuss technical disagreements without insulting others.
I'm doing systems-level programming every day, some of it involves C. It provides me with the perspective from which I'm expressing my views. There are other views, thankfully, and a discussion allows to highlight the differences and perhaps provide everybody with a learning opportunity. That's what I'm here for.
Obviously I saw that you asked for a language and I replied to that. I separated the concrete answer to avoid getting things mixed up with the more general point.
In the kinds of applications that require a systems language you need to know the object layouts, it isn’t avoidable. Algorithm selection over those objects is dependent on the physical object layout and hardware architecture based on the use case. The compiler doesn’t do any of this and largely can’t because it doesn’t understand what you are trying to do. It has nothing to do with “hand-optimized assembly code”.
You are making a classic “sufficiently smart compiler” argument. These types of problems can’t be automagically solved without strong general AI inside the compiler. See also: SIMD, auto-parallelization, etc. We don’t have strong general AI, never mind inside the compiler.
Until we have such a compiler, you will be dependent on people caring a lot about physical data layout to make your software scalable and efficient.
It's always current_year, and I like bytes, thanks.
Ideally, the same language would allow programmers to see things at different abstraction levels, no? Because when you are stuck with bytes and allocators and doing everything else manually, it's tedious and you develop hand arthritis in your 30s. But when you have only abstractions and the performance is unacceptable because no magic happened, then it's not great either.
While I agree with you to some extent - working with a higher-level language where you _don't_ have that kind of visibility is its own kind of liberating - Odin is very specifically not that kind of language, and is designed for people who want or need to operate in a machine-sympathetic fashion. I don't think that's necessary all the time, but some form of it does need to exist.
> Instead, data needs to be viewed more abstractly.
There is no instead here. This is not a choice that has to be made once and for all and there is no correct way to view things.
Languages exist if you want to have a very abstract view of the data you are manipulating and they come with toolchains and compilers that will turn that into low level representation.
That doesn’t preclude the interest of languages which expose this low level architecture.
Sure. But solving problems at the wrong level of abstraction is always doomed to fail.
That would be true if it was always the wrong level of abstraction.
It's obviously not for the low level parts of the toolchain which are required to make very abstract languages work.
and we should probably look at alcoholic liver disease as an expression of capitalism.
data is bytes. period. your suggestion rests on someone else seeing that this is the case and dealing with it, to provide you with the abstractions you want. but there is an infinity of possible abstractions – while the virtual memory model is a single solid ground anyone can rest upon. you're modeling your problems on a machine – have some respect for it.
in other words – most abstractions are a front-end to operations on bytes. it’s ok to have various designs, but making lower layers inaccessible is just sad.
i say it’s the opoposite – it’s 2025, we should stop stroking the imaginaries of the 80s and return to the actual. just invest in making it as ergonomic and nimble as possible.
i find it hard to understand why some programmers are so intent on hiding from the space they inhabit.
Odin was made for me, also. It has been 4 years and I’m still discovering little features that give me the control and confidence I wish I’d had writing C++.
I returned to the language after a stint of work in other tech and to my utter amazement, the parametric polymorphism that was added to the language felt “right” and did not ruin the comprehensibility of the core library.
Thank you gingerBill!
Odin has been hitting HN semi-regularly. A recent thread: https://news.ycombinator.com/item?id=43939520
Which parts of the C standard library has any need for allocators?
Loads of libc functions allocate. The trivial ones being malloc/calloc/free/strdup/etc, but many other things within it, like qsort, will also allocate. And that means you cannot change how those things allocate either.
malloc/calloc/free is the allocator, so it makes no sense to pass an allocator to it. qsort does not allocate. I think strdup is the only other function that allocates and it is a fairly new convenience function that would not be as convenient if you had to pass an allocator.
You're saying you like Odin because it provided this feature in stdlib but how hard would it be if C provided this? And if C provided this, you'd stay with C? So this is a failure of the C community to not evolve and improve?
I've been messing around with Odin and Raylib for a few weeks. I've been interested in trying Raylib for a long time, it has a huge list language bindings. I chose Odin for different reasons than I think many would. Perhaps superficial reasons.
I'm a game-play programmer and not really into memory management or complex math. I like things to be quick and easy to implement. My games are small. I have no need for custom allocators or SOA. All I want is a few thousand sprites at ~120fps. I normally just work in the browser with JS. I use Odin like it's a scripting language.
I really like the dumb stuff like... no semicolons at the end of lines, no parentheses around conditionals, the case statement doesn't need breaks, no need to write var or let, the basic iterators are nice. Having a built in vector 2 is really nice. Compiling my tiny programs is about as fast as refreshing a browser page.
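To give a flavor of that, a rough sketch (nothing here is from the article, just surface syntax):

    package main

    import "core:fmt"

    main :: proc() {
        pos: [2]f32 = {10, 20} // built-in fixed-size vector
        pos.x += 1             // swizzle-style field access on small arrays

        for i in 0..<3 {       // no parentheses, no var/let, no semicolons
            fmt.println(i, pos)
        }

        n := 2
        switch n {
        case 2: fmt.println("two")   // cases break implicitly
        case 3: fmt.println("three")
        case:   fmt.println("other")
        }
    }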
I also really like C style procedural programing rather than object oriented code, but when you work in a language that most people use as OO, or the standard library is OO, your program will end up with mixed paradigms.
It's only been a few weeks, but I like Odin. It's like a statically typed and compiled scripting language.
I don't mean to promote it too much because version 0.5 of the nature programming language is not ready yet, but nature (https://github.com/nature-lang/nature) basically meets your expectations (probably because I also really like simplicity), except for the use of var to declare variables.
Here's an example of how I use the nature and raylib bindings.
https://github.com/weiwenhao/tetris
I like this aspect about Odin. It doesn't try to fundamentally solve any new problems. Instead it does many things right. So it becomes hard to say "this is why you should use Odin". It's more like, try it for yourself and see if you like it :)
The author is excited that they can do all the things in Odin that they can do in C.
So it strikes me that a new language may be the wrong approach to addressing C's issues. Can they truly not be addressed with C itself?
E.g., here's a list of some commonly mentioned issues:
* standard library is godawful, and composed almost entirely of foot guns. New languages fix this by providing new standard libraries. But that can be done just as well with C.
* lack of help with safety. The solutions people put forward generally involve some combination of static analysis disallowing potentially unsafe operations, runtime checks, and provided implementations of mechanisms around potentially unsafe operations (like allocators, and slices). Is there any reason these cannot be done with C (in fact, I know they all have been done).
* lack of various modern conveniences. I think there's two aspects of this. One is aesthetics -- people can feel that C code is inelegant or ugly. Since that's purely a matter of personal taste, we have to set that aside. The other is that C can often be pretty verbose. Although the syntax is terse, its low-level nature means that, in practice, you can end up writing a relatively large number of lines of code to do fairly simple things. C alternatives tend to provide syntax conveniences that streamline common & preferred patterns. But it strikes me that an advanced enough autocomplete would provide the same convenience (albeit without the terseness). We happen to have entered the age of advanced autocomplete.
Building a new language, along with the ecosystem to support it, is a lot of fun. But it also seems like a very inefficient way to address C's issues because you have to recreate so much (including all the things about C that aren't broken), and you have to reach some critical mass of adoption/usage to become relevant and sustainable. And to be frank, it's also a pretty ineffective way to address C's issues because it doesn't actually do anything to help all the existing C code. Very few projects are in a position to be rewritten. Much better would be to have a fine-grained set of solutions that code bases could adopt incrementally according to need and opportunity
Of course, I realize all this has been happening with C all along. I'm just pointing out that that seems like the right approach, while these C alternatives, fun and exciting as they are (as far as these things go), are probably just sound and fury that will ultimately fade away. (In fact, it might be worse if some catch on... C and all the C code bases will still be there, we'll just have more fragmentation.)
The most powerful objection to the proposition that C can be fixed is the many, many attempts lying by the side of the road.
There seems to be some sort of force in the programming language landscape that prevents a language that is too similar to another language from being able to succeed. And I don't just mean something like "Python versus Ruby", although IMHO even that was a bit of a fluke due to geography, but the general inability to create a variant of C that everybody uses.
The other problem is you still end up pushed in the direction of a new language anyhow. Let's say you create C-New and it "fixes pointers" so now they're safe. I don't care how you do that. But obviously that involves some sort of writing into C-New new guarantees that pointers take. But if you're conceiving of this as "still basically C", such that you can just call into C code, when you pass your C-New pointer into C-Old, you can no longer make those guarantees. You still basically have to treat C-Old as a remote call, just like Python or Go or Lua, and put it at arm's length.
The extent to which you can "fix C" without creating this constraint is fairly limited. It's a very well defined language at this point with extremely strong opinions.
As for "C alternatives", actually, the era of C alternatives has passed. C++, Java, Objective-C, C#, many takes on the problem, none perhaps nailing the totality of the C problem space but the union of them all pretty much does. The era we have finally, at long last, it's about time we entered is the era of programming languages that aren't even reactions to C anymore, but are just their own thing.
The process of bringing up an ecosystem that isn't C is now well-trod. It's risky, certainly, but it's been done a dozen times over. It's often the only practical way forward.
I'm the creator of the Odin programming language and I originally tried to approach it by fixing C. And my conclusion was that C could not be fixed.
I made my own standard library to replace libc. The lack of safety is hard to address when you don't have a decent enough type system. C's lack of a proper array type is a good example of this.
Before making Odin, I tried making my own C compiler with some extensions, specifically adding proper arrays (slices) with bounds checking, and adding `defer`. This did help things a lot, but it wasn't enough. C still had fundamentally broken semantics in so many places that just "fixing" the problems of C in C was not enough.
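For reference, a minimal sketch of what those two features look like once they are native, as they ended up in Odin:

    package main

    import "core:fmt"

    main :: proc() {
        xs := make([]int, 4) // a real slice type: pointer and length travel together
        defer delete(xs)     // cleanup tied to scope exit

        for i in 0..<len(xs) {
            xs[i] = i * i
        }
        fmt.println(xs) // [0, 1, 4, 9]

        // xs[10] = 1 // out of range: trapped by the bounds check at runtime
    }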
I didn't want to make Odin initially, but it was the conclusion I had after trying to fix something that cannot be fixed.
I am quite happy with what you can do with C's types already today: https://godbolt.org/z/WEe9c154o
This is (almost *) bounds safety with -fsanitize=bounds
*) with some pending compiler improvements it will be perfect
Do you know if there are some professional games that have been made with Odin and released on steam or on game consoles?
I feel like Odin is the closest to "normal C", especially in its simplicity, which is often undervalued. If C was easily fixable it probably would've been done already anyway...
I agree, this is absolutely the right approach. And any suggestions to make C better are very welcome.