Author of the linked CL here: we added this mostly so that we could abuse the memory initialization tracking to test the constant-time-ness of crypto code (similar to what BoringSSL does, proposed by agl around fifteen years ago: https://www.imperialviolet.org/2010/04/01/ctgrind.html), which is an annoyingly hard property to test.
We're hoping that there are also a bunch of other interesting side-effects of enabling the usage of Valgrind for Go, in particular seeing how we can use it to track how the runtime handles memory (hopefully correctly!)
edit: also strong disclaimer that this support is still somewhat experimental. I am not 100% confident we are properly instrumenting everything, and it's likely there are still some errant warnings that don't fully make sense.
This is super cool. Hopefully it will flush out other issues in Go too.

But I wonder why it's not trivial to throw a bunch of different inputs at your cipher functions and measure that the execution times are all within an epsilon tolerance?

I mean, if you want to show your crypto functions are constant time, why not just directly measure the time under lots of inputs (accounting for background garbage collection and OS noise) and see how constant they are directly?

Also, some CPUs have a counter for conditional branches (which the rr debugger leverages), and you could sample it before and after and make sure the number of conditional branches does not change between decrypts -- as that AGL post mentions, identical branching is important for constant time.

Finally, it would also seem trivial to track the first 10 decrypts, take their maximum time plus a few nanoseconds of tolerance, and pad every following decrypt (executing noops) to force constant time when it is varying. And you could add an assert that anything over that established upper bound crashes the program, since it is violating the constant-time property. I suppose the real difficulty is if the OS deschedules your execution and throws off your timing check...
For the best crypto, you don’t want “within an epsilon”, you want “exactly the same number of CPU cycles”, because any difference can be measured with enough work.
Feeding random inputs to a crypto function is not guaranteed to exercise all the weird paths that an attacker providing intentionally malicious input could access. For example, a loop comparing against secret data in 32-bit chunks will take constant time 99.99999999% of the time, but is still a security hole because an attacker learns a lot from the one case where it returns faster. Crypto vulnerabilities often take the form of very specifically crafted inputs that exploit some mathematical property that's very unlikely to arise from random data.
> But I wonder why it's not trivial to throw a bunch of different inputs at your cipher functions and measure that the execution times are all within an epsilon tolerance?
My guess is because the GC introduces pauses and therefore nondeterminism in measuring the time anything takes.
Because "constant time" here means algorithms that are O(1), rather than O(n). This isn't about wall-clock execution time, it's about avoiding the number of operations performed being based on attributes of the input.
I think it's more complicated than that. Certain things like branches, comparisons, and some other operations may not be constant time, especially if you consider interactions with prior code. It's clearly possible, just really difficult to get right.
> Instead of adding the Valgrind headers to the tree, and using cgo to
call the various Valgrind client request macros, we just add an assembly
function which emits the necessary instructions to trigger client
requests.
Love that they have taken this route; this is the way bootstrapped toolchains should be: minimal building blocks, with everything else in the language itself.
I am still curious, had they not gone this route, and avoided the other two routes mentioned, what could they have done to make this process as simple as the rest of Go tends to be, and nearly as performant? I guess this is an ongoing question to be solved at a future date.
It would be another scenario to use as ammunition for "see, you can't implement a language toolchain without using C", usually voiced by folks without a background in compiler design, who don't appreciate that most of the time that is a decision born of convenience and nothing else.
Assembly isn't that hard; those of us who grew up around 8-bit home computers were writing Z80 and 6502 assembly at 10-12 years old, while having fun cracking games and laying the roots of the demoscene.
Oh. There was a comment to your comment saying that kids learning assembly was easy and — I guess? — implying that adults-learning-assembly is hard. I teach adults assembly on an irregular basis. Adults-learning-assembly is hard because adults are rational animals who (correctly) assume I'm an idiot for insisting on assembly. Once I explain the long-term benefits for our exceedingly specific use case, they pick up assembly in a few hours. Assembly isn't hard. Assembly is annoying because it takes absolutely gobsmacking amounts of assembly to do anything.
Don't try to write "good" go and it becomes easy too.
I would rather see clearly defined, readable, documented code that isn't optimal... than good code lacking any of those traits.
And good code often isn't clearly defined, it often isn't reader friendly, and it often lacks documentation (this bit is fixable but ends up needing a lot more of it).
> Assembly isn't that hard; those of us who grew up around 8-bit home computers were writing Z80 and 6502 assembly at 10-12 years old, while having fun cracking games and laying the roots of the demoscene.
I'm glad to see rsc still actively involved. And commenting on commit messages.
The older I get the more I value commit messages. It's too easy to just leave a message like "adding valgrind support", which isn't very useful to future readers doing archaeology.
If that were true it would also apply to C and C++. I have used Valgrind with Python + Boost C++ hybrid programs and it worked fine after spending an hour making a suppressions file.
Yes, but in my past experience, one often wants to edit those files to make them more generic… Valgrind struggles to distinguish which parts of the call stack are essential to a “known leak” versus which are coincidental.
I have never tried asking an LLM to do this, but it seems like the kind of problem with which an LLM might have some success, even if only partial success.
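For reference, making an entry more generic usually means replacing frame-exact stacks with wildcards: `...` matches any number of intermediate frames, and `obj:`/`fun:` patterns accept globs. A sketch of a Memcheck suppression entry (the name and library are made up for illustration):

```
{
   ignore_known_libfoo_init_leak
   Memcheck:Leak
   match-leak-kinds: reachable
   fun:malloc
   ...
   obj:*/libfoo*.so*
}
```

This matches any reachable leak that passes through malloc and ends up anywhere inside libfoo, regardless of the frames in between.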
For me, a waaay outdated suppressions file for Qt + a rough understanding what syscalls and frameworks do is enough. If my app crashes in a network request and a byte sent to the X server (old example, I use Wayland now) is uninitialized, I know to ignore it.
Valgrind(-memcheck) is an extremely important tool in memory-unsafe languages.
Simplification of overwhelming information sounds like a good use case for local LLMs. So I agree with other comments that toolchains are better positioned to include batteries like Valgrind.
I'd be interested to know why Valgrind vs. the Clang AddressSanitizer and MemorySanitizer. Those normally find more types of errors (like use-after-return) and I find them significantly faster than Valgrind.
I'm interested too. I'm using a Go program that calls a C++ library through SWIG, and I was interested in finding out whether that library had a memory leak, or maybe the SWIG wrapper I wrote did. But this kind of problem can't be detected via pprof, so I thought: what if Go supported Valgrind? And then I found these changes.
I'm not sure if this will work though, will it @bracewel?
Programs running under any Valgrind tool will be executed using a CPU emulator, making it quite a bit slower than, say, running the instrumented binaries as required by sanitizers; it's often an order of magnitude slower, but could very well be close to two orders of magnitude slower in some cases. This also means that it just can't be attached to any running program, because, well, it's emulating a whole CPU to track everything it can.
(Valgrind using a CPU emulator allows for a lot of interesting things, such as also emulating cache behavior and whatnot; it may be slow and have other drawbacks -- it has to be updated every time the instruction set adds a new instruction for instance -- but it's able to do things that aren't usually possible otherwise precisely because it has a CPU emulator!)
You're right and I was wrong, but in my experience Valgrind has been way faster than the AddressSanitizer. I don't perceive a difference with Valgrind, while ASan makes the program around 10x slower.
Valgrind is a hidden super-power. In much of the software I write, there's 'make check' which runs the test cases, and 'make check-valgrind' that runs the same test cases under valgrind. The latter is only used on developer machines. It often reveals memory leaks or other subtle memory bugs.
Somewhat yes, but as soon as you enter the world of multi-threading (which Go does a lot), the abstraction doesn’t work anymore: as I understand it (or rather, understood: last time I really spent a lot of time digging into it with C++ code was a while ago) it uses its own scheduler, and as such, a lot of subtle real world issues that would arise due to concurrency / race conditions / etc do not pop up in valgrind. And the performance penalty in general is very heavy.
Having said that, it saved my ass a lot of times, and I’m very grateful that it exists.
I wrote the Lwan web server, which similarly to Go, has its own scheduler and makes use of stackful coroutines. I have spent quite a bit of time Valgrinding it after adding the necessary instrumentation to not make Valgrind freak out due to the stack pointer changing like crazy. Despite a lot of Valgrind's limitations due to the way it works, it has been instrumental to finding some subtle concurrency issues in the scheduler and vicinity.
From a quick glance, it seems that Go is now registering the stacks and emitting stack change commands on every goroutine context switch. This is most likely enough to make Valgrind happy with Go's scheduler.
TSAN is not perfect but Go has had built-in TSAN support for a very long time (go build -race).
Also, strictly speaking all Go programs are multithreaded. The inability to spawn a single-threaded Go program is actually a huge issue in some system tools like container runtimes and requires truly awful hacks to work around. (Before you ask, GOMAXPROCS=1 doesn't work.)
Looks very promising. One of the biggest issues in Go for me is profiling and constant memory leaks/pressure. Not sure if there is an alternative to what people use now.
I'd love to hear more! What kind of profiling issues are you running into? I'm assuming the inuse memory profiles are sometimes not good enough to track down leaks since they only show the allocation stack traces? Have you tried goref [1]? What kind of memory pressure issues are you dealing with?
I for one am still mystified how it's possible that a GC language can't expose the GC roots in a memory profile. I've lost so many hours of my life manually trying to figure out what might be keeping some objects live, information the GC figures out every single time it runs...
Do you think the GC roots alone (goroutine stacks with goroutine id, package globals) would be enough?
I think in many cases you'd want the reference chains.
The GC could certainly keep track of those, but at the expense of making things slower. My colleagues Nick and Daniel prototyped this at some point [1].
Alternatively the tracing of reference chains can be done on heap dumps, but it requires maintaining a partial replica of the GC in user space, see goref [2] for that approach.
So it's not entirely trivial, but rest assured that it's definitely being considered by the Go project. You can see some discussions related to it here [3].
Disclaimer: I contribute to the Go runtime as part of my job at Datadog. I can't speak on behalf of the Go team.
no, haven't heard of goref yet but will give it a shot!
usually I go with pprof, basic stuff, and it helps. I would NOT say memory leaks are the biggest or most common issue I see; however, as time goes on and services become more complicated, what I often see in the metrics is how RAM gets eaten and does not get freed, so the app eats more and more memory over time and only a restart helps.
It's hard to call it a memory leak in the original meaning of the term, but the memory does not get cleaned up because of the choices I made, and I want to understand how to make it better.
This was often a question asked in Java interviews as well.
In Java heap fragmentation is usually considered a separate issue but I understand go has a non-moving garbage collector so you can lose memory due to pathological allocations that overly fragment memory and require constantly allocating new pages. I could be wrong about this since I don't know a lot about go, but heap fragmentation can cause troubles for long running programs with certain types of memory allocation.
Besides that, applications can leak memory by stuffing things into a collection (map or list) and then not cleaning it up despite it becoming "stale". The references are live from the perspective of the garbage collector but are dead from the application perspective. Weak references exist to solve this problem when you expose an API that stores something but won't be able to know when something goes out of scope. I wouldn't consider this to be common, but if you are building a framework or any kind of platform code you might need to reach for this at some point. Some crazy folks also intern every string they encounter "for performance reasons", and that can obviously lead to what less crazy folk would consider a memory leak. Other folk stick a cache around every client and might not tune the cache parameters, leading to unnecessary memory pressure...
Golang has a feature that I love in general but that makes it very easy to keep unintended allocations around. If you have a struct with a simple int field, and you store that somewhere as an *int, the entire struct and anything it points to will be kept alive. This is super useful for short-lived pointers, and super dangerous for long-lived pointers.
Most other widely used GCed languages don’t allow the use of arbitrary interior pointers (though most GCs can actually handle them at the register level).
> If you have a struct with a simple int field, and you store that somewhere as an *int, the entire struct and anything it points to will be kept alive.
While Go allows interior pointers, I don't think what you say is true. runtime.KeepAlive was added exactly to prevent GC from collecting the struct when only a field pointer is stored. Take a look at this blog post, for example: https://victoriametrics.com/blog/go-runtime-finalizer-keepal...
I don’t believe that’s the case based on the example in the blog post. The fd field in that struct was passed into the later function by value (i.e. as an int, not an *int), so there was no interior pointer in play at all.
A GC only deallocates unreferenced memory, if you keep unused references, that's a leak the GC won't catch, as it has no way to know that you won't need it later.
It can happen when your variables have too long a lifespan, or when you have a cache where the entries are not properly evicted.
But how would Valgrind know more than the GC? Of course a program in a GCed language can leak memory, but it’s not clear to me how Valgrind would detect the kinds of memory leaks that you can create in pure Go code (without calling into C or using unsafe functions to allocate memory directly).
Valgrind can tell you what is still in use when the program exits, along with useful information, like where it comes from. You can then assess the situation and see if it is normal or not.
In addition, Valgrind is actually a complete tool suite, not just a memory leak detector. Among these tools is "massif", a memory profiler, so you will have a graph of memory use over time and it can tell you where these allocations come from.
Note that if your language is fully GCed, you can have a debug GC that does the job more efficiently than Valgrind on this task, but I don't know whether that is the case for Go.
A common one I see fairly often is opening a big file, creating a “new slice” on a subset of the file and then using the “new slice” and expecting the old large object to be dropped.
Except, the “new slice” is just a reference into the larger slice, so it's never marked unused.
A go slice is a wrapper around a normal array. When you take sub-slices those also point to the original array. There's a possible optimization to avoid this footgun where they could reallocate a smaller array if only subslices are reachable (similar to how they reallocate if a slice grows beyond the size of the underlying array).
Most subslicing is just the 2-arg kind, so it would not be safe to truncate the allocation even if the subslice is the only living slice, because the capacity still allows indirect access to the trailing elements of the original slice. This optimization would only be truly safe for strings (which have no capacity) or the much less common 3-arg slicing (and Go would need to have a compacting GC).
Of course, the language could also be changed to make 2-arg slice operations trim the capacity by default, which might not be a bad idea anyway.
Yeah, I understand why they can't do it for backwards-compat reasons. Considering they had the forethought to randomize map iteration order, it's a big footgun they've baked into the language.
No, python slice syntax on lists returns a new list. Of course, the slice syntax on your own classes can do just about anything using operator overloading.
- you can create deadlocks
- spawn goroutines while not making sure they have proper exit criteria
- use slices of large objects in memory and pass them around (e.g. read files in a loop and pass only slice from whole buffer)
- and so on
Ideally, they would have learnt from other languages and offered explicit control over what goes on the stack instead of relying on escape analysis alone.
As it is, the only way to currently handle that is with "-gcflags -m=3" or using something like the VSCode Go plugin, via the "ui.codelenses" and "ui.diagnostic.annotations" configurations.
I sometimes dream of a GCed language with a non-escaping pointer type. However to make it really useful (i.e. let you put it inside other non-escaping structs) you need something on the scale of the Rust borrow checker, which means adding a lot of complexity.
Which is why several GC languages (the CS meaning of GC) are rather going down the path of keeping their approach to automatic resource management, plus type-system improvements for low-level, high-performance code when needed.
So you only go down into the complexity of affine types, linear types, effects, formal proofs, dependent types, if really needed, after spending time reasoning with a profiler.
Now, this does not need to be so complex: since languages like Interlisp-D and Cedar at Xerox, many GC languages have offered value types and explicit stack allocation.
That alone is already good enough for most scenarios, provided people actually spend some time thinking about how to design their data structures instead of placing everything into the heap.
yes, that's what I use, just wonder if there are alternatives. I am not sure how valgrind compares to it or the goref tool mentioned above, just asking around, does not hurt.
Don't get me wrong, I love Valgrind, and have been using it extensively in my past life as a C developer. Though the fact that Go needs Valgrind feels like a failure of the language or the ecosystem. I've been doing Rust for ~6 years now, and haven't had to reach for Valgrind even once (I think a team member may have used it once).
I realize that's probably because of cgo, and maybe it's in fact a step forward, but I can't help but feel like it is a step backwards.
I never understand why there's always one of the top comments on every Go post being derogatory and mentioning Rust. It never fails.
It starts to feel like a weird mix of defensiveness and superiority complex.
If I had to guess: because Rust engineers are like every other engineer that reads HN. They read stories, sometimes comment, and share from their own experience. Rust and Go were created roughly at the same time and are more directly comparable than e.g. Go and Ruby, so you don't see Ruby people writing about not having to use Valgrind. This means you'll see more comments from Rust devs than others in Go threads.
At least that's why I wrote that original comment.
> and are more directly comparable than e.g. Go and Ruby
Why do you say that? The original Go announcement made it abundantly clear that it was intended to be like a dynamically-typed language, but faster. It is probably more like Python than Ruby, as it clearly took a lot of ideas from Python, but most seem to consider those languages to be in the same hemisphere anyway.
Beyond maybe producing compiled binaries, which is really just an implementation detail, not a hard feature of the language, what is really comparable with Rust?
Go was sold as a "systems language" for a long time, and a lot of people deciding on what language to use in the 2010s made decisions based on that bit of advertising. Rust filled the same niche at around the same time so it's not really surprising people would mentally connect the two.
FWIW, I suspect the entire container ecosystem would not have gone with Go if it wasn't for Docker picking Go mostly based on the "systems language" label. (If only they knew how painful it would be...)
To be clear, I use both and like them for different reasons. But in my experience I agree that Go and Rust are far closer brethren than Go and Python. A lot of the design nexus in Go was to take C and try to improve it with a handful of pared down ideas from Java -- this leads to a similar path as the C++-oriented design nexus for Rust. Early Rust even had green threads like Go. They are obviously very different languages in other respects but it's not particularly surprising that people would compare them given their history.
Your later comments about lots of Go users coming from Python are not particularly surprising and I don't think they actually help your point -- they wanted to improve performance so they switched to a compiled language that handles multithreading well. I would argue they moved to Go precisely because it isn't like Python. If Go didn't exist they would've picked a different language (probably Rust, C++, or any number of languages in the same niche). Maybe if there was a "Python-but-fast" language that they all switched to you would have a point, but Go is not that language.
I don't think this is correct. Go is a language with structs, arrays, value semantics and pointers, and an `unsafe` package for performing low-level operations on those pointers and deal directly with the layout of memory. And in practice Go and Rust have found use in a lot of the exact same systems programming and network programming domains, as replacement languages for C.
Go is certainly a higher-level language than C, but to say it's at all similar to Python or Ruby is nonsensical.
Just like Ruby. Just like pretty much every language people actually use.
> value semantics and pointers
Value semantics are one of the tricks it uses to satisfy the "but faster" part. It is decidedly its own language. But it only supports value semantics, so this is more like Ruby, which only supports reference semantics. Rust supports both value and reference semantics, so it is clearly a very different beast.
> and an `unsafe` package for performing low-level operations on those pointers
The Ruby standard library also includes a package for this.
> And in practice Go and Rust have found use in a lot of the exact same systems programming and network programming domains, as replacement languages for C.
Maybe in some cases, but the data is abundantly clear that Go was most adopted by those who were previously using Ruby and Python. The Go team did think at one point that it might attract C++ programmers, but they quickly found out that wasn't the case. It was never widely adopted in that arena. Whereas I think it is safe to say that many C++ programmers would consider Rust. Which makes sense as Rust is intended to play in the same ballpark as C++. Go was explicitly intended to be a 'faster Python'.
> but to say it's at all similar to Python or Ruby is nonsensical.
Go is most similar to Go, but on the spectrum is way closer to Python and Ruby than it is to Rust. It bears almost no resemblance to Rust. Hell, even the "Rustaceans'" complaint about Go is that it is nothing like Rust.
> Just like Ruby. Just like pretty much every language people actually use.
Well, no. A struct is a series of objects of different types adjacent in memory. An array is similar but all the objects are the same type. These can (sometimes) be allocated directly on the stack, rather than having to participate in garbage collection.
Languages with reference semantics don't have this, because what you're storing adjacently in memory is always just a series of pointers to the data, rather than the data itself.
This distinction is not only relevant for performance/cache locality. It's relevant any time an application needs to deal directly with the layout of memory, such as when passing data to or from C code.
> But it only supports value semantics, so this is more like Ruby, which only supports reference semantics.
It's not clear to me how it's possible that you wrote this and deemed it logical.
> Rust supports both value and reference semantics, so it is clearly a very different beast.
No, Rust has only value semantics. Rust's references are themselves just values.
There are some languages with both - C# for example has struct types which are copied when passed by value, in addition to regular objects which are implicitly just pointers to heap data. But Rust is not one of those languages.
> The Ruby standard library also includes a package for this.
No, it doesn't. What are you talking about?
> Maybe in some cases
So then you agree. I never claimed it was all cases.
> because what you're storing adjacently in memory is always just a series of pointers to the data
What, exactly, do you think a pointer is? Magic...?
> It's not clear to me how it's possible that you wrote this and deemed it logical.
Why would a stupid premise be followed with a logical response? Use your logic.
> No, it doesn't.
Yes it does. What are you talking about?
> So then you agree.
Yes, I have always agreed that just about any programming language can be used to solve just about any programming task. Still, some languages are more similar to each other than others. It isn't just coincidence that Go is most commonly used where Ruby was historically used and Rust where C++ was historically used.
Let me help you narrow in on the only bit where your comment can find relevance:
> And in practice Go and Rust have found use in a lot of the exact same systems programming and network programming domains, as replacement languages for C.
To which was already followed up with:
> the data is abundantly clear that Go was most adopted by those who were previously using Ruby and Python.
Nice of you to say the exact same thing again, I guess, but it would be more effective if it were correctly positioned in the thread. I know, it can be difficult to track down the correct reply button. Especially when in a rush to post something that just repeats what is already there.
They were developed around the same time, so maybe that accounts for some of the comparisons, but at least in this case I think it matters that they are both relatively new languages with modern tooling.
If they were being compared, shouldn't we also see the inverse? There is a discussion about Rust on the front page right now, with 230 comments at time of writing, and not a single mention of Go.
In fairness, the next Rust discussion a few pages deep does mention Go, but in the context of:
1. Someone claiming that GC languages are inherently slow, where it was pointed out that it doesn't have to be that way, using Go as an example. It wasn't said in comparison with Rust.
2. A couple of instances of the same above behaviour; extolling the virtues of Rust and randomly deriding Go. As strange as the above behaviour is, at least Go was already introduced into the discussion. In these mentioned cases Go came from completely out in left field, having absolutely nothing to do with the original article or having any relation to the thread, only showing up seemingly because someone felt the need to put it down.
Rust is, in my opinion, overrepresented by a vocal minority on HN. A vocal corpus that tends to be passive-aggressive more often than other language communities, in my experience.
Which is sad because I like the language and find it useful. But a part of the community does a disservice with comments like your parent comment. It's often on the cusp of calling people who code in Go "stupid". But I digress.
I've had to use valgrind a bit in Rust. Not much, but the need is there. It really depends on the product you are working with. Rust is an extremely flexible language and can be used in many spaces. From high-level abstract functional code to low-level C-like code on a microcontroller. Some use cases would never imagine using unsafe; some can't live without it. For most, FFI to C is just a fact of life, so it comes up somewhere.
When I used it before I was working on a Rust modules that was loaded in by nginx. This was before there were official or even community bindings to nginx.... so there was a lot of opportunity for mistakes.
For sure, that's why I said it's possibly due to the ecosystem. We link against two C libraries other than libc: OpenSSL and librdkafka. Though they are both abstracted away with solid Rust bindings, so as a consumer of these libs it hasn't been a problem for us (I guess it may be a problem for the people developing them).
I guess maybe the failure is not the addition of it (as it's useful for people writing the bindings), but rather how happy everyone on the thread is (which suggests it's more useful than it should be, due to a failure with the ecosystem).
> but rather how happy everyone on the thread is (which means it's more useful than it should be due to a failure with the ecosystem).
More likely Go users are just happy in general. The Rust users always come across as being incredibly grumpy for some reason, which may be why that happiness — or what would be considered normalcy in any other venue — seems to stand out so much in comparison.
> We link against [...] OpenSSL
Which is kind of funny as Valgrind support was added specifically for the crypto package to help test tricky constant-time cases. You think that the failure of the ecosystem is that Go has built high-quality crypto support in rather than relying on the Heartbleed library instead...? That is certainly an interesting take.
I'm not entirely clear here. Are you saying that the failure of the Go ecosystem was in not building on rusttls (which didn't exist at the time), or are you saying that the failure of the ecosystem was in you not stopping to think before writing your comment?
Even though there is the whole "cgo is not Go" meme, it certainly makes it rather easy to write C and C++ code directly in a Go project, thus I imagine some folks reach for it rather easily.
Which I can relate to: when doing stuff that is Windows-only, I'd rather make use of C++/CLI than getting P/Invoke declarations right.
That's quite nice. There is a small risk that the client request mechanism might change. The headers don't change much -- mostly when a new platform gets added. Go is only targeting amd64 and arm64.
This isn't so much about leaks. The most important thing that this will enable is correct analysis of uninitialised memory. Without annotation memory that gets recycled will not be correctly poisoned. I imagine that it will also be useful for the other tools (except cachegrind and callgrind).
Not even a Go user, and yet this is one of the best things I have read this morning. Valgrind is possibly one of the most powerful tools I have in my belt!!
Author of the linked CL here: we added this mostly so that we could abuse the memory initialization tracking to test the constant-time-ness of crypto code (similar to what BoringSSL does, proposed by agl around fifteen years ago: https://www.imperialviolet.org/2010/04/01/ctgrind.html), which is an annoyingly hard property to test.
We're hoping that there are also a bunch of other interesting side-effects of enabling the usage of Valgrind for Go, in particular seeing how we can use it to track how the runtime handles memory (hopefully correctly!)
edit: also strong disclaimer that this support is still somewhat experimental. I am not 100% confident we are properly instrumenting everything, and it's likely there are still some errant warnings that don't fully make sense.
w.r.t. your edit: Is there anything the community at large can do to aid your efforts?
This is super cool. Hopefully it will flush out other issues in Go too.
But I wonder why it's not trivial to throw a bunch of different inputs at your cipher functions and measure that the execution times are all within an epsilon tolerance?
I mean, you want to show constant time of your crypto functions, so why not just directly measure the time under lots of inputs (accounting for background garbage collection and OS noise) and see how constant they are directly?
Also, some CPUs have a counter for conditional branches (which the rr debugger leverages); you could sample that before and after and make sure the number of conditional branches does not change between decrypts -- as that agl post mentions, branching being the same is important for constant time.
Finally, it would also seem trivial to track the first 10 decrypts, take their maximum time, add a small tolerance of a few extra nanoseconds, and pad every following decrypt with a few nanoseconds (executing no-ops) to force constant time when it varies.
And you could add an assert that anything over that established upper bound crashes the program, since it is violating the constant-time property. I suppose the real difficulty is if the OS deschedules your execution and throws off your timing check...
For the best crypto, you don’t want “within an epsilon”, you want “exactly the same number of CPU cycles”, because any difference can be measured with enough work.
Feeding random inputs to a crypto function is not guaranteed to exercise all the weird paths that an attacker providing intentionally malicious input could access. For example, a loop comparing against secret data in 32 bit chunks will take constant time 99.99999999% of the time, but is still a security hole because an attacker learns a lot from the one case where it returns faster. Crypto vulnerabilities often take the form of very specifically crafted inputs that exploit some mathematical property that's very unlikely from random data.
> But I wonder why its not trivial to throw a bunch of different inputs at your cyphering functions and measure that the execution times are all within an epsilon tolerance?
My guess is because the GC introduces pauses and therefore nondeterminism in measuring the time anything takes.
I believe that Go can have GC disabled so that issue could be moot.
Because "constant time" here means algorithms that are O(1), rather than O(n). This isn't about wall-clock execution time, it's about avoiding the number of operations performed being based on attributes of the input.
I think it's more complicated than that. Certain things like branches, comparisons, and some other operations may not be constant time, especially if you consider interactions with prior code. It's clearly possible, just really difficult to get right.
> Instead of adding the Valgrind headers to the tree, and using cgo to call the various Valgrind client request macros, we just add an assembly function which emits the necessary instructions to trigger client requests.
Love that they have taken this route; this is the way bootstrapped toolchains should be: minimal building blocks, with everything else in the language itself.
I am still curious: had they not gone this route, and avoided the other two routes mentioned, what could they have done to make this process as simple as the rest of Go tends to be, and nearly as performant? I guess this is an open question to be solved at a future date.
It would be another scenario to use as ammunition for "see, you can't implement a language toolchain without using C", usually voiced by folks without a background in compiler design, who don't understand that most of the time that is a decision born of convenience and nothing else.
Assembly isn't that hard; those of us who grew up around 8-bit home computers were writing Z80 and 6502 assembly aged 10-12, while having fun cracking games and setting the roots of the demoscene.
Oh. There was a comment to your comment saying that kids learning assembly was easy and — I guess? — implying that adults-learning-assembly is hard. I teach adults assembly on an irregular basis. Adults-learning-assembly is hard because adults are rational animals who (correctly) assume I'm an idiot for insisting on assembly. Once I explain the long-term benefits for our exceedingly specific use case, they pick up assembly in a few hours. Assembly isn't hard. Assembly is annoying because it takes absolutely gobsmacking amounts of assembly to do anything.
Assembly is simple because each instruction is very simple.
Also, assembly is complex because each instruction is very simple.
Large Scale Assembly puts a huge premium on planning and design. If you try to just bang things out, you'll live in a world of pain.
Remember, simple is not the same as easy.
See also: go.
Well.
Don't try to write "good" go and it becomes easy too.
I would rather see clearly defined, readable, documented code that isn't optimal... than good code lacking any of those traits.
And good code often isn't clearly defined, often isn't reader-friendly, and often lacks documentation (this bit is fixable but ends up needing a lot more of it).
Bad code that works isn't bad.
I guess because there is nowadays a misconception that it is harder than it actually is in practice.
Exactly, plus one gets to understand what JIT and AOT toolchains are actually generating.
> it takes absolutely gobsmacking amounts of assembly to do anything.
"ever built something big in LEGO? Yeah? Yeah."
Do you have anything public on how you get people writing assembly in a few hours?
> Assembly isn't that hard, those of us that grown around 8 bit home computers were writing Z80 and 6502 Assembly aged 10 - 12 years old, while having fun cracking games and setting the roots of Demoscene.
Finally. I found my people.
Z80 on the TI-80 series of calculators for me. The Internet was very young, but there was ticalc.org. Damn, it's still around. I wonder if I can log in?
Z80 and 6502 for me, pre internet, which was a luxury of sorts I wish we all could go back to now and then.
Isn't Valgrind written in C?
Yes, but the goal of this is integration, not Rewrite in Go for Valgrind.
To the deleted sibling comment about children vs. adults: some children are also optimists.
I'm glad to see rsc still actively involved. And commenting on commit messages.
The older I get the more I value commit messages. It's too easy to just leave a message like "adding valgrind support", which isn't very useful to future readers doing archaeology.
rsc is a rock star! I believe his focus now is on using AI to manage issues and PRs and such -- I'm sure it will bear copious fruit.
It only works if every package tests with it.
Otherwise the relevant warnings get swamped by a huge amount of irrelevant warnings.
This is why running Valgrind on Python code does not work.
If that were true it would also apply to C and C++. I have used Valgrind with Python + Boost C++ hybrid programs and it worked fine after spending an hour making a suppressions file.
> it worked fine after spending an hour making a suppressions file.
So you are confirming the problem, but treating ignoring it as the solution for everyone?
it's a rejection of the thesis that it "does not work". It does, but it requires investing in a suppression file.
plus, LLMs can generate suppression files from logs. It's much faster these days.
I've had success with this approach.
Valgrind can generate suppression files directly.
Yes, but in my past experience, one often wants to edit those files to make them more generic… Valgrind struggles to distinguish which parts of the call stack are essential to a "known leak" versus which are coincidental.
I have never tried asking an LLM to do this, but it seems like the kind of problem with which an LLM might have some success, even if only partial.
yes, but LLMs have access to and understanding of my code, and can better discern what should be suppressed.
Yeah I tried that on a PySide6 application.
Trust me, it does not work.
skill issue
For me, a waaay outdated suppressions file for Qt + a rough understanding what syscalls and frameworks do is enough. If my app crashes in a network request and a byte sent to the X server (old example, I use Wayland now) is uninitialized, I know to ignore it.
Valgrind(-memcheck) is an extremely important tool in memory-unsafe languages.
Simplification of overwhelming information sounds like a good use case for local LLMs. So I agree with other comments that toolchains are better positioned to include batteries like Valgrind.
What do you mean by "if every package tests with it" in the context of Go?
Very cool. Should flush out a few bugs.
I'd be interested to know why Valgrind vs the Clang AddressSanitizer and MemorySanitizer. Those normally find more types of errors (like use-after-return), and I find them significantly faster than Valgrind.
Go has had its own version of msan and asan for years at this point.
Go doesn't use clang/llvm, so they can't use these tools.
TinyGo does, but it is also behind in language support.
I'm interested too. I'm using a Go program that calls a C++ library via SWIG, and I wanted to find out whether that library had a memory leak, or maybe the SWIG wrapper I wrote. But this kind of problem can't be detected via pprof, so I thought: what if Go supported Valgrind?? And then I found these changes.
I'm not sure if this will work though, will it @bracewel?
Valgrind also does stuff like memory tracking and memory-profiling, so this is great also from a performance tracking point of view.
Valgrind is way faster and can be attached to a running program.
Programs running under any Valgrind tool are executed on a CPU emulator, making them quite a bit slower than, say, running the instrumented binaries required by sanitizers; it's often an order of magnitude slower, and could very well be close to two orders of magnitude slower in some cases. This also means it just can't be attached to a running program, because, well, it's emulating a whole CPU to track everything it can.
(Valgrind using a CPU emulator allows for a lot of interesting things, such as also emulating cache behavior and whatnot; it may be slow and have other drawbacks -- it has to be updated every time the instruction set adds a new instruction for instance -- but it's able to do things that aren't usually possible otherwise precisely because it has a CPU emulator!)
You're right and I was wrong, but in my experience Valgrind has been way faster than AddressSanitizer. I don't perceive a difference with Valgrind, while ASan makes the program around 10x slower.
Valgrind is a hidden super-power. In much of the software I write, there's 'make check' which runs the test cases, and 'make check-valgrind' that runs the same test cases under valgrind. The latter is only used on developer machines. It often reveals memory leaks or other subtle memory bugs.
Somewhat yes, but as soon as you enter the world of multi-threading (which Go does a lot), the abstraction doesn’t work anymore: as I understand it (or rather, understood: last time I really spent a lot of time digging into it with C++ code was a while ago) it uses its own scheduler, and as such, a lot of subtle real world issues that would arise due to concurrency / race conditions / etc do not pop up in valgrind. And the performance penalty in general is very heavy.
Having said that, it saved my ass a lot of times, and I’m very grateful that it exists.
I wrote the Lwan web server, which similarly to Go, has its own scheduler and makes use of stackful coroutines. I have spent quite a bit of time Valgrinding it after adding the necessary instrumentation to not make Valgrind freak out due to the stack pointer changing like crazy. Despite a lot of Valgrind's limitations due to the way it works, it has been instrumental to finding some subtle concurrency issues in the scheduler and vicinity.
From a quick glance, it seems that Go is now registering the stacks and emitting stack change commands on every goroutine context switch. This is most likely enough to make Valgrind happy with Go's scheduler.
TSAN is not perfect but Go has had built-in TSAN support for a very long time (go build -race).
Also, strictly speaking all Go programs are multithreaded. The inability to spawn a single-threaded Go program is actually a huge issue in some system tools like container runtimes and requires truly awful hacks to work around. (Before you ask, GOMAXPROCS=1 doesn't work.)
IME Helgrind does a great job finding concurrency issues.
Yes, though last I tried to use it, it sadly didn't support OpenMP. Maybe that's fixed now (that was a while ago).
(I think it was possible to use it on OpenMP if you compiled your compiler with special options.)
tsan from LLVM works a bit better in my experience. I still like valgrind in general though!
For fuzzing we don't use valgrind, but use Clang + ASan instead. All these tools have their niches.
looks very promising. One of the biggest issues in Go for me is profiling and constant memory leaks/pressure. Not sure if there is an alternative to what people use now.
I'd love to hear more! What kind of profiling issues are you running into? I'm assuming the inuse memory profiles are sometimes not good enough to track down leaks since they only show the allocation stack traces? Have you tried goref [1]? What kind of memory pressure issues are you dealing with?
[1] https://github.com/cloudwego/goref
Disclaimer: I work on continuous profiling for Datadog and contribute to the profiling features in the runtime.
I for one am still mystified how it's possible that a GC language can't expose the GC roots in a memory profile. I've lost so many hours of my life manually trying to figure out what might be keeping some objects live, information the GC figures out every single time it runs...
Do you think the GC roots alone (goroutine stacks with goroutine id, package globals) would be enough?
I think in many cases you'd want the reference chains.
The GC could certainly keep track of those, but at the expense of making things slower. My colleagues Nick and Daniel prototyped this at some point [1].
Alternatively the tracing of reference chains can be done on heap dumps, but it requires maintaining a partial replica of the GC in user space, see goref [2] for that approach.
So it's not entirely trivial, but rest assured that it's definitely being considered by the Go project. You can see some discussions related to it here [3].
Disclaimer: I contribute to the Go runtime as part of my job at Datadog. I can't speak on behalf of the Go team.
[1] https://go-review.googlesource.com/c/go/+/552736
[2] https://github.com/cloudwego/goref/blob/main/docs/principle....
[3] https://github.com/golang/go/issues/57175
no, haven't heard of goref yet but will give it a shot!
usually I go with pprof, like basic stuff, and it helps. I would NOT say memory leaks are the biggest or most common issue I see, however as time goes on and services become more complicated, what I often see in the metrics is RAM getting eaten and not freed, so the app consumes more and more memory over time and only a restart helps.
It's hard to call it a memory leak in the "original meaning of memory leak", but the memory does not get cleaned up because of the choices I made, and I want to understand how to do better.
Thanks for the tool!
Sorry if this is a basic question but are you setting the GOMEMLIMIT?
Also, are you running the code in a container? In K8s?
How are you getting "constant memory leaks" in a GC'd language?
This was often a question asked in Java interviews as well.
In Java, heap fragmentation is usually considered a separate issue, but I understand Go has a non-moving garbage collector, so you can lose memory to pathological allocation patterns that overly fragment memory and require constantly allocating new pages. I could be wrong about this since I don't know a lot about Go, but heap fragmentation can cause trouble for long-running programs with certain types of memory allocation.
Besides that, applications can leak memory by stuffing things into a collection (map or list) and then not cleaning it up when entries become "stale". The references are live from the perspective of the garbage collector but dead from the application's perspective. Weak references exist to solve this problem when you expose an API that stores something but can't know when it goes out of scope. I wouldn't consider this common, but if you are building a framework or any kind of platform code you might need to reach for it at some point. Some crazy folks also intern every string they encounter "for performance reasons", which can obviously lead to what less crazy folk would consider a memory leak. Other folk stick a cache around every client and might not tune the cache parameters, leading to unnecessary memory pressure...
Golang has a feature that I love in general but that makes it very easy to keep unintended allocations around. If you have a struct with a simple int field, and you store that somewhere as an *int, the entire struct and anything it points to will be kept alive. This is super useful for short-lived pointers, and super dangerous for long-lived pointers.
Most other widely used GCed languages don’t allow the use of arbitrary interior pointers (though most GCs can actually handle them at the register level).
> If you have a struct with a simple int field, and you store that somewhere as an *int, the entire struct and anything it points to will be kept alive.
While Go allows interior pointers, I don't think what you say is true. runtime.KeepAlive was added exactly to prevent GC from collecting the struct when only a field pointer is stored. Take a look at this blog post, for example: https://victoriametrics.com/blog/go-runtime-finalizer-keepal...
I don’t believe that’s the case based on the example in the blog post. The fd field in that struct was passed into the later function by value (i.e. as an int, not an *int), so there was no interior pointer in play at all.
You are right; I stand corrected
A GC only deallocates unreferenced memory; if you keep unused references, that's a leak the GC won't catch, as it has no way to know that you won't need them later.
It can happen when your variables have too long a lifespan, or when you have a cache whose entries are not properly evicted.
But how would Valgrind know more than the GC? Of course a program in a GCed language can leak memory, but it’s not clear to me how Valgrind would detect the kinds of memory leaks that you can create in pure Go code (without calling into C or using unsafe functions to allocate memory directly).
Valgrind can tell you what is still in use when the program exits, along with useful information, like where it comes from. You can then assess the situation and see if it is normal or not.
In addition, Valgrind is actually a complete toolsuite, not just a memory leak detector. Among these tools is "massif", a memory profiler, so you will have a graph of memory use over time, and it can tell you where these allocations come from.
No, if your language is fully GCed you can have a debug GC that does the job more efficiently than Valgrind on this task, but I don't know if that is the case for Go.
Pprof should cover most of that for pure Go code, though.
There are ways; GC isn't perfect.
A common one I see fairly often is opening a big file, creating a "new slice" on a subset of the file, and then using the "new slice" expecting the old large object to be dropped.
Except the "new slice" is just a reference into the larger slice, so it's never marked unused.
Interesting, I always thought of slices as stand-alone. I wonder if it's the same in Python?
A Go slice is a wrapper around a normal array. When you take sub-slices, those also point to the original array. There's a possible optimization to avoid this footgun where they could reallocate a smaller array if only subslices are reachable (similar to how they reallocate when a slice grows beyond the size of the underlying array).
Most subslicing is just the 2-arg kind, so it would not be safe to truncate the allocation even if the subslice is the only living slice, because the capacity still allows indirect access to the trailing elements of the original slice. This optimization would only be truly safe for strings (which have no capacity) or the much less common 3-arg slicing (and Go would need to have a compacting GC).
Of course, the language could also be changed to make 2-arg slice operations trim the capacity by default, which might not be a bad idea anyway.
Yeah I understand why they can't do it for backwards compat reasons. Considering they had the forethought to randomize map iteration order it's a big foot gun they've baked into the language.
No, python slice syntax on lists returns a new list. Of course, the slice syntax on your own classes can do just about anything using operator overloading.
there are many ways:
Not going to claim this is all sources, but Go makes it extremely easy to leak goroutines.
it's not hard. GC lets shit leak until it decides to clean it up...
do you think they would enable Valgrind if there were no leaks?
valgrind finds sooooo many more problems than just memory leaks
uninitialized memory, illegal writes, etc... There's a lot of good stuff that could be discovered.
Ideally, they would have learnt from other languages and offered explicit control over what goes onto the stack instead of relying on escape analysis alone.
As it is, the only way to currently handle that is with "-gcflags -m=3", or using something like the VS Code Go plugin via the "ui.codelenses" and "ui.diagnostic.annotations" configurations.
I sometimes dream of a GCed language with a non-escaping pointer type. However to make it really useful (i.e. let you put it inside other non-escaping structs) you need something on the scale of the Rust borrow checker, which means adding a lot of complexity.
Which is why several GC languages (GC in the CS meaning) are instead going down the path of keeping their approach to automatic resource management, plus type-system improvements for low-level, high-performance code when needed.
So you only go down into the complexity of affine types, linear types, effects, formal proofs, and dependent types if really needed, after spending time reasoning with a profiler.
Now, this does not need to be so complex: since the days of languages like Interlisp-D and Cedar at Xerox, many GC languages have offered value types and explicit stack allocation.
That alone is already good enough for most scenarios, provided people actually spend some time thinking about how to design their data structures instead of placing everything on the heap.
https://oxcaml.org/documentation/stack-allocation/intro/ ?
> constant memory leaks/pressure
In Go, never launch a goroutine that you don't know exactly how it will be cleaned up.
pprof is pretty good, what do you need?
yes, that's what I use; just wondering if there are alternatives. I am not sure how Valgrind compares to it, or to the goref tool mentioned above. Just asking around, it doesn't hurt.
Alternative to solve what problem? pprof is very powerful, it's not missing much.
pprof doesn't tell you if something was leaked, i.e. is still around.
I fixed a leak recently because of misuse of a slice with code like
slice = append(slice[1:], newElement)
I only figured it out by looking at the pprof heap endpoint output and noticing there were multiple duplicate entries.
This feels more like a failure than a win.
Don't get me wrong, I love Valgrind, and have used it extensively in my past life as a C developer. But the fact that Go needs Valgrind feels like a failure of the language or the ecosystem. I've been doing Rust for ~6 years now, and haven't had to reach for Valgrind even once (I think a team member may have used it once).
I realize that's probably because of cgo, and maybe it's in fact a step forward, but I can't help but feel like it is a step backwards.
I never understand why there's always one of the top comments on every Go post being derogatory and mentioning Rust. It never fails. It starts to feel like a weird mix of defensiveness and superiority complex.
If I had to guess: because Rust engineers are like every other engineer that reads HN. They read stories, sometimes comment, and share from their own experience. Rust and Go were created roughly at the same time and are more directly comparable than e.g. Go and Ruby, so you don't see Ruby people writing about not having to use Valgrind. This means you'll see more comments from Rust devs than others in Go threads.
At least that's why I wrote that original comment.
> and are more directly comparable than e.g. Go and Ruby
Why do you say that? The original Go announcement made it abundantly clear that it was intended to be like a dynamically-typed language, but faster. It is probably more like Python than Ruby, as it clearly took a lot of ideas from Python, but most seem to consider those languages to be in the same hemisphere anyway.
Beyond maybe producing compiled binaries, which is really just an implementation detail rather than a hard feature of the language, what is really comparable with Rust?
Go was sold as a "systems language" for a long time, and a lot of people deciding on what language to use in the 2010s made decisions based on that bit of advertising. Rust filled the same niche at around the same time so it's not really surprising people would mentally connect the two.
FWIW, I suspect the entire container ecosystem would not have gone with Go if it wasn't for Docker picking Go mostly based on the "systems language" label. (If only they knew how painful it would be...)
To be clear, I use both and like them for different reasons. But in my experience I agree that Go and Rust are far closer brethren than Go and Python. A lot of the design nexus in Go was to take C and try to improve it with a handful of pared-down ideas from Java -- this leads to a similar path as the C++-oriented design nexus for Rust. Early Rust even had green threads like Go. They are obviously very different languages in other respects, but it's not particularly surprising that people would compare them given their history.
Your later comments about lots of Go users coming from Python are not particularly surprising and I don't think they actually help your point -- they wanted to improve performance, so they switched to a compiled language that handles multithreading well. I would argue they moved to Go precisely because it isn't like Python. If Go didn't exist they would've picked a different language (probably Rust, C++, or any number of languages in the same niche). Maybe if there was a "Python-but-fast" language that they all switched to you would have a point, but Go is not that language.
I don't think this is correct. Go is a language with structs, arrays, value semantics and pointers, and an `unsafe` package for performing low-level operations on those pointers and deal directly with the layout of memory. And in practice Go and Rust have found use in a lot of the exact same systems programming and network programming domains, as replacement languages for C.
Go is certainly a higher-level language than C, but to say it's at all similar to Python or Ruby is nonsensical.
> Go is a language with structs, arrays
Just like Ruby. Just like pretty much every language people actually use.
> value semantics and pointers
Value semantics are one of the tricks it uses to satisfy the "but faster" part. It is decidedly its own language. But it only supports value semantics, so this is more like Ruby, which only supports reference semantics. Rust supports both value and reference semantics, so it is clearly a very different beast.
> and an `unsafe` package for performing low-level operations on those pointers
The Ruby standard library also includes a package for this.
> And in practice Go and Rust have found use in a lot of the exact same systems programming and network programming domains, as replacement languages for C.
Maybe in some cases, but the data is abundantly clear that Go was most adopted by those who were previously using Ruby and Python. The Go team did think at one point that it might attract C++ programmers, but they quickly found out that wasn't the case. It was never widely adopted in that arena. Whereas I think it is safe to say that many C++ programmers would consider Rust. Which makes sense as Rust is intended to play in the same ballpark as C++. Go was explicitly intended to be a 'faster Python'.
> but to say it's at all similar to Python or Ruby is nonsensical.
Go is most similar to Go, but on the spectrum is way closer to Python and Ruby than it is Rust. It bears almost no resemblance to Rust. Hell, even the "Rustacians'" complaint about Go is that it is nothing like Rust.
> Just like Ruby. Just like pretty much every language people actually use.
Well, no. A struct is a series of objects of different types adjacent in memory. An array is similar but all the objects are the same type. These can (sometimes) be allocated directly on the stack, rather than having to participate in garbage collection.
Languages with reference semantics don't have this, because what you're storing adjacently in memory is always just a series of pointers to the data, rather than the data itself.
This distinction is not only relevant for performance/cache locality. It's relevant any time an application needs to deal directly with the layout of memory, such as when passing data to or from C code.
> But it only supports value semantics, so this is more like Ruby, which only supports reference semantics.
It's not clear to me how it's possible that you wrote this and deemed it logical.
> Rust supports both value and reference semantics, so it is clearly a very different beast.
No, Rust has only value semantics. Rust's references are themselves just values.
There are some languages with both - C# for example has struct types which are copied when passed by value, in addition to regular objects which are implicitly just pointers to heap data. But Rust is not one of those languages.
> The Ruby standard library also includes a package for this.
No, it doesn't. What are you talking about?
> Maybe in some cases
So then you agree. I never claimed it was all cases.
> because what you're storing adjacently in memory is always just a series of pointers to the data
What, exactly, do you think a pointer is? Magic...?
> It's not clear to me how it's possible that you wrote this and deemed it logical.
Why would a stupid premise be followed with a logical response? Use your logic.
> No, it doesn't.
Yes it does. What are you talking about?
> So then you agree.
Yes, I have always agreed that just about any programming language can be used to solve just about any programming task. Still, some languages are more similar to each other than others. It isn't just coincidence that Go is most commonly used where Ruby was historically used and Rust where C++ was historically used.
What?
Here is a Rob Pike blog post about how, instead of getting C and C++ developers, they got the dynamic language folks.
https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
You hit the wrong reply button. It was the sibling comment that thought that Go attracted C developers.
Not at all.
Let me help you narrow in on the only bit where your comment can find relevance:
> And in practice Go and Rust have found use in a lot of the exact same systems programming and network programming domains, as replacement languages for C.
To which was already followed up with:
> the data is abundantly clear that Go was most adopted by those who were previously using Ruby and Python.
Nice of you to say the exact same thing again, I guess, but it would be more effective if it were correctly positioned in the thread. I know, it can be difficult to track down the correct reply button. Especially when in a rush to post something that just repeats what is already there.
You should see the C++ threads
They were developed around the same time, so maybe that accounts for some of the comparisons, but at least in this case I think it matters that they are both relatively new languages with modern tooling.
If they were being compared, shouldn't we also see the inverse? There is a discussion about Rust on the front page right now, with 230 comments at time of writing, and not a single mention of Go.
In fairness, the next Rust discussion a few pages deep does mention Go, but in the context of:
1. Someone claiming that GC languages are inherently slow, where it was pointed out that it doesn't have to be that way, using Go as an example. It wasn't said in comparison with Rust.
2. A couple of instances of the same above behaviour: extolling the virtues of Rust and randomly deriding Go. As strange as the above behaviour is, at least Go was already introduced into the discussion. In these mentioned cases Go came from completely out in left field, having absolutely nothing to do with the original article or any relation to the thread, only showing up seemingly because someone felt the need to put it down.
Rust is, in my opinion, overrepresented by a vocal minority on HN. A vocal corpus that tends to be passive-aggressive more often than other language communities, in my experience.
Which is sad because I like the language and find it useful. But a part of the community does a disservice with comments like your parent comment. It's often on the cusp of calling people who code in Go "stupid". But I digress.
Religious wars. C is Judaism. Go/Rust/Python/Ruby are all the different sects of Christianity. AI is Islam.
This is mainly actually for testing constant-time code, rather than doing proper memory tracking (see https://www.imperialviolet.org/2010/04/01/ctgrind.html for a slightly out-of-date description of this technique).
Oh, interesting. Thanks for sharing!
I guess there's also callgrind that may be useful for Gophers.
Useful for people -- believe it or not, not all of us internalize the language we use as part of our identity (and species).
I've had to use Valgrind a bit in Rust. Not much, but the need is there. It really depends on the product you are working with. Rust is an extremely flexible language and can be used in many spaces, from high-level abstract functional code to low-level C-like code on a microcontroller. Some use cases would never imagine using unsafe; some can't live without it. For most, FFI to C is just a fact of life, so it comes up somewhere.
When I used it before, I was working on a Rust module that was loaded in by nginx. This was before there were official or even community bindings to nginx... so there was a lot of opportunity for mistakes.
Depends on how much unsafe you actually happen to write, whether you use unsafe crates, or whether you link against C and C++ libraries.
I also seldom need something like this in Java, .NET or node, until a dependency makes it otherwise.
For sure, that's why I said it's possibly due to the ecosystem. We link against two C libraries other than libc: OpenSSL and librdkafka. Though they are both abstracted away with solid Rust bindings, so as a consumer of these libs it hasn't been a problem for us (I guess it may be a problem for the people developing them).
I guess maybe the failure is not the addition of it (as it's useful for people writing the bindings), but rather how happy everyone on the thread is (which means it's more useful than it should be due to a failure with the ecosystem).
> but rather how happy everyone on the thread is (which means it's more useful than it should be due to a failure with the ecosystem).
More likely Go users are just happy in general. The Rust users always come across as being incredibly grumpy for some reason, which may be why that happiness — or what would be considered normalcy in any other venue — seems to stand out so much in comparison.
> We link against [...] OpenSSL
Which is kind of funny as Valgrind support was added specifically for the crypto package to help test tricky constant-time cases. You think that the failure of the ecosystem is that Go has built high-quality crypto support in rather than relying on the Heartbleed library instead...? That is certainly an interesting take.
Rust has rustls; we need to use OpenSSL for a very specific set of reasons, unfortunately, but rustls is great and most people can just use that.
Also, I had no idea it was added because of constant-time crypto; that was shared after I wrote my top-level comment.
I'm not entirely clear here. Are you saying that the failure of the Go ecosystem was in not building on rustls (which didn't exist at the time), or are you saying that the failure of the ecosystem was in you not stopping to think before writing your comment?
Even though there is the whole "cgo is not Go" meme, it certainly makes it rather easy to write C and C++ code directly in a Go project, so I imagine some folks reach for it rather easily.
Which I can relate to: when doing stuff that is Windows-only, I'd rather make use of C++/CLI than getting P/Invoke declarations correct.
That's quite nice. There is a small risk that the client request mechanism might change. The headers don't change much - mostly when a new platform gets added. Go is only targeting amd64 and arm64.
This isn't so much about leaks. The most important thing this will enable is correct analysis of uninitialised memory. Without annotation, memory that gets recycled will not be correctly poisoned. I imagine it will also be useful for the other tools (except cachegrind and callgrind).
Not even a Go user, and yet this is one of the best things I have read this morning. Valgrind is possibly one of the most powerful tools I have in my belt!!
Would you mind elaborating? I don't program in C but it sounds interesting.
I love Valgrind, but since my main development machine is an M3, I don’t get to use it nearly as much as I would like.
https://github.com/LouisBrunner/valgrind-macos
This is only useful for cgo correct?
It's really for crypto. https://news.ycombinator.com/item?id=45348445
But maybe others will find a way to use it. Who knows?
I presume it is also useful if you are using the unsafe APIs as well to mess with pointers and do raw memory reads.
damn, i remember using valgrind when writing C in university a long time ago.
I remember when it came to be, and I was already working. Yep feeling gray.
And I’m going to be using Valgrind in a few weeks, writing C in university now.
oh man. you came at the right time.