There are two kinds of bugs: the rare, tricky race conditions and the everyday “oh shucks” ones. The rare ones show up maybe 1% of the time—they demand a debugger, careful tracing, and detective work. The “oh shucks” kind, where I'm half sure what it is the moment I see the shape of the exception message from across the room, is all the rest of the time. A simple print statement usually does the trick for this kind.
Leave us be. We know what we’re doing.
I see it the exact other way around:
- everyday bugs, just put a breakpoint
- rare cases: add logging
By definition, a rare case will rarely show up in my dev environment, if it shows up at all, so the only way to find it is to add logging and look at the logs the next time someone reports the same bug after the logging was added.
Something tells me your debugger is really hard to use, because otherwise why would you voluntarily choose to add and remove logging instead of just activating the debugger?
Rare 1% bugs practically require print debugging because they are only going to appear 6 times if you run the test 600 times. So you just run the test 600 times all at once, look at the logs of the 6 failed tests, and fix the bug. You don’t want to run the debugger 600 times in sequence.
The tricky race conditions are the ones you often don't see in the debugger, because stopping one thread makes the behavior deterministic. But that aside, for webapps I feel it's way easier to just set a breakpoint and stop to see a var's value instead of adding a print statement for it (just to find out that you also need to see the value of another var). So given that you always start in debugging mode, there's no downside if you have a good IDE.
I used to agree with this, but then I realized that you can use trace points (aka non-suspending break points) in a debugger. These cover all the use cases of print statements with a few extra advantages:
- You can add new traces, or modify/disable existing ones at runtime without having to recompile and rerun your program.
- Once you've fixed the bug, you don't have to cleanup all the prints that you left around the codebase.
I know that there is a good reason for debugging with prints: the debugging experience in many languages sucks. In that case I always use prints. But if I'm lucky enough to use a language with good debugging tooling (e.g. Java/Kotlin + IntelliJ IDEA), there is zero chance I'll ever print for debugging.
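For the gdb crowd, the same idea exists on the command line as dprintf; a sketch with a made-up file, line, and variables:

```
(gdb) dprintf worker.c:120,"job=%d state=%d\n",job->id,job->state
(gdb) continue
```

It prints and keeps going, and you can disable or delete it later without touching the source.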
I've had far better luck print debugging tricky race conditions than using a debugger.
The only language where I've found a debugger particularly useful for race condition debugging is go, where it's a lot easier to synthetically trigger race conditions in my experience.
Even print debugging is easier in a good debugger.
Print debugging in frontend JS/TS is literally just writing the statement "debugger;" and saving the file. JS, unlike supposedly better-designed languages, is designed to support hot reloading, so oftentimes just saving the file will launch me into the debugger at the line of code in question.
I used to write C++, and setting up print statements, while easier than using LLDB, is still harder than that.
I still use print debugging, but only when the debugger fails me. It's still easier to write a series of console.log()s than to set up logging breakpoints. If only there was an equivalent to "debugger;" that supported log and continue.
> The rare ones show up maybe 1% of the time
Lucky you lol
What I've found is that as you chew through surface level issues, at one point all that's left is messy and tricky bugs.
Still have a vivid memory of moving a JS frontend to TS and just overnight losing all the "oh shucks" frontend bugs, being left with race conditions and friends.
Not to say you can't do print debugging with that (tracing is fancy print debugging!), but I've found that a project with a lot of easy-to-debug issues tends to be at a certain level of maturity, and as time goes on you start ripping your hair out way more.
Well, if you have a race condition, the debugger is likely to change the timing and alter the race, possibly hiding it altogether. Race conditions are where print is often more useful than the debugger.
Fully agree.
If I find myself using a debugger it’s usually one of two things:
- freshly written low-level assembly code that isn’t working
- a basic userspace app crash (in C) where whipping out gdb is faster than adding prints and recompiling.
Never even needed a debugger for complex kernel drivers — just prints.
> the rare, tricky race conditions [...]. The rare ones show up maybe 1% of the time—they demand a debugger,
Interesting. I usually find those harder to debug with a debugger. Debuggers change the timing when stepping through, making the bug disappear. Do you have a cool trick for that? (Or a mundane trick, I'm not picky.)
It is also much, much easier to fix all kinds of other bugs by stepping through code with the debugger.
I am in the camp where the 1% on the easy side of the curve can be efficiently fixed by print statements.
> Leave us be. We know what we’re doing.
No shade, this was my perspective until recently as well, but I disagree now.
The tipping point for me was the realisation that if I'm printing output for debugging, I must be executing that code, and if I'm executing that code anyway, it's faster for me to click a debug point in an IDE than it is to type out a print statement.
Not only that, but the thing that I forgot to include in my log line doesn't require adding it in and re-spinning, I can just look it up when the debug point is hit.
I don't know why it took me so long to change the habit but one day it miraculously happened overnight.
Often you can also just use conditional breakpoints, which surprisingly few people know about. (To be clear, it's still a breakpoint, but your application just auto-continues if the condition is false.) It's usually available via right-click on the spot where you'd set the breakpoint.
I don't see any evidence that the 1% of bugs can be reduced so easily. A debugger is unsuitable just as often as print debugging is. There is no inherent edge it gives to the sort of reasoning demanded. It is just a flathead rather than a phillips. The only thing that distinguishes this sort of bug from the rest is pain.
When the print statements cause a change in asynchronous data hazards that leads to the issue disappearing, then what's the plan since you appear to "know it all" already? Perhaps you don't know as much as you profess, professor.
> Leave us be. We know what we’re doing.
No. You’re wrong.
I’ll give you an example: a plain vanilla-ass bug that I dealt with today.
Teammate was trying to use portaudio with ALSA on one of our cloud Linux machines for CI tests. Portaudio was failing to initialize with an error that it failed to find the host API.
Why did it fail? Where did it look? What actual operation failed? Who the fuck knows! With a debugger this would take approximately 30 seconds to understand exactly why it failed. Without a debugger you need to spend a whole bunch of time figuring out how a random third party library works to figure out where the fuck to even put a printf.
Printf debugging is great if it’s within systems you already know inside and out. If you deal with code that isn’t yours, then a debugger is more than an order of magnitude faster and more efficient.
It’s super weird how proud people are to not use tools that would save them hundreds of hours per year. Really really weird.
Every engineer should understand how to use a debugger and a time profiler (one that gives a call tree). Knowing how to do memory profiling is incredibly valuable too.
So many problems can be solved with these.
And then there's some more specialized tooling depending on what you're doing that can be a huge help.
For SQL, the query planner and index hit/miss / full table scan.
And things like valgrind or similar for cache hit/miss.
Proper observability (spans/traces) for APIs...
Knowing that the tools exist and how to use them can be the difference between software and great software.
Though system design / architecture is very important as well.
Renderdoc!
So, uh, everything is important, and every engineer must know everything then?
I mean, don't get me wrong, I do agree engineers should at least be aware of the existence of debuggers & profilers and what problems they can solve. It's just that not all the stuff you've said belongs in the "must know" category.
I don't think you'll need valgrind or query planning in web frontend tasks. Knowing them won't hurt though.
It may sound obvious to folks who already use a debugger, but in my experience a decent chunk of people don't use them because they just don't know about them.
Spread the good word!
Depending on the language or setup, debuggers can be really crappy. I think people here would just flee and go find a better-fitting stack, but more pragmatic workers will just learn to debug with the other tools (REPL, structured logging, APMs etc.)
I had a think about where I first learned to use a debugger. The combo of M$ making it easy for .NET and VB6 and working professionally and learning from others was key. Surprised it is less popular. Tests have made it less necessary perhaps BUT debugging a unit test is a killer move. You quickly get to the breakpoint and can tweak the scenario.
yeah, tons don't know they exist. But there's also a lot of people - new and veteran - who are just allergic to them, for various reasons.
Setting up a debugger is the very first thing I do when I start working with a new language, and I always use it to explore the code on new projects.
This also applies to testing. So much legacy code out there that's untested.
Author missed one of the best features: easy access to hardware breakpoints. Breaking on a memory read or write, either a raw address or via a symbol, is one of the most time saving debugging tools I know.
windbg used to offer scripting capabilities that teams could use to trigger validation of any number of internal data structures essentially at every breakpoint or watchpoint trigger. it was a tremendous way to detect subtle state corruption. and sharing scripts across teams was also a way to share knowledge of a complex binary that was often not encoded in asserts or other aspects of the codebase.
Oh my god, same. This literally catches bugs with a smoking gun in their hand in a way that's completely impossible with printf. I'd upvote this 100 times if I could.
From the same toolbox: expression watch. Set a watch on the invariant being violated (say "bufpos < buflen") and get a breakpoint the moment it changes.
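In gdb that's roughly the following (names taken from the comment above, address made up; the expression form falls back to a slower software watchpoint): break when the variable is written, when a raw address is read, or the moment the invariant expression changes value.

```
(gdb) watch bufpos
(gdb) rwatch *(char *)0x00601040
(gdb) watch bufpos < buflen
```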
Especially when combined with reverse execution in rr or UndoDB!
Is there somewhere where this approach is described in more detail?
While a debugger is of high value, having access to a REPL also covers the major use cases.
In particular, REPL tools will work in a remote session, on pre-production servers, etc. _If_ the code base is organized in a somewhat modular way, it can be more pleasant than a debugger at times.
Makes me wonder if the state of debugging has improved in PHP land. It was mostly unusable for batch process debugging, or when the server memory wasn't infinite, which is kinda the case most of the time for us mere mortals.
I am the author of the posted flamebait. I agree.
I use IPython / JShell REPLs often when the code is not finished and I have to call a random function without an entrypoint.
In fact it's possible to jump to the graphical debugger from the Python REPL when running locally. PyCharm has this feature natively. In VSCode you can use a simple workaround like this: https://mahesh-hegde.github.io/posts/vscode-ipython-debuggin...
IME console-based debuggers work great for single-threaded code without a lot of console output. They don't work that well otherwise. GUI-based debuggers can probably fix both of those issues. I just haven't really tried them as much.
pdb is great for python, though.
I frequently use the go debugger to debug concurrent go routines. I haven’t found it any different than single threaded debugging.
I simply use conditional break points to break when whatever go routine happens to be working on the struct I care about.
Is there more to the issue?
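For anyone who hasn't tried it, a minimal Delve version of that (file, line, and field are made up): set the breakpoint, attach a condition so it only stops for the struct you care about, and continue.

```
(dlv) break worker.go:87
(dlv) condition 1 job.ID == 42
(dlv) continue
```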
It's not a silver bullet, but Visual Studio is leaps and bounds ahead of gdb et. al. for debugging C/C++ code. "Attach to process" and being able to just click a window is so easy when debugging a large Windows app.
lol, agree to disagree here. While the interface to gdb is annoying, there are many gui frontend alternatives.
VS, on the other hand, gets worse with every release. It is intolerably slow and buggy at this point. It used to be a fantastic piece of software, and is now a fantastic pile of shit.
Most languages let you print the stack, so you can easily see the stack using print debugging.
Anecdotally, dynamic expressions are impossibly slow in the cases I’ve tried them.
As the author mentions, there are also a number of cases where debuggers don’t work. Personally, I’m going to reach for the tool that always works vs. sometimes works.
> I’m going to reach for the tool that always works vs. sometimes works.
This is only logical if you're limited to one tool. Would you never buy a power tool because sometimes the power goes out and a hand tool is your only choice?
but can you go back in the stack and inspect the variables and related functions there in print debugging?
Debuggers are hard to use outside of userland.
For really hairy bugs in programs that can't be stopped (kernel/drivers/realtime, etc.), logging works.
And when it doesn't, like when you can't do I/O or switching of any kind, log non-blocking to a buffer that is dumped elsewhere.
Also, related: it is harder than it should be to debug the Linux kernel. Just getting a symbolized stack trace is ridiculously hard.
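A minimal sketch of that kind of non-blocking trace buffer (plain C, single writer assumed, all names made up):

```c
#include <stdint.h>

#define TRACE_SLOTS 1024u

struct trace_rec { uint32_t id; uint32_t a; uint32_t b; };

/* Fixed-size ring of trace records; the hot path never blocks or does I/O. */
static struct trace_rec trace_buf[TRACE_SLOTS];
static volatile uint32_t trace_head;

/* Called from the code under suspicion: stamp a record and return immediately. */
static inline void trace(uint32_t id, uint32_t a, uint32_t b)
{
    uint32_t i = trace_head++;
    struct trace_rec *r = &trace_buf[i % TRACE_SLOTS];
    r->id = id;
    r->a = a;
    r->b = b;
}

/* Some other, non-critical path dumps trace_buf later: a debugger, a crash dump, or a slow logger. */
```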
Something I haven't seen discussed here that is another type of debugging that can be very useful is historical / offline debugging.
Kind of a hybrid of logging and standard debugging. "everything" is logged and you can go spelunk.
For example:
https://rr-project.org/
Print debugging is historical / offline debugging, just ad-hoc instead of systemic.
The ”debug” package on npm is something in between, as it requires inserting debug statements but they are hidden from output unless an envvar like DEBUG=scope.subscope.*,otherscope is used.
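Roughly what that looks like in code (namespace names made up):

```js
// Statements are silent unless the namespace is enabled via the DEBUG env var,
// e.g. DEBUG=scope:subscope node app.js  or  DEBUG=scope:*,otherscope node app.js
const debug = require('debug')('scope:subscope');

function connect(host, user) {
  debug('connecting to %s as %s', host, user);
}
```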
I've loved working with rr! Unfortunately the most recent project I've been contributing to breaks it (honestly it might just be Ubuntu, as it works on my arch install, but doesn't work when deployed where I need to test it).
It isn't either/or. Good programmers know how to use both and know how to choose the appropriate tool for the job.
Printing is never the appropriate tool. You can make your debugger print something when that line of code is reached anyway, and automatically continue if you want. So what's the point of printf? It's just less information and fewer features.
Cases where I absolutely need to do these things to solve a bug: 0%
Maybe 0.1%
> Some would’ve also heard about time travel debuggers (TTD) which let you step back in time. But most languages do not have a mature TTD implementation. So I am not writing about that.
Shame as that's likely the only option with significant universal UX advantage vs. sprinkling prints...
things I can do with print statements but not a debugger: trace the flow of several values across a program, seeing their values at several different times and execution points in a single screen.
Maybe someone can give me an idea how I can debug this particular Rust app, which is extremely annoying. It's one of Rustdesk's.
It won't run if I compile with debug info. I think it's due to a 3rd party proprietary library. So, to run the app I have to use release profile, with debug info stripped.
So, when I fire up gdb, I can't see any function information or anything, and it has so many system calls it's really difficult to follow through blindly.
So, what is the best way to handle this?
I'd investigate why it won't run with debug info in the first place. That feels like the core problem here, because it prevents you from using some debug tools.
Of course that may require digging down pretty low, which is difficult in itself.
Edit: also there's split-debuginfo, which puts debug info in a separate file. It could help if the reason you can't run it is the debug info itself. Which feels unlikely, but :shrug:.
You can add debug info to release builds. In Cargo.toml:
https://doc.rust-lang.org/cargo/reference/profiles.html#debu...
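Per that Cargo profiles page, the relevant bit is roughly:

```toml
[profile.release]
debug = true                   # keep debug info in release builds
# split-debuginfo = "packed"   # optionally move it into a separate file, as mentioned upthread
```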
Two of the benefits listed (call stack and catching exceptions at the source) are available in logging as well. A good logging framework lets you add the method name, source file, and line number to the logging call; after a few debugging sessions you will construct the call stack quite easily. And C# at least lets you print the exception call stack from where it was thrown.
I agree that adhoc dynamic expression evaluation at run time is very useful and can only be done in a debugger.
Don’t tell Primeagen. Although he’s right about debugging sprawling systems in Prod. I’d argue the stateful architecture of these apps is the root cause.
I have counter-points to several of these... But this one is my favorite (This didn't go very far, but I loved the idea of it...):
I once wrote a program that opened up all of my code, and at every single code curly brace, it added a macro call, and a guid.
I had to avoid doing that inside other macros, or inside Struct or Class definitions, enums, etc. But it wasn't hard, and it was a pretty sizeable codebase.
The DEBUGVIKINGCODER macro, or whatever I called it, was a no-op in release. But in Debug or testing builds, it would do something like the following (using the right macros to append __LINE__ to the variable, so there's no collisions): the constructor for DebugVikingCoder used a thread-local variable to write to a file (named after the thread id), and it would write, essentially, an entry record for that GUID.
The destructor, when that scope was exited, would write an exit record to the same file. So when I'd run the program, I'd get a directory full of files, one per thread. Then I wrote another program that would read those all up, and would also read the code, and learn the File Name, Line Number of every GUID...
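Roughly the shape of it (a sketch of the idea, not the original code):

```cpp
// RAII tracer dropped at the top of each scope: writes entry/exit plus a GUID
// to a per-thread file (never closed; it's a debug-build hack).
#include <cstdio>
#include <sstream>
#include <thread>

struct ScopeTrace {
    const char* guid;
    explicit ScopeTrace(const char* g) : guid(g) { log('>'); }
    ~ScopeTrace() { log('<'); }
    void log(char dir) {
        static thread_local std::FILE* f = [] {
            std::ostringstream name;
            name << "trace_" << std::this_thread::get_id() << ".log";
            return std::fopen(name.str().c_str(), "w");
        }();
        if (f) std::fprintf(f, "%c %s\n", dir, guid);
    }
};

#define SCOPE_CAT2(a, b) a##b
#define SCOPE_CAT(a, b) SCOPE_CAT2(a, b)
#ifdef NDEBUG
#define DEBUGVIKINGCODER(guid)  // no-op in release
#else
#define DEBUGVIKINGCODER(guid) ScopeTrace SCOPE_CAT(scopeTrace_, __LINE__)(guid)
#endif
```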
And, in Visual Studio, this tool program would print to the Output window, the File Name and Line Number, of every call and return.
And, in Visual Studio, you can step forward AND BACK in this Output window, and if you format it correctly, it'll open the file at that point, too.
So I could step forwards and backwards, through the code, to see who called where, etc. I could search in this Output window to jump to the function call I was looking for, and then walk backwards...
Then I added some code that would compare one run to another, and argued we could use that to figure out which of our automated tests formed a "basis set" to execute all of our code...
And to recommend which automated tests we should run, based on past analysis.
In addition to being able to time calls to functions, of course.
So then I added printing out some variables... And printing out lines in the middle of functions, when I wanted to time a section...
And if people respected the GUIDs, making a new one when they forked code, and leaving it alone if they moved code, we could have tracked how unit tests and other automation changed over time.
That got me really wishing that every new call scope really did have a GUID, in all the code we write... And I wished that it was essentially hidden from the developers, because who wants to see that? But, wow, it'd be nice if it was there.
I know there are debuggers that can go backwards and forwards in time... But I feel like being able to compare runs, over weeks and months, as the code is changing, is an under-appreciated objective.
Looks like you invented "tracing", but since you added a hook at every curly bracket, it would be much more detailed than average tracing.
And slower, of course; they are not free.
Looks like you invented telemetry
This is refreshing. I get triggered by people writing "I don't use a debugger because I'm too smart to need one".
Some other things I'd add:
Some debuggers allow you to add actions. For example logging at the breakpoint is great if I can't modify the source, plus there's nothing to revert afterward. This just scratches the surface. Some debuggers allow you to see entire GPU workloads, view textures etc.
Debuggers are extremely useful for exploring and helping edit code. I can't be the only person that sprinkles breakpoints during development which helps me visualise code flow and quickly jump between source locations.
They're not just for debugging.
"you can’t use them when your application is running on remote environments"
This isn't always the case. Maybe it's really hard in a lot of cases, but it's not always impossible.
I read it as dealing with applications you only get shell access to and can't forward ports.
Didn't expect this to blow up, and now I realize it's a bit of a flamebait topic, haha.
Honestly, I feel like the print vs. debugger debate isn't about the tool, it's about the mindset. Print statements feel like you're just trying to patch a leak, while the debugger is about understanding the plumbing. I’m starting to think relying only on print is a symptom of not truly wanting to understand the system you're working in.
https://lemire.me/blog/2016/06/21/i-do-not-use-a-debugger/
A bit of counterpoint here
Interesting POV. I see it exactly the opposite: using a debugger most of the time feels like trying to see the current state of things without understanding what set of inputs led to it. Print debugging feels more like trying to understand the actual program logic that got us to this point, based on a few choice clues.
I’m not saying you’re wrong or I’m right, just that we have diametric opposite opinions on this.
Call stacks and reading code give very different views of the codebase. The debugger tells you what's happening, reading tells you what can happen in many situations at once. You can generalize or focus, respectively, but their strengths and weaknesses remain.
Readable code, though, is written with the reading view in mind.
What’s a good debugger for bash?
I think the obvious benefit of a debugger is the ability to introspect when you have the misfortune of investigating the behavior of a binary rather than source code. In the vast, vast majority of other instances, it is more desirable (to me) to encode evidence of investigation in the source itself. This has all the other benefits of source code—you can persist it, share it, let AI play with it, fork it, commit it to source control, use git bisect, etc.
There are a few other instances where the interaction offers notable benefits—bugs in the compiler, debugging assembly, access to registers, a half-completed runtime or standard library that occludes access to state you might otherwise print. If you have the misfortune of working with C or C++, you have the benefit of breaking on memory access—but I tend to file this in the "half-completed runtime" category. There are also a few "heisenbugs" where using print itself may actually prevent the bug from occurring; I've only run into this, I think, twice. The same is possible with the debugger, but I've only run into that once. The only way out of that mess is careful reasoning, and I recommend printing the code out and using a pen.
I also strongly suspect that preference for print debugging vs interactive debuggers comes down to internal conception of the runtime and aesthetic preference. I abhor debuggers—especially those in IDEs. I think they tend to reimplement the runtime of a language a second time, except with more bugs and a less intuitive interface. But I have the wherewithal to realize that this is ultimately a preference.
And that's why I never learned Elixir, despite it being an interesting language with an awesome web framework, Phoenix.
The fact that there is no debugger is super unfortunate.
Don't show the discussion to John Carmack. He's baffled why people are so allergic to debuggers: https://youtu.be/tzr7hRXcwkw?si=beXGdoePRkbgfTtL
I'm pretty sure at some point in that interview he realized it's because the debugger experience for developers using Linux sucks compared to Windows, where he does most of his work.
A lot of programmers work in a Linux environment.
It seems like Windows, the IDE, and the languages are all pretty nicely integrated together?
I am surprised all the time in this industry how many software engineers still debug with printf. It's entirely baffling how senior / staff folks in FAANG can get there without this essential skill.
I think it would be interesting to view this from a different angle. Perhaps "Lots of people who know of debuggers still use printf debugging, maybe they're not all wrong and there are advantages that aren't so clear."
Good print statements can become future logging entries for when software ships and debugging statements need to be turned on without source code access.
I'm so used to bouncing between environments my code's running in (and which project I'm working on) that I tend to just assume I don't have debugger access, or at least don't have it configured for that environment, even when I do. Like I'm just in the habit of not reaching for it because so often it's not actually there. It rarely matters much anyway (though when it does, yeah, it really does).
“All these senior/staff FAANG folks are using a different tool than the one I regard as essential.”
There are a couple of ways to resolve this conundrum, and you seem to be locked on the less likely one.
What if… that weren’t an essential skill?
Imagine I posted the bell curve meme with "print debugging" on both ends.
I'm surprised that you can get that far without seeing value in print debugging.
I don't really get the hate that debuggers sometimes get from old hands. "Who needs screwdrivers if we always used knives?" - You can still use your knife, but a screwdriver is a useful tool.
It seems to me that this is one of the many phenomena where people want to judge and belittle their peers over something completely trivial.
Personally, I get the appeal of printing out debugging information, especially if some bug is rare and happens at unpredictable times (such as when you are sleeping). But the amount of info you get this way is necessarily lower than what can be gleaned from a debugger.
Print debugging is checking the patient's vital signs, eye color, blood pressure, skin inflammation, and so on. Using a debugger, however, is like putting the patient through an MRI machine. It can provide very advanced diagnostic information, but it's expensive, time-consuming, and requires specialized hardware and education. Like medical doctors, it's easier and more logical to use the basics until absolutely necessary.
Meh. None of these sway me. I'm a die hard printf() debugger and always will be. But I do use debuggers regularly, for circumstances where printf() isn't quite up to the task. And there really are only two such categories (neither of which appear in the linked article!):
1. Code where the granularity of state change is smaller than a function call. Sometimes you actually have to step through things one instruction at a time, and I'm lucky enough to have such problems to solve. You can't debug your assembly with printf(), basically[1a].
2. State changes that can't be easily isolated. Sometimes you want to log when something changes but can't for the life of you figure out when it's changing. Debuggers have watchpoints.
But... that's really it. If I'm not hitting one of those I'm not reaching for the debugger. Logging is just faster, because you type it in right at the code you're already reading.
[1a] Though there's a caveat: sometimes you need to write assembly and don't even have anything like a printk. Bootstrap code for a new device is a blast. You just try stuff like writing one byte to a UART address or setting one GPIO pin as the first instructions and hope it works, then use that one bit of output to pull the rest up.
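The sort of first instruction I mean, sketched in C (the register address is made up; every board is different):

```c
/* Hypothetical memory-mapped UART transmit register for some board. */
#define UART0_TX (*(volatile unsigned char *)0x09000000)

/* First sign of life: if 'A' shows up on the serial console, early boot code is running. */
void early_hello(void)
{
    UART0_TX = 'A';
}
```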
Assuming you meant C's printf, why would you subject yourself to the pain of recompilation every time you need to look at a different part of code? Isn't the debugger easier than adding printf and then recompiling?
Do you use snippets or something to help speed this up? Manually typing `printf("longvarname=%s secondvarname=%d\n", longvarname, secondvarname);` adds up over a debugging session, compared to a graphical debugger setup with well-chosen breakpoints, watches etc.
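One trick that helps a bit when I do end up typing these (just a sketch): let the preprocessor stringize the names so each one is only typed once.

```c
#include <stdio.h>

/* #x stringizes the argument, so the variable name only gets typed once. */
#define DBG_INT(x) printf(#x "=%d ", (int)(x))
#define DBG_STR(x) printf(#x "=%s ", (x))

/* Usage: DBG_STR(longvarname); DBG_INT(secondvarname); putchar('\n'); */
```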
This is a solid answer.