The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection.
What actually prevents bugs at scale is boring stuff: type systems, invariant checks, automated property testing, and code reviews that focus on contracts instead of line-by-line navel gazing. Reading code can help, sure, but treating it as some kind of superpower is survivorship bias.
> type systems, invariant checks
Yes, something strange happens in large systems: it's often better to assume they work the way they're supposed to than to keep accommodating how they (currently) work in reality.
It's common in industry for (often very productive) people to claim that "the code is the source of truth" and just make things work as bluntly as possible, sprinkling in special cases and workarounds as needed. For smaller systems that might even be the right way to go about it.
For larger systems, there will always be bugs, and the only way for the number of bugs to tend toward zero is for everyone to share the same set of strong assumptions about how the system is supposed to behave. Continuously depending on those assumptions, and building more and more on top of them, will eventually reveal the most consequential bugs, and fixing them will be more straightforward. Once they are fixed, everything that assumed the correct behavior is also fixed.
In large systems, it is worse to build something that works but depends on broken behavior than to build something that doesn't work but depends on correct behavior. In the second case you have basically added an invariant check by building a feature. It's a virtuous process.
This comment is a nugget of gold - I hadn't thought about it in those terms before but it makes total sense. Thank you!
One thing worth pointing out here is that when reading, you rarely find the exact bug you set out to find, but you'll undoubtedly notice others, because you're reading the code with a mindset of "what awkward conditions failed to be handled appropriately such that xyz could happen?"
It is also valuable to form a hypothesis of how you think the code works, and then measure in the debugger how it actually works. Once you understand how these differ, it can be helpful in restructuring the code so its structure better reflects its behavior.
Time spent reading code is almost never fruitless.
In my experience there's one approach which might not necessarily prevent bugs, but helps to reduce their number, and does not require much effort. I try to use it whenever possible.
1. Code defensively, but don't spend too much time on handling error conditions. Abort as early as possible, keep enough information to locate the error later, and log relevant data. For example, just put `Objects.requireNonNull` on public arguments that must not be null. If they're null, an exception is thrown, which aborts the current operation, and the stack trace includes enough information to pinpoint the bug and fix it later (a rough sketch follows below the list).
2. Monitor for these messages and act accordingly. My rule of thumb: zero stack traces in the logs. A stack trace is a sign of a bug and should be handled one way or another.
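For what it's worth, here's a minimal Java sketch of that fail-fast style (the class and method names are made up for illustration): the null checks abort the operation immediately, and the boundary logs the stack trace so it shows up in monitoring, per the zero-stack-traces rule.

```java
import java.util.Objects;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative names only; the pattern is the point, not the domain.
public class InvoiceService {
    private static final Logger LOG = Logger.getLogger(InvoiceService.class.getName());

    public void applyDiscount(String invoiceId, DiscountCode code) {
        // Abort early: a null argument is a caller bug, not a condition to work around.
        Objects.requireNonNull(invoiceId, "invoiceId must not be null");
        Objects.requireNonNull(code, "code must not be null");
        // ... actual work ...
    }

    // At the boundary that owns the operation, log the failure with enough context
    // to locate the bug later, then let the current operation abort.
    public void handleRequest(String invoiceId, DiscountCode code) {
        try {
            applyDiscount(invoiceId, code);
        } catch (RuntimeException e) {
            LOG.log(Level.SEVERE, "applyDiscount failed for invoiceId=" + invoiceId, e);
            throw e; // rule of thumb: every stack trace that lands in the logs is a bug to fix
        }
    }
}

// Hypothetical value type, included only so the sketch is self-contained.
record DiscountCode(String value) {}
```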
With bug prevention, it's important to stay reasonable: there's only so much time in the world, and business people usually don't want to pay 10x to eliminate 50% of bugs. Handling theoretical error conditions also adds to the complexity of the codebase and might actually hurt its maintainability.
The jumping off point given in the lede of the post—<https://www.teamten.com/lawrence/programming/dont-write-bugs...>—ends with this:
>If you want a single piece of advice to reduce your bug count, it’s this: Re-read your code frequently. After writing a few lines of code (3 to 6 lines, a short block within a function), re-read them. That habit will save you more time than any other simple change you can make.
So, more focused on a ground-up, de novo thing as opposed to inheriting or joining a large project. Different models of "code" and different strokes for different folks, I guess, but the big takeaway I like from that initial piece is:
>I spent the next two years keeping a log of my bugs, both compile-time errors and run-time errors, and modified my coding to avoid the common ones.
It was a different era, but I feel like the act of manually recording specific bugs probably helps ingrain them better and helps you avoid them in the future. Tooling has come a long way, so maybe it's less relevant, but it's not a bad thing to think about.
In the end, a lot of learning isn't learning per se, but rather learning where the issues are going to be, so you know when to be careful or check something out.
I've also noticed that a strong type system and things like immutability have worked tremendously well for minimizing the number of bugs. They can't necessarily help with business rules, but a compiler can definitely clear out all the "stupid" bugs.
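As a small Java sketch of what that can look like (the `Money` type is hypothetical, purely for illustration): an immutable record with a validating constructor turns a whole class of "stupid" bugs into compile-time errors or immediate construction failures.

```java
// A sketch of using types and immutability to rule out "stupid" bugs
// (names are illustrative, not from any particular codebase).
public record Money(long cents, String currency) {
    public Money {
        // Invalid values can never be constructed, so downstream code
        // never has to re-check them.
        if (cents < 0) {
            throw new IllegalArgumentException("cents must be >= 0: " + cents);
        }
        if (currency == null || currency.length() != 3) {
            throw new IllegalArgumentException("currency must be a 3-letter code: " + currency);
        }
    }

    public Money plus(Money other) {
        // The compiler already guarantees `other` is a Money, not a raw long
        // that might be dollars, cents, or something else entirely.
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch: " + currency + " vs " + other.currency);
        }
        return new Money(cents + other.cents, currency);
    }
}
```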
Dealing with a 15-year-old legacy codebase with strong types: awful but manageable. Without: not a chance.
> The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection.
Yes, yes, why bother reading your code at all? After all, eventually 15 years will pass whether you do anything or not!
I think if you read it while it's 500 lines, you'll see a way to make it 400 lines. Maybe 100. Maybe shorter. As this happens you get more and more confident that those 50 lines are in fact correct, that they do everything the 500 lines you started with did, and you'll stop touching them.
Then you've got only 1.5 million lines of code after 15 years, and it's all code that works: code you don't have to touch. Isn't that great?
Comparing that to the 15 million lines of code that don't work, that nobody read, just helps make the case for reading.
> What actually prevents bugs at scale is boring stuff: type systems, invariant checks, automated property testing, and code reviews that focus on contracts instead of line-by-line navel gazing.
Nonsense. The most widely-deployed software with the lowest bug-count is written in C. Type systems could not have done anything to improve that.
> sure, but treating it as some kind of superpower is survivorship bias
That's the kind of bias we want: Programs that run for 15 years without changing are the ones we think are probably correct. Programs that run for 15 years with lots of people trying to poke at them are ones we can have even more confidence in.
| > What actually prevents bugs at scale is boring stuff: type systems...
| Nonsense. The most widely-deployed software with the lowest bug-count is written in C. Type systems could not have done anything to improve that.
C is statically and fairly strongly typed. Hard to tell if you're arguing for or against the statement you're responding to.
You can definitely use this approach for large projects. No matter how big, at some point you are just reading a function or file. You don't need to read every single file to find bugs.
This can be combined with a more strategic approach like: https://mitchellh.com/writing/contributing-to-complex-projec...
> What actually prevents bugs at scale is boring stuff:
There is a layer above this: to really understand what the requirements are and to check whether they are delivered. You can have perfect code that does nothing of consequence. It's the equivalent of `this function is not used by anything`, but at a more macro level.
But of course, the problem is deciphering the code, and that's where what you say helps a ton.
I've identified serious bugs that were lurking in large legacy codebases for years with this approach. Whenever I read code I always try to find the gaps between "what was the author(s) trying to do?" and "what does this code actually do?" There's often quite a lot of space between those two, and almost always that space is where you find the bugs.
In Writing Solid Code[0], Steve Maguire recommends stepping through every line of code, in a symbolic debugger.
Sounds crazy, but I usually end up doing that, anyway, as I work.
Another tip that has helped me is to add documentation to inline code after it's written (I generally add some, but not much, inline as I write it; most of my initial documentation is headerdoc). The process of reading the code helps cement its functionality into my head, and I also find bugs, just like he mentions.
[0] https://writingsolidcode.com/
> In Writing Solid Code[0], Steve Maguire recommends stepping through every line of code, in a symbolic debugger.
> Sounds crazy, but I usually end up doing that, anyway, as I work.
This doesn't sound crazy to me. On the contrary, it sounds crazy not to do it.
How many bugs do we come across where we ask rhetorically, "Did this ever work?" It becomes obvious that the programmer never ran the code, otherwise the bug would have exhibited immediately.
True, dat.
Writing Solid Code is over 30 years old, and has techniques that are still completely relevant, today (some have become industry standard).
Reading that, was a watershed in my career.
When I think about stepping through the code in a debugger, I think about how hard it is to even run all the code, for the projects I've had the most trouble on.
1. One was soft real-time, and stepping through the code in a debugger would first mean having a robust way to play back data input on a simulated time tick. Doing it on live sensor data would mean the code saw nonsense.
2. One requires a background service to run as root. Attaching a debugger to that is surely possible but not any fun.
3. Attaching a debugger to an Android app is certainly possible, but I have never practiced it, and I'm not sure if it can be done "in the field" - Suppose I have a bug that only replicates under certain conditions that are hard to simulate.
You're not wrong. But maybe bugs build up when programmers don't want to admit that we aren't really able to run our code, and managers don't want to give us time to actually run it.
I also do this quite a lot but pair it with an automated test to repeatedly trigger the breakpoint with different values and round out the tests and code accordingly.
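Something like this, assuming JUnit 5 as the test framework (the rounding example is made up): a parameterized test re-enters the same code path for each value, so a breakpoint set inside the code under test is hit once per case when the test is run under the debugger, and the cases can be rounded out as new values are explored.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Sketch only: set a breakpoint in the code under test, run this in the IDE's
// debugger, and the breakpoint fires once per parameter set.
class RoundingTest {

    @ParameterizedTest
    @CsvSource({
        "0.0, 0",
        "0.49, 0",
        "0.5, 1",
        "1.49, 1",
        "-0.5, 0"  // boundary case worth stepping through in the debugger
    })
    void roundsHalfUp(double input, long expected) {
        assertEquals(expected, Math.round(input));
    }
}
```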
That sounds like an excellent practice!
I agree with the idea of not making bugs in the first place. Overall I think this piece is great and includes good suggestions. However, personally I think the best weapon against writing bugs is making them impossible in the first place, a la "making invalid states unrepresentable".
Interestingly, there's a post from the past day arguing that making invalid states unrepresentable is harmful[0], which I don't think I agree with. My experience is that bugs hide in the crevices created by leaving invalid states representable, and are often caused by the increased cognitive load of not having small reasoning scopes. In terms of reading code to find bugs, having fewer valid states and fewer intersections of valid state makes this easier. With well-defined and constrained interfaces you can reason about more code, because you need to keep fewer facts in your head.
electric_muse's point in a sibling comment ("The whole “just read the code carefully and you’ll find bugs” thing works fine on a 500-line rope implementation. Try that on a million-line distributed system with 15 years of history and a dozen half-baked abstractions layered on top of each other. You won’t build a neat mental model, you’ll get lost in indirection.") is a good case study in this too. Poorly scoped state boundaries make that kind of reasoning hard; here too, making invalid states unrepresentable and constraining interfaces helps.
0: https://news.ycombinator.com/item?id=45164444
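For the record, here's one sketch of what "making invalid states unrepresentable" can look like in Java with sealed types (hypothetical order domain; assumes a recent JDK with pattern matching for switch): the "shipped but no tracking number" combination simply cannot be constructed, and the compiler forces every state to be handled.

```java
// Instead of one class with nullable trackingNumber and cancellationReason
// fields that only make sense in certain states, each state carries exactly
// the data that is valid for it.
sealed interface OrderState permits Pending, Shipped, Cancelled {}

record Pending() implements OrderState {}
record Shipped(String trackingNumber) implements OrderState {}
record Cancelled(String reason) implements OrderState {}

class OrderStatusFormatter {
    // Exhaustive switch over the sealed hierarchy: adding a new state
    // becomes a compile error here until it is handled.
    String describe(OrderState state) {
        return switch (state) {
            case Pending p -> "not yet shipped";
            case Shipped s -> "shipped, tracking " + s.trackingNumber();
            case Cancelled c -> "cancelled: " + c.reason();
        };
    }
}
```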
I once found a bug in code that was read to me over the phone while I sat in an airport waiting for a flight. So I agree that constructing a model of the program in your head is the key, and you can use various interfaces for that. Some are more optimal than others. When I first started learning to write programs we very often debugged from printed listings for example. They rolled up nicely but random access was very slow.
I keep reading articles about how you need to fit mental models of code in your head, with analogies to spatial maps. This is not how your brain processes code. You have a spatial center mapping 3D objects to what are literally mini 3D models encoded in neurons. You can grasp some(!) code with this if the structure is similar to what you code yourself, but most of the code in larger bases is a ruleset, like your country's tax code: it will only fit in your language-processing center and needs a lot of working memory.
Now, some people might be able to fit more than Miller's number (7±2) there and juggle concepts with 20 interconnected entities, but that's mostly people who have this as their main work / business logic.
These articles mix up same-form dimensional mapping, like audio or visual, with distinct data. It's similar to why it's easy to replicate audio and images, but not olfaction / smell: your nose picks up millions of different molecules, and each receptor locks onto a certain one.
Thinking you can find general rules here is exactly why LLMs seem to work but can never be inductive: they map similarities in higher-dimensional space, not reasoning. And the same mix-up happens here: you map this code to a space that feels like home to you, but it will not apply to reading software for another purpose outside your field, a different process pipeline, language, or form.
If your assumption were correct, all humans would need to train on is reading assembly, and then magically all bugs would resolve!
Maybe, if you want to understand code with both hemispheres, map it to a graph; but trying to make strategies from spatial recognition work for code is like trying to make sense of your traffic laws by the length of their paragraphs.
This article really resonated with me. I've been trying to teach this way of thinking to juniors, but with mixed results. They tend to just whack at their code until it stops crashing, while I can often spot logic errors in a minute of reading. I don't think it's that hard, just a different mindset.
There's a well-known quote: "Make the program so simple, there are obviously no errors. Or make it so complicated, there are no obvious errors." A large application may not be considered "simple" but we can minimize errors by making it a sequence of small bug-free commits, each one so simple that there are obviously no errors. I first learned this as "micro-commits", but others call it "stacked diffs" or similar.
I think that's a really crucial part of this "read the code carefully" idea: it works best if the code is made readable first. Small readable diffs. Small self-contained subsystems. Because obviously a million-line pile of spaghetti does not lend itself to "read carefully".
Type systems certainly help, but there is no silver bullet. In this context, I think of type systems a bit like AI: they can improve productivity, but they should not be used as a crutch to avoid reading, reasoning, and building a mental model of the code.
Isn't this basically what a debugger gives you? You say "follow the control flow" and "track state," but those are exactly what I do when stepping through code with invariants and watchpoints. The only real difference I see is that reading doesn't require a reproducible example, while debugging does. Otherwise, the habits seem nearly identical.
> The only real difference I see is that reading doesn't require a reproducible example, while debugging does.
You can manipulate values in a debugger to make it go down any code path you like.
Yeah, exactly. Although sometimes, if you don't have a repro, you may want to understand more of the code in the way the article shows to (at least 1-2 go to definitions), as you'd need to know what to change the values to.
I think I'm having a hard time understanding the value of this piece. Is carefully reading the code you're writing/working on not the default? How on Earth do you write code without understanding what it does?
Why is "public static void ..." written in Cyrillic here? I guess this might be a joke?
The first programming language I learned was Java, and for us non-native speakers who didn't know English very well at that point, "public static void" did indeed sound like a magic spell. It was behind both an understanding barrier and a language barrier.
When I first saw Java, I had already seen multiple dialects of BASIC, plus Turing (a Pascal dialect), HyperTalk (the scripting language of HyperCard, and predecessor of AppleScript), J (an APL derivative), C and C++. I'm also a native speaker of English.
Your perception is still warranted. It was clear enough to me what all of that meant, but I was well aware that static is an awkward, highly overloaded term, and I already had the sense that all this boilerplate is a negative.
To try to convey to native English speakers how the usual boilerplate feels like arbitrary magical incantations to novice programmers (and non-native English speakers, I guess).
Does this person also identify performance issues by reading the code? This is completely impractical.
You totally can identify performance issues by reading code. E.g. spotting accidentally-quadratic, or failing to reserve vectors, or accidental copies in C++. Or in more amateur code (not mine!) using strings to do things that can be done without them (e.g. rounding numbers; yes people do that).
It's a lot easier and better to use profiling in general, but that doesn't mean I never read code and think "hmm, that's going to be slow".
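A concrete instance of the kind of thing you can catch in a read-through (illustrative Java, not from any real codebase): string concatenation in a loop is accidentally quadratic, and the linear fix is visible from the same reading.

```java
import java.util.List;

// Illustrative only: the accidentally-quadratic pattern and its linear fix.
class JoinExamples {
    // Each += copies everything accumulated so far, so n lines cost O(n^2) in total.
    static String slowJoin(List<String> lines) {
        String result = "";
        for (String line : lines) {
            result += line + "\n";
        }
        return result;
    }

    // Same behavior, amortized O(n): StringBuilder appends in place.
    static String fastJoin(List<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(line).append('\n');
        }
        return sb.toString();
    }
}
```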
Ok, I'll bite. How do you identify that a performance uplift in one part of the code will kill the performance of the overall app? Or won't have any observable effect?
I'm not saying you can't spot naive performance pitfalls. But how do you spot cache misses by reading the code?
For example if someone uses a linked list where a vector would have worked. Vectors are much faster, partly due to better spatial locality.
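A rough Java analogue of that point (the comment above is about C++, but the indexing and locality argument carries over): indexed access on a linked list walks node by node, so an innocent-looking loop becomes quadratic, while the array-backed list stays linear and keeps its references contiguous.

```java
import java.util.ArrayList;
import java.util.LinkedList;

// Rough Java analogue of the linked-list-vs-vector point.
class SumExamples {
    // list.get(i) on a LinkedList walks from the nearest end each time,
    // so this loop is O(n^2), and every hop chases a pointer to another node.
    static long sumByIndex(LinkedList<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }

    // ArrayList is backed by a contiguous array: get-by-index is O(1),
    // so the same loop is O(n) and far friendlier to the cache.
    static long sumByIndex(ArrayList<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }
}
```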
Ok (that's a naive performance problem), and you speed that up, but now a shared resource is used mutably more often, leading to frequent locking and more overall pauses. How would you read that from your code?
Practitioners of this approach to performance optimization often waste huge swaths of their colleagues' time and attention with pointless arguments about theoretical performance optimizations. It's much better to have a measurement first policy. "Hmm that might be slow" is a good signal that you should measure how fast it is, and nothing more.
Once your code is optimized so that manual mental/notepad execution is fast enough, it will crush it on any modern processor.
> Does this person also identify performance issues by reading the code? This is completely impractical.
This sounds like every technical job interview.
Extremely funny post.
The author doesn't grasp how much of what they've written amounts to flexing their own outlier intelligence; they must sincerely believe the average programmer is capable of juggling a complex 500-line program in their head.
I do what the author does all the time, every day. But then, I work mostly on my own, and I've spent decades learning how to structure my code so as to minimize the amount that has to be "live" in my head at any given moment, and so that I can quickly rebuild that mental model on re-reading.