I tried to finish this quiz but just can’t. Every question I got was a very big, “it depends on context…”
“Do you prefer strong static or dynamic or a mix?” Well… are we teaching 9th graders an intro to coding, writing a quick script to answer a bespoke data question, or writing a data processing library?
“On algorithms I focus on…” Okay, well… do we care about performance? Where is it running? How often will it run? Will the code be disposed of soon or live a decade? Do we need it working today or next week?
I just don’t understand how to even begin formulating an opinion on any of these questions without any context.
To use the compass analogy: shouldn’t you want to know how best to use a compass? What value is there in saying, “my favourite bearing is east-northeast”? That is, the substance in any of this is the “it depends…” portion. Any answers to this quiz are really just a proxy for the kinds of contexts people are solving problems in.
Avoiding questions just because it depends on context might be a valuable way to signal to other people that you're careful and considerate, and that definitely has value.
But quizzes like this are explicitly designed to be contextless. You're supposed to answer with your gut feeling, the first step in the random walk.
Someone who actually depends heavily on context and doesn't have a strong preference either way will answer quickly, and end up near the center on all dimensions of interest - that's where I landed:
""" You value clarity and directness in code. You prefer explicit, step-by-step solutions that are easy to understand and debug, even if they require more lines of code. Abstract ↔ Concrete: -2 Concrete Human ↔ Computer Friendly: +5 Human-Friendly """
Experienced engineers will have a gut feeling of ‘it depends’, and if no extra context is given, the question is pointless and any answers are useless.
It's not useless at all. We expect people whose answer to everything is "it depends" to answer more or less at random, when they don't have enough information to say otherwise.
That is an extremely telltale signature. Put another way, if your assertion is right, then a simple 5 minute quiz like this should be enough to rule out people who are claiming to be senior, but who actually arrive at extreme answers. A 5 minute quiz like that would be worth tens of millions in improving hiring practices.
So either we're all leaving millions on the floor by not building a company around this, stat, or your assertion is just wrong. There can indeed be senior engineers who are nonetheless very principled even in low-context situations.
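(For the curious, here's a rough sketch of that "random answers land near the center" signature. The per-option weights are made up for illustration; they are not the quiz's actual scoring.)

    # Toy simulation: a respondent with no real preference picks options at
    # random. Each option nudges the score by a small amount; these weights
    # are invented for the sketch, not taken from the quiz.
    import random

    WEIGHTS = [-2, -1, 1, 2]   # hypothetical per-option score deltas
    NUM_QUESTIONS = 20

    def random_total() -> int:
        return sum(random.choice(WEIGHTS) for _ in range(NUM_QUESTIONS))

    totals = [random_total() for _ in range(10_000)]
    near_center = sum(abs(t) <= 5 for t in totals) / len(totals)
    print(f"random takers within +/-5 of center: {near_center:.0%}")

With weights like these, roughly half of the purely random takers land within a handful of points of the center, while anyone with a consistent leaning drifts well away from it.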
It’s an easy 5 min screen at most, and it’s common knowledge anyway, which you can easily fake by answering ‘it depends’ and asking basically random questions. Good enough additional context turns the quiz into a systems design interview, which obviously serves your purpose.
> But quizzes like this are explicitly designed to be contextless. You're supposed to answer with your gut feeling, the first step in the random walk [...]
> Someone who actually depends heavily on context and doesn't have a strong preference either way will answer quickly, and end up near the center on all dimensions of interest [...]
Which is basically all that this test is good for. If you're not somewhere around the center, you're either a junior dev or not a very good dev.
I have the same issue. For many of the questions my answer is "all of the above, but A in context A, B in context B, etc.". Many are also not mutually exclusive.
Take this example: "When debugging I typically:"
> Write tests to isolate the problem
In the case of math functions, or more primitive building blocks, writing tests can help ensure correctness of the underlying functions, to exclude them from the problem search.
> Reason about the code logically first
This is always useful.
> Use a debugger to step through code systematically
Useful when dealing with a larger codebase where the control flow is hard to follow. The call stack can give quicker guidance than trying to manually decipher the control flow.
> Add print statements to understand data flow
Useful when debugging continuous data streams or events, e.g. mouse input, where you don't want to interrupt the user interaction that needs to be debugged.
Is the fact that the answer isn’t easy a feature or bug?
Oh a bug, surely. How is the quiz supposed to give you insights when it's unanswerable?
By forcing you to make a decision without context.
Similar to how when presented with the trolley problem some will ask many follow up questions about the individuals on each track, the train, etc.
That’s not the point.
> By forcing you to make a decision without context.
Not the OP, but what would be the point of that? In any practical scenario there is always context, isn't there? I guess I don't quite get what we are trying to measure here.
I think it’s a bug in getting a useful outcome, but it’s a feature in creating engagement with the post.
I took it as “when working on the kinds of things I prefer to work on, using my preferred tools.”
For example, I prefer a mix of static and dynamic typing. Even for performance optimization where technically I do all four of the options, trying to write performant code from the start is what I prefer to do when possible.
This isn’t about the right tool for the job as much as what kinds of tools you prefer to work with when given the choice.
yes, the mutually exclusive answers (radio buttons) are a quiz smell. they could have used a mix of radio buttons and check boxes, per question, as appropriate, i.e. check boxes for questions where more than one answer is applicable, and radio buttons for the rest.
And that I can see being an interesting survey. It becomes more about “what kinds of problem-solving tools do you enjoy using?”
I could easily answer that. I love solving the kinds of problems that call for strong static typing and careful specifications and unit testing. I also love the opportunity to “whip up a script” in what feels like a hacker speedrun.
I absolutely adore teaching newcomers intro to programming. Holy crap the glimmer in their eyes when they grok what this opens them up to… I skip past all the “computer science” and jump into making a small game and sharing it with friends via web. Or even just Autohotkey to show them how they can become a hacker of their own computer habits.
The "it depends" instinct is actually the hallmark of an experienced engineer - context-awareness is precisely what separates dogmatic programming from effective problem-solving.
I had that feeling about several questions. The one that stood out to me was
"""
When debugging, I typically:
* Write tests to isolate the problem
* Reason about the code logically first
* Add print statements to understand data flow
* Use a debugger to step through code systematically
"""
and I typically do all 4 of those things. If I don't understand the dataflow yet, I'll start with either print statements or the debugger to understand that. If it's code where I already understand the dataflow, and I can reason about the code already, I'll do that. Otherwise I might first write tests to help me reason about the code. But I generally do all of these things and the order depends on my specific problem.
pick what you would choose by default. in case of multiple options, pick the first option you would try.
it is not a DSA test, but more like an intuition test
I’m that way with personality tests haha, but this one I screwed down:
Abstract ↔ Concrete: -13 Concrete
Human ↔ Computer Friendly: +7 Human-Friendly
That's why I hate (and never use in exams) multiple choice questions.
"it depends on context" is a personality trait
I landed right in the middle: -1, -2. Which seems weird because I’m very opinionated about a lot of this stuff. I like a lot of the questions, but a lot of my answers felt like I was arbitrarily picking something. That’s probably why.
Eg for testing, do I want “whatever finds bugs most effectively” or “property based testing”? Well, property based testing is usually the most time efficient way to find bugs. So, yes both of those. Debugging: do I use print statements, or a debugger, or logically think it through? Yes all of those. But if I arbitrarily said I use a debugger in a multiple choice test, I don’t think that tells you much about how I code!
I do - controversially - think some of the answers are naming bad practices. Like abstraction-first is a bad idea, since you know the least about your problem before you start programming it up. Abstraction-first inevitably bakes whatever bad assumptions you walked in the door with into your code. Better to code something - anything - up and use what you learned in the process to iterate on your software architecture decisions. I also generally hate modern OO, and these days I prefer static types over dynamic types.
But yeah. Interesting questions! Thanks for putting this together.
Same. I am dead center but this did not really give me any hard questions. For example I controversially believe that user applications do not benefit from unit testing and that manual testing is both faster and superior in quality. Similarly, I believe that for most situations Python’s optional type system is a waste of time and mental load. It is not what catches bugs.
I think both are appropriate for well-scoped library code. But application code is just not well defined enough in most circumstances to get any benefit from it. But this quiz didn’t ask that and I suspect this would swing the score quite strongly.
Same. I got dead centre, even though I feel like I have strong biases, and rarely agree with my coworkers on design and style choices.
Maybe your preferences are so contradictory that they cancel each other out :)
I got very close to centre also, just slightly on the "concrete" and "human friendly" sides. But who wouldn't want to be concrete or human-friendly?
I likewise got very close to the centre, and was surprised.
If you had shown me the diagram only, and asked me to position myself on it I would have placed myself on the middle of the perimeter of the second quadrant (135 degrees along the circumference), to indicate that I strongly prefer human friendly and concrete over computer friendly and abstract respectively.
And even as I was answering the questions I felt that I was leaning heavily towards that, with answers like starting simple, documenting well and so on.
I think some of the pull in the opposite direction comes down to interpretation as well.
And actually I see in the repo for the quiz there is a JSON file that contains scores for each question that one could have a look at to see if the answers are scored the same way that you think they would be.
For people who haven’t done the quiz yet, don’t look at the json file until after taking the quiz.
https://github.com/treeform/devcompas/blob/master/questions....
Also, the ranges of possible values are not equal in each direction so the resulting compass is biased a bit in favour of abstract and human friendly over concrete and machine friendly respectively.
abstract: min=-25, max=38
human: min=-27, max=33
Which means that the circle diagram showing the result can give a bit of a wrong impression imo.
Edit to add: In a frequency plot you can also see specifically how the possible score additions and subtractions are a bit unevenly distributed.
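For the ranges above, a minimal sketch of the check against the questions JSON (the field names here are guesses about the file's structure, so adjust them to the real thing):

    # Sketch only: assumes the repo's questions file is saved locally as
    # questions.json, that each question has an "answers" list, and that each
    # answer carries numeric "abstract" and "human" deltas. Adjust the keys
    # to whatever the actual JSON uses.
    import json

    with open("questions.json") as f:
        questions = json.load(f)

    for axis in ("abstract", "human"):
        lo = sum(min(a[axis] for a in q["answers"]) for q in questions)
        hi = sum(max(a[axis] for a in q["answers"]) for q in questions)
        print(f"{axis}: min={lo}, max={hi}")

The minimum is the sum of each question's most negative option and the maximum the sum of the most positive, which is where the asymmetry comes from.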
Abstract frequencies / Human frequencies: [frequency plots not reproduced]
I forget who said it, but "I don't truly understand a program until the 6th time I've written it."
That's such a good quote. I can't find it anywhere, so I'll attribute it to you.
I can't find it, but I believe Joe Armstrong said something along those lines (though I think his number was ten).
Sounds like something Chuck Moore would have said. I have no idea if he did, but it made me think of him.
Ok. But then I took it 4 more times. I tried to maximize in any direction, and always stayed within the bullseye center-right. One time I was more computer, but I never made it out of the center 20% radius.
Maybe the message is that none of us are extremists, because we care? I like it
+1 abstract and 0 Neutral.
I thought the imperative vs object-oriented question was strange, since they are the same thing.
Mmm I don’t think I agree. The way I structure code in C or rust is subtly different than how I’d write the same program in Java. OO python or Ruby looks different from data oriented Python or Ruby.
They’re all imperative programs though. “OO vs Imperative” isn’t the right name for that design choice.
The way I was taught "imperative" encompasses both "object-oriented" and "procedural", much like "declarative", the opposite of imperative, captures both functional and logic.
It’s in the word. “Imperative” means something like “urgent demand to action”. Imperative code is code where each line is a command for the computer to do something. Like functions in C - which are made of a list of statements that get executed immediately when they get visited.
C++ and Java are imperative languages because functions are expressed as a list of imperative statements. But there’s nothing inherently imperative about structs and classes. Or any of the concepts of OO. You could have encapsulation, inheritance and polymorphism in a functional language just fine if you wanted to. Haskell fits the bill already - well, depending on your definition of OO.
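A toy Python sketch of that separation (Python is itself an imperative host language, so this only illustrates the dispatch idea, not Haskell): the “OO” part, one operation over several representations, expressed with plain data and a lookup table rather than classes. The shape encoding is invented for the example.

    # Polymorphic dispatch without classes: plain dicts for the data and a
    # table of functions for the behaviour. Made-up example, not from the quiz.
    import math
    from typing import Callable

    area_of: dict[str, Callable[[dict], float]] = {
        "circle": lambda s: math.pi * s["radius"] ** 2,
        "rect": lambda s: s["width"] * s["height"],
    }

    def area(shape: dict) -> float:
        return area_of[shape["kind"]](shape)

    shapes = [{"kind": "circle", "radius": 1.0},
              {"kind": "rect", "width": 2.0, "height": 3.0}]
    print([round(area(s), 2) for s in shapes])  # [3.14, 6.0]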
Yes! The number of lousy articles and blog posts I've seen that talk about "imperative, oo and functional programming"...
I suspect this test gives central values to many test-takers because the internal consistency of the questions has not been verified.
Anyone can come up with ten questions and claim "these measure a programmer's fondness for abstraction". Unfortunately, we cannot verify this claim. What we can do is check statistically if the answers to the ten questions correlate strongly enough that we think they measure the same thing, whatever that is.
This involves a trial run of the quiz followed by statistics such as Cronbach's alpha and principal component analysis. Often one finds that the questions do not measure the same thing at all! It is difficult to come up with a set of internally consistent questions.
When we add together inconsistent questions (as is done here to get the final score), we don't get stronger signal: the noise from the uncorrelated questions cancels out and almost everyone gets a result near zero.
I know OP is not the author, but if the author reads this: if you're lucky, the data from this audience may reveal a subset of two or three questions per dimension that are internally consistent. Use only those in the quiz going forward! Then you can trial more questions one at a time and keep only those that are consistent with the first set. (Is this not p-hacking? Yes, but in this context who cares.)
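For anyone who wants to try it on collected responses, a minimal sketch of the alpha check, assuming the answers have already been coded as numeric scores for one dimension (one row per respondent, one column per question):

    # Minimal internal-consistency check (Cronbach's alpha).
    # `responses` is assumed to be shaped (n_respondents, n_items), with each
    # item already coded as a numeric score on the dimension being tested.
    import numpy as np

    def cronbach_alpha(responses: np.ndarray) -> float:
        responses = np.asarray(responses, dtype=float)
        k = responses.shape[1]                      # number of items
        item_vars = responses.var(axis=0, ddof=1)   # per-item variance
        total_var = responses.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Fake data: 100 respondents, 10 "abstraction" items answered at random.
    rng = np.random.default_rng(0)
    fake = rng.integers(-2, 3, size=(100, 10))
    print(round(cronbach_alpha(fake), 2))  # hovers near 0 for uncorrelated items

A common rule of thumb is to want alpha somewhere above 0.7 before treating a set of items as one scale.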
> I suspect this test gives central values to many test-takers because the internal consistency of the questions has not been verified.
Do you prefer to use:
1. Capital letters
2. Lower-case letters
3. Numbers
4. The characters required for the text I am writing to be correct and legible
I came here to write a similar comment. To me it read more like: When driving, do you prefer to a) turn left, b) turn right, c) press the gas pedal, d) press the brake pedal, or e) execute a sequence of actions that get you safely to your destination? There's a good idea here, perhaps, but it's obscured by very weak questions.
The graph also seems broken, because I got +2 abstract, +10 human-friendly, but when plotted, the +10 appeared near the axis anyway.
The graph would be better plotted with the radius representing standard deviation or some other measure rather than the raw score.
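For example, a minimal rescaling sketch (min/max rather than standard deviation, reusing the per-axis ranges another commenter pulled out of the questions file), so 0 stays at the center and each extreme maps to -1/+1:

    # One possible rescaling before plotting: keep 0 at the center and map each
    # axis's actual extremes to -1/+1. The ranges are the ones reported from
    # the questions JSON elsewhere in this thread.
    RANGES = {"abstract": (-25, 38), "human": (-27, 33)}

    def normalized(axis: str, score: float) -> float:
        lo, hi = RANGES[axis]
        return score / hi if score >= 0 else score / -lo

    print(round(normalized("abstract", 2), 2), round(normalized("human", 10), 2))
    # -> 0.05 0.3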
This quiz is a good reminder of why I support ranked choice voting.
With an algorithm that can make the result worse according to your preferences if you vote compared to if you did not, or with some other algorithm?
You prefer elegant, high-level solutions that are intuitive and accessible to other developers. You likely favor functional programming, clear abstractions, and code that reads like prose.
Abstract ↔ Concrete: +7 Abstract
Human ↔ Computer Friendly: +11 Human-Friendly
Spot on, I'd say; code is the best documentation unless I'm writing some bespoke mathematical algorithms, and even then I try to offset it by using clear variable/function names.
These appear to be the possible programming philosophy descriptions (found by doing a keyword search in main.js)
Abstract/Human-Friendly: "You prefer elegant, high-level solutions that are intuitive and accessible to other developers. You likely favor functional programming, clear abstractions, and code that reads like prose."
Abstract/Computer-Friendly: "You appreciate mathematical elegance and optimal solutions. You probably enjoy languages with powerful type systems, formal methods, and code that leverages compiler optimizations."
Concrete/Human-Friendly: "You value clarity and directness in code. You prefer explicit, step-by-step solutions that are easy to understand and debug, even if they require more lines of code."
Concrete/Computer-Friendly: "You focus on efficiency and performance. You like to work close to the metal, optimize for speed and memory usage, and prefer direct control over system resources."
This string is also present in the source code, but it doesn't appear possible to trigger its display: "You have a balanced approach to programming, adapting your style based on the specific requirements of each situation."
Yeah I like the idea but the questions need serious work. I only got through a few before giving up because they're ambiguous or I want to say yes to multiple things. E.g. the question about comments - they're basically all true. Or the question about architecture - it's basically impossible to answer.
This isn't a good quiz. An example question (and there are many similar ones):
> When refactoring code, I prioritize:
> - Reducing complexity and coupling
> - Improving readability and maintainability
> - Optimizing performance and resource usage
> - Extracting reusable abstractions
Each refactoring has some goal, some driver behind it. It could be slow performance, an unmaintainable mess, high coupling, too much duplication, etc... Choosing a single answer makes no sense from a programming point of view. And this is the case for most questions I have seen so far on the site.
EDIT: After finishing and seeing the result, I think I understand a little better why it was structured like this. If you are open to doing things differently, your answers probably won't weigh in any one direction in aggregate. But if you have certain biases, you might lean towards choosing similar answers, and that shows up in the end.
I finished it anyway:
Your Programming Philosophy
You value clarity and directness in code. You prefer explicit, step-by-step solutions that are easy to understand and debug, even if they require more lines of code.
Abstract ↔ Concrete: 0 Neutral
Human ↔ Computer Friendly: +6 Human-Friendly
The compass is almost in the middle, just a little up from center towards human-friendly. That's fine, since most code you write is for other humans to read; the compiler writes for the machine, and only in critical, perf-sensitive paths do you write computer-first... The rest was mostly neutral because, as I wrote in the parent, it depends on the situation and it can go either way depending on the project.
For making user interfaces, I believe in what is the most relevant depending on context. It could be a CLI, a physical red button, among many other things including the proposed set.
Another question had an option about optimizing for collaboration, and I think, apart from purely personal stuff that you write like private poetry, that should always be criterion number one.
GPT-5’s results:
Your Programming Philosophy
You prefer elegant, high-level solutions that are intuitive and accessible to other developers. You likely favor functional programming, clear abstractions, and code that reads like prose.
Abstract ↔ Concrete: +3 Abstract
Human ↔ Computer Friendly: +11 Human-Friendly
Interesting. Deepseek R1:
You prefer elegant, high-level solutions that are intuitive and accessible to other developers. You likely favor functional programming, clear abstractions, and code that reads like prose.
Abstract ↔ Concrete: +4 Abstract
Human ↔ Computer Friendly: +9 Human-Friendly
This was fun!
Also, I'd recommend NOT telling the test taker which dimensions they're getting scored on as it will affect the responses. For example, if you gave me a test telling me that you're gonna score me on Introversion-Extraversion and Neuroticism-Emotional Stability, then I may be more biased to answer questions to score me as an emotionally stable introvert since that's what I identify as.
Oh, and Abstract ↔ Concrete: 0 Neutral | Human ↔ Computer Friendly: +11 Human-Friendly
Yes, it's so easy to tell which direction each question will push the result, that it kinda distracts from thinking about the question
-11 concrete, +14 human friendly lol.
> "You value clarity and directness in code. You prefer explicit, step-by-step solutions that are easy to understand and debug, even if they require more lines of code"
Sounds about right.
More concrete/less human friendly, but same message. And I couldn't have summed up my own philosophy towards code better, to be honest.
Almost exactly the same score :)
Could someone help me understand the difference between:
[the two answer options being compared are not preserved here]
The second seems to be a re-statement of the first? Profiling is measuring, and the "critical path" is the bottleneck.
This question in particular could use an answer about like "use an up-front design that will not prevent subsequent performance work", so it is sad that it has two similar answers.
But also, it's not really reasonable to map all these opinions onto two axes
I was also thinking between these two. I still see a difference: I guess the "critical path" is single, bottlenecks are multiple.
You focus on efficiency and performance. You like to work close to the metal, optimize for speed and memory usage, and prefer direct control over system resources.
Seems about right, although I only won "computer friendly" by +1, and I suspect that's because I think "computer friendly" and "human friendly" are very close to the same thing. Code that's short and simple is as easy for computers to execute as it is for humans to read and understand. Although +10 on concrete shows just how much I hate (usually superfluous) abstraction.
> You focus on efficiency and performance. You like to work close to the metal, optimize for speed and memory usage, and prefer direct control over system resources.
well, it's not wrong, but I feel like a lot of these questions are "too normative" in their framing. There is some stuff that I do because of the nature of my work, some other stuff that other people should be enabled to do because of the nature of their work, and some other stuff that really sucks that nobody should be doing and is nonetheless quite popular.
You prefer doing the right thing over the wrong thing, unless the wrong thing is correct in this context!
In a vaguely related vein, here’s a D&D-style developer alignment quiz: https://www.sallery.co.uk/lessons/quiz
Concrete and human-readable here, which is exactly what I expected to get coming from IT (with short job cycles, high turnover, comparatively low wages) where my guiding principle is not being a dick to the next person by making sure they understand why I did things a given way, and where time to learn new things is very much a “thrown in the fire” type scenario (e.g., learning Asterisk while your global support line is down and your contractor holds non-regional business hours for support).
I got bored after the 3rd or 4th question. It's like they were all tricks.
That's not how I saw it. They're very global, but relevant stances, although very much focused on beginning a new, small to medium sized project.
I ended up slightly Hitchcock (North by North West) of the center (-3, +12). It's true I'm rather practical, but other people in here call themselves opinionated and end up near the center as well. So perhaps not tricks, but rather too broad/shallow for the measure it takes? Or perhaps the mapping from question to score is too simple.
This was fun, and the conclusion was pretty good (slight preference for human-friendly and concrete).
At first I was frustrated that the answers were very much “it depends”, but then I decided that (a) this is low stakes, (b) just pick the closest one as if someone held a gun to your head. End result was fine.
I would love to see how this overlaps with folks' preferred languages, frameworks, and tools.
Abstract ↔ Concrete: -3 Concrete
Human ↔ Computer Friendly: +21 Human-Friendly
Pretty much what I expected. Probably also depends on what kind of code you write. I assume somebody who writes kernel drivers would lean more towards computer friendly.
Seeing Treeform here immediately made me think of Nim, and lo and behold, that's what the JavaScript is generated from, cool!
Thanks! It's all Nim, all the time.
You focus on efficiency and performance. You like to work close to the metal, optimize for speed and memory usage, and prefer direct control over system resources.
We write code for machines. For humans we write human-readable media and formats, like documents, diagrams, specifications, etc...
Ended up smack in the middle. Seems wrong...
You prefer elegant, high-level solutions that are intuitive and accessible to other developers. You likely favor functional programming, clear abstractions, and code that reads like prose.
Abstract ↔ Concrete: +4 Abstract
Human ↔ Computer Friendly: +7 Human-Friendly
I like "code that reads like prose" :-)
I'm not even sure what the options for the first question mean...
You value clarity and directness in code. You prefer explicit, step-by-step solutions that are easy to understand and debug, even if they require more lines of code.
A philosophical quiz like this needs more philosophical questions like:
How much tech debt is acceptable?
Do you have moral obligation to any code you produce?
What are your side projects?
If your code results in a death, who is responsible?
If the author is reading
> pefixes and suffixes
the first word should probably be "prefixes"
I ended right in the center:
> You value clarity and directness in code. You prefer explicit, step-by-step solutions that are easy to understand and debug, even if they require more lines of code.
Abstract ↔ Concrete: -2 Concrete
Human ↔ Computer Friendly: -5 Computer-Friendly
It does not display the quiz. I found the JavaScript file, which is compiled from Nim, and I found the source code, and the questions.
For many questions, the answer depends on the specific use, and/or will be something other than what is listed there. (For example, debugging will involve all of the four things that are listed there.)
I am also not so sure that the quiz describes the programming philosophy very well, but this is a general feature of these kind of quiz anyways.
I seem to disagree with many modern programmers about programming philosophy, but I seem to have more agreement with some people who do things in the older ways (although not completely).
I'm not sure these are independent variables. I mostly care about concrete code or abstractions as a way to make the code more human friendly.
And isn't being human friendly while building performant enough solutions the whole point of code? If we didn't need the humans, we'd do machine code.
I got -7 and +10.
Ooh, fun, a FizzBuzzFeed quiz!
I got outer bull. 25. Is that good?
IMO not a very useful quiz.
For most questions, my answer would have been "all of the above" or "it depends on the context".
-1 concrete, +16 human friendly.
Feels about right.
Cool. I feel right in the center. Where's my job offer?
+17 human-friendly, I guess I don't care what computers think lol.
0 concrete +1 human centered. Just put the code in the bag I guess.
Bad. It is annoying to do a 20-question quiz. The author is a bad developer.