Debouncing refers to cleaning up the signal from an opening and closing switch contact so that the cleaned signal matches the intended semantics of the switch action (e.g. one simple press of a button, not fifty pulses).
The analogy here is poor; reducing thrashing in those obnoxious search completion interfaces isn't like debouncing.
Sure, if we ignore everything about it that is not like debouncing, and we still have something left after that, then whatever is left is like debouncing.
One important difference is that if you have unlimited amounts of low latency and processing power, you can do a full search for each keystroke, filter it down to half a dozen results and display the completions. In other words, the more power you have, the less important it is to do any "debouncing".
Switch debouncing is not like this. The faster your processor is at sampling the switch, the more bounces it sees, and consequently the more crap it has to clean up. Debouncing certainly does not go away with a faster microcontroller.
It's the term used in frontend dev. It is actually a little worse than you're imagining, because we're not sampling, we're receiving callbacks (so more analogous to interrupts than sampling in a loop). Eg the oninput callback. I've used it for implementing auto save without making a localStorage call on every key press, for example.
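A minimal sketch of that auto-save pattern (the 500 ms delay, the "draft" key, and the `debounce` helper are illustrative, not from any particular library):

```javascript
// Trailing-edge debounce: run fn only after `ms` of silence.
function debounce(fn, ms) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), ms);
  };
}

// Hypothetical auto-save: write the draft once per pause in typing,
// instead of on every single keystroke.
const saveDraft = debounce((text) => {
  localStorage.setItem("draft", text); // one write per pause, not per key
}, 500);

// In a browser you would wire this to the input's oninput callback:
// textarea.addEventListener("input", (e) => saveDraft(e.target.value));
```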
I think it makes sense if you view it from a control theory perspective rather than an embedded perspective. The mechanics of the UI (be that a physical button or text input) create a noisy, flapping signal. Naively updating the UI on that signal would create jank. So we apply some hysteresis to obtain a clean signal. In the same way that acting 50 times on a single button press is incorrect behavior, saving (or searching, or what have you) 50 times while typing a single sentence isn't correct (or is at least undesired).
The example of 10ms is way too low though, anything less than 250ms seems needlessly aggressive to me. 250ms is still going to feel very snappy. I think if you're typing at 40-50wpm you'll probably have an interval of 100-150ms between characters, so 10ms is hardly debouncing anything.
Additionally, regardless of naming, debouncing is an accessibility feature for a surprisingly large portion of the population. Many users who grew up with double-click continue to attempt to provide this input on web forms, because it mostly works. Many more with motor control issues may struggle to issue just a single click reliably, especially on a touchscreen.
Holy moly, for years I've had in the back of my head this thought about why, earlier in my career, I'd see random doubly submitted form submissions on certain projects. Same form code and processing as other sites, legitimate submissions too. Eventually we added more spam filtering and restrictions unrelated to these legitimate ones, but it was probably the double-click users causing those errant submissions. I'd never even have thought of those users. Fascinating
Yes, it's something pretty much all UI frameworks end up implementing. The easiest way to do it is to simply disable the button at first click until the request is complete. This, of course, also prevents double submissions in cases the user doesn't get enough feedback and clicks again to make sure something actually happened.
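A sketch of that disable-until-done pattern; `singleFlight`, the button object, and the handler name are made up for illustration (in practice `button` is a DOM element and the handler does a fetch):

```javascript
// Guard a click handler so the button is disabled while the request is in
// flight, which makes double submissions impossible. `button` is any object
// with a `disabled` property (a real DOM button in practice).
function singleFlight(button, handler) {
  return async (...args) => {
    if (button.disabled) return;       // ignore clicks while pending
    button.disabled = true;
    try {
      return await handler(...args);   // e.g. await fetch("/submit", ...)
    } finally {
      button.disabled = false;         // re-enable on success or failure
    }
  };
}

// Usage in a browser (illustrative):
// submitBtn.addEventListener("click", singleFlight(submitBtn, submitForm));
```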
For the kind of behaviors they are describing it would. An extra 250ms waiting for an app to load is a lot, but for something like the described autosave behavior, waiting for a 250ms pause in typing before autosaving or making a fetch call is pretty snappy.
An office keyboard's own debouncing could delay a key press 30 ms, and then the OS, software and graphics/monitor hardware would delay it just as much before the user could see the character on screen. So, indeed, 10 ms is much too low.
The delay between key press and sound starts to become noticeable at around 10ms when you play an electronic (musical) keyboard instrument.
At 20-30ms or more, it starts to make playing unpleasant (but I guess for text input it's still reasonable).
50ms+ and it starts becoming unusable or extremely unpleasant, even for low expectations.
I'm not sure how much the perception of delay and the brain lag differs between audio and visual stimuli.
But that's just about the perceived snappiness for immediate interactions like characters appearing on screen.
For events that trigger some more complex visual reaction, I'd say everything below 25ms (or more, depending on context) feels almost instant.
Above 50ms you get into the territory where you have to think about optimistic feedback.
The point that most seem to miss here is that debouncing in FE is often about asynchronous and/or heavy work, e.g. fetching search suggestions or filtering a large, visible list.
Good UIs do a lot of work to provide immediate feedback while debouncing expensive work.
A typical example: when you type and your input becomes longer with the same prefix, comboboxes don't always need to fetch, they can filter if the result set was already smallish.
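One possible shape of that optimization, assuming simple substring matching where a longer query can only narrow the previous result set (`makeSuggester` and its parameters are hypothetical names, not a real API):

```javascript
// Reuse the last fetched result set when the new query merely extends the
// previous one, instead of fetching again. Only safe when the backend does
// substring matching, so extending the query cannot add new results.
function makeSuggester(fetchSuggestions, maxCacheable = 50) {
  let lastQuery = null;
  let lastResults = null;
  return async (query) => {
    if (
      lastQuery !== null &&
      query.startsWith(lastQuery) &&
      lastResults.length <= maxCacheable
    ) {
      // Narrow the cached set locally; no network round trip.
      lastResults = lastResults.filter((item) => item.includes(query));
    } else {
      lastResults = await fetchSuggestions(query);
    }
    lastQuery = query;
    return lastResults;
  };
}
```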
If your combobox is more complex and more like a real search (and adding characters might add new results), this makes no sense – except as an optimistic update.
_Not_ debouncing expensive work can lead to jank though.
Type-ahead with an offline list of 1000+ search results can already be enough, especially when the suggestions are not just rows of text.
No, correct debouncing of a hardware button should not add any delay to a single press. It's not wait-then-act, but rather act-then-wait-to-act-again. You're probably thinking of a polling interval (often exacerbated by having key switches wired in a matrix rather than one per pin).
I've programmed my own keyboards, mice and game controllers. If you want the fastest response time then you'd make debouncing be asymmetric: report press ("Make") on the first leading edge, and don't report release ("Break") until the signal has been stable for n ms after a trailing edge. That is the opposite of what's done in the blog article.
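The asymmetric policy described above could be sketched like this in JavaScript, treating the switch as a stream of (time, level) samples; on real firmware this would live in the scan loop in C, and `settleMs` is an illustrative parameter:

```javascript
// Asymmetric debounce as a state machine over (timeMs, level) samples,
// where level 1 = contact closed. Press ("Make") is reported on the first
// rising edge with no delay; release ("Break") is only reported once the
// line has stayed low for `settleMs` after a falling edge.
function makeDebouncer(settleMs) {
  let pressed = false;
  let lowSince = null; // time the line last went (and stayed) low
  return (timeMs, level) => {
    const events = [];
    if (level === 1) {
      lowSince = null; // a high blip restarts the release timer
      if (!pressed) {
        pressed = true;
        events.push({ timeMs, type: "make" }); // instant press
      }
    } else if (pressed) {
      if (lowSince === null) lowSince = timeMs;
      if (timeMs - lowSince >= settleMs) {
        pressed = false;
        events.push({ timeMs, type: "break" }); // debounced release
      }
    }
    return events;
  };
}
```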
Having a delay on the leading edge is for electrically noisy environments, such as among electric motors and a long wire from the switch to the MCU, where you could potentially get spurious signals that are not from a key press.
Debouncing could also be done in hardware without delay, if you have a three-pole switch and an electronic latch.
A better analogy would perhaps be "Event Compression": coalescing multiple consecutive events into one, used when producer and consumer are asynchronous.
Better but not perfect.
Debouncing is established terminology in UI and other event-handling stuff at this point, and has been for a decade. It's a bit too late to complain. Language evolves and not all new uses of words are good analogies.
Yeah. It is not too uncommon for terms to refer to how things were done in the past or in another context. For example, we still "dial" a number on our phone even though rotary phones are no longer used...for other examples see https://en.wikipedia.org/wiki/Misnomer#Older_name_retained
Debouncing is a term of art in UI development and has been for a long time. It is analogous to, but of course not exactly the same as, debouncing in electronics.
It's also worth mentioning that real debouncing doesn't always have to depend on time when you have an analog signal. Instead you could have different thresholds for going from state A to B vs. from B to A, with enough distance between those thresholds that you won't switch back and forth during an event. This can even be implemented physically in the switch itself by having separate ON and OFF contacts.
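That threshold scheme is essentially a Schmitt trigger, and it can be sketched in software too (the threshold values here are illustrative):

```javascript
// Hysteresis (Schmitt-trigger style): the state only flips to ON above
// `highThresh` and only back to OFF below `lowThresh`, so noise between
// the two thresholds can never toggle it back and forth.
function makeSchmitt(lowThresh, highThresh, initial = false) {
  let state = initial;
  return (sample) => {
    if (!state && sample >= highThresh) state = true;
    else if (state && sample <= lowThresh) state = false;
    return state;
  };
}
```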
> One important difference is that if you have unlimited amounts of low latency and processing power, you can do a full search for each keystroke,
But you don't want that, as it's useless. Until the user has actually finished typing, they're going to have more results than they can meaningfully use, especially since the majority will be irrelevant and just get in the way of the real results.
The signal in between is actually, really not useful - at least not on the first try, when the user is not yet aware of what's in the data source and how they can hack the search query to get their results with minimal input.
Be that as it may, the performance side of it becomes irrelevant. The UI responds to the user's keystrokes instantly, and when they type what they had intended to type, the search suggestions are there.
Switch debouncing does not become irrelevant with unlimited computing power.
No one wants to see results for the letter "a", no one wants their database processing that search, and updating the UI while you're typing can be really distracting.
It doesn't matter how fast you can read the results, you benefit from instant results as long as you can read them faster than you can complete typing.
Whatever delay you add before showing results doesn't get hidden by the display and user's reading latency, it adds to it.
"Instant," in the context of a user interface, is not zero seconds. It's more like 50ms to 1000ms (depending on the type of information being processed). If you want your user interface to feel snappy and responsive - then you don't want to process things as fast as the computer can, you want to process them in a way that feels instantaneous. If you get caught up processing every keystroke, the interface will feel sluggish.
> In electronics, I think we'd use a latch, so it switches high, and stays high despite input change.
RC circuits are more typical; you want to filter out high-frequency pulses (indicative of bouncing) and only keep the settled/steady-state signal. A latch would be too eager, I think.
It's a word borrowed for a similar concept. This is so common in software, it is basically the norm. There are hundreds of analogistic terms in software.
Thank you for this comment! Suddenly 'bouncing' makes total sense as a mental image when before it only vaguely tracked in some abstract way about tons of tiny events bouncing around and triggering things excitedly until you contain them with debounce() :-)
Come to think of it, throttle is the much easier-to-understand analogy.
Throttling is a different thing though. Debouncing is waiting until the input has stopped occurring so it can run on the final result, throttling is running immediately on the first input and blocking further input for a short duration.
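A minimal sketch of the two behaviors side by side (simplified: no cancel method, no trailing-edge option for the throttle):

```javascript
// Debounce waits for the input to stop before running once on the final
// event; throttle runs immediately and then ignores further calls for the
// cooldown window.
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

function throttle(fn, ms) {
  let blockedUntil = 0;
  return (...args) => {
    const now = Date.now();
    if (now < blockedUntil) return; // still in cooldown: drop the call
    blockedUntil = now + ms;
    fn(...args); // leading edge: runs right away
  };
}
```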
Actually I think it's pretty similar to your example. The "intended semantics" of the search action in that sort of field are to search for the text you enter - not for the side-effects of in-progress partial completion.
Yes, it's not an exact comparison (hence analogy) – but it's not anything worth getting into a fight about.
You debounce a physical switch because it makes contact multiple times before settling in the contacted position, e.g. you might wait until it's settled before acting, or you act upon the first signal and ignore the ones that quickly follow.
And that closely resembles the goal and even implementation of UI debouncing.
It also makes sense in a search box because there you have the distinction between intermediate vs. settled state. Do you act on what might be intermediate states, or do you try to assume settled state?
Just because it might have a more varied or more abstract meaning in another industry doesn't mean it's a bad analogy, even though Javascript is involved, sheesh.
The user intent is usually to get to what they are looking for as quickly as possible. If you intentionally introduce delays by forcing them to enter the complete query or pause to receive intermediate results then you are slowing that down.
> The user intent is usually to get to what they are looking for as quickly as possible.
Yes, and returning 30,000 results matching the "a" they just typed is not going to do that. "Getting the desired result fastest" probably requires somewhere between 2 and 10 characters, context-dependent.
Search is a bad example there, a better one would have been clicking a button to add an item to a list, or pressing a shortcut key to do so, where you want to only submit that item once even if someone frantically clicks on the button because they're feeling impatient.
No you should not filter user input like this. Keep user interfaces simple and predictable.
If it really only makes sense to perform the action once, then disable/remove the button on the first click. If it makes sense to click the button multiple times, then there should be no limit to how fast you can do that. It's really infuriating when crappy software drops user input because it's too slow to process one input before the next. There's a reason why input these days comes in events that are queued, and we aren't still checking if the key is up or down in a loop.
One thing to watch out for when using debounce/throttle is the poor interaction with async functions. Debounced/throttled async functions can easily lead to unexpected behavior because they typically return the last result they have when the function is called, which would be a previous Promise for an async function. You can get a result that appears to violate causality, because the result of the promise returned by the debounce/throttle will (in a typical implementation) be from a prior invocation that happened before your debounce/throttle call.
There are async-safe variants but the typical lodash-style implementations are not. If you want the semantics of "return a promise when the function is actually invoked and resolve it when the underlying async function resolves", you'll have to carefully vet if the implementation actually does that.
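A sketch of what such an async-safe variant could look like; `debounceAsync` is a made-up name, and real implementations differ on whether superseded callers should resolve, reject, or stay pending (here they all resolve with the result of the eventual invocation):

```javascript
// Async-aware debounce: every caller gets a promise tied to the invocation
// its call actually collapsed into, rather than a stale result from a
// previous run (the causality problem described above).
function debounceAsync(fn, ms) {
  let timer;
  let pending = []; // resolvers for calls collapsed into the next invocation
  return (...args) =>
    new Promise((resolve, reject) => {
      pending.push({ resolve, reject });
      clearTimeout(timer);
      timer = setTimeout(async () => {
        const batch = pending;
        pending = [];
        try {
          const result = await fn(...args); // runs once for the whole burst
          batch.forEach((p) => p.resolve(result));
        } catch (err) {
          batch.forEach((p) => p.reject(err));
        }
      }, ms);
    });
}
```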
Another thing to watch for is whether you actually need debouncing.
For example, debouncing is often recommended for handlers of the resize event, but, in most cases, it is not needed for handlers of observations coming from ResizeObserver.
I think this is the case for other modern APIs as well. I know that, for example, you don’t need debouncing for the relatively new scrollend event (it does the debouncing on its own).
That doesn't sound correct. An async function ought to return a _new_ Promise on each invocation, and each of those returned Promises is independent. Are you conflating memoization? Memoized functions will have these problems with debouncing, but not your standard async function.
Debouncing correctly is still super hard, even with rxjs.
There are always countless edge cases that behave incorrectly. That might not be important and can often be ignored, but while the general idea of debouncing sounds easy, and adding it to an rxjs observable is indeed straightforward, actually getting the desired behavior out of rxjs gets complicated super fast if you're required to be correct/spec-compliant.
this sounds interesting but it's a bit too early here for me. by any chance can we (not simply a royal we :D) ask you to provide a code example (of a correct implementation), or a link to one? many thanks!
I needed an implementation of debounce in Java recently and was surprised to find that there's no decent existing solution: nothing in the standard library, nor in popular utility libraries like Guava or Apache Commons. There are some implementations floating around, like on Stack Overflow, but I found them lacking: either no thread safety, or no flexibility in supporting execution of the task at the leading edge, the trailing edge, or both. Does anyone have a recommendation for a good implementation?
I've seen the term "request coalescing" used to refer to a technique to minimise the impact of cache stampedes. Protects your backend systems from huge spikes in traffic caused by a cache entry expiring.
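A minimal sketch of that technique: every concurrent caller for the same key awaits one shared in-flight promise (`makeCoalescer` is a hypothetical name; a real system would also handle cache population and error policy):

```javascript
// Request coalescing: concurrent requests for the same key share one
// in-flight promise, so a cache-miss stampede results in a single backend
// call instead of one per caller.
function makeCoalescer(fetcher) {
  const inFlight = new Map();
  return (key) => {
    if (inFlight.has(key)) return inFlight.get(key); // join existing request
    const p = Promise.resolve()
      .then(() => fetcher(key))
      .finally(() => inFlight.delete(key)); // next request starts fresh
    inFlight.set(key, p);
    return p;
  };
}
```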
Debounce: like when we throw a ball on the ground once, but it keeps bouncing, and we want to treat all those bounces as a single throw.
Human interaction with circuits, sensors and receptors works like that. When we press a keyboard key or flip a switch, the contacts are very sensitive: we feel we pressed once, but during that one press our fingers and hands vibrate slightly, so the event gets registered multiple times. An event after the first one only counts as a legitimate second event if the idle period between the two exceeds the desired debounce delay.
In web and software programming, and in network request handling, "debounce" is used as a term for pushing back against something aggressive.
An example, with a gate and a queue:
Throttling: the gate opens every 5 minutes and lets one person in, no matter what.
Debounce: if the people in the queue keep thrashing at the door to make it open, we push them away. Instead of 5 minutes, we tell them they now have to wait another 5 minutes because they are hammering on the door; if they try again before that, we again tell them to wait another 5 minutes.
Thus debounce is there to tame aggressive behavior.
In terms of a client-server request over the network: we can throttle requests processed by the server, say the server will only process one request per 5-minute window, the way APIs have rate limits; within that window, no matter how many requests are made, they are ignored.
But if the client is aggressive, e.g. the user keeps clicking the submit button and makes hundreds of requests, then even with throttling the server suffers a kind of DDoS.
So on the client side we add a debounce to the button's click event: even if the user keeps clicking impatiently, no unnecessary network request is made to the server until they stop.
> 250ms is still going to feel very snappy
WTF no it won't.
What value would you recommend?
Perhaps the people at MDN are 10x typists, with competition-grade gaming keyboards.
I agree that this is a bad analogy.
That's the same context (making a call to a phone number) with a different implementation.
> for a decade
Whoa!
> if you have unlimited amounts of low latency and processing power
And battery, or at least enough air conditioning to cool down the desktop because of those extraneous operations, right?
>No one wants to see results for the letter "a"
Don't make assumptions about what the user may or may not want to search for.
E.g. in my music collection I have albums from both !!! [1] and Ø [2]. I've encountered software that "helpfully" prevented me from searching for these artists, because the developers thought that surely no one would search for such terms.
_______
[1] https://www.discogs.com/artist/207714-!!! ← See? The HN link highlighter also thinks that URLs cannot end with !!!.
[2] https://www.discogs.com/artist/31887-Ø
No, you should definitely exercise good judgement in delivering a good UI to the user that doesn't lock up if they happen to type very quickly. But it is context dependent, and sometimes you will want to show them results for "a", sure. "No one" was rhetorical.
In your example, the developers have exercised poor judgment by making a brittle assumption about the data. That's bad. But there is no UX without some model of the user. Making assumptions about user's rate of perception is much safer (in a web app context, it would be a different story in a competitive esports game).
Let's see if surrounding that URL in the URL-surrounding character pair helps the HN linkifier:
<https://www.discogs.com/artist/207714-!!!>
Edit: It does. So, this would be yet another of the squillion-ish examples to support the advice "Please, for the love of god, always enclose your URLs in '<>'.". (And if you're writing a general-purpose URL linkifier, PLEASE just assume that everything between those characters IS part of the URL, rather than assuming you know better than the user.)
URLs can contain > too.
I don't believe that they can, not unencoded. Check out the grammar in the relevant RFC[0], as well as the discussion about URL-unsafe characters in the RFC that's updated by 3986 [1], from which I'll quote below.
> Characters can be unsafe for a number of reasons. ... The characters "<" and ">" are unsafe because they are used as the delimiters around URLs in free text
Also note the "APPENDIX" section on page 22 of RFC1738, which provides recommendations for embedding URLs in other contexts (such as in an essay, email, or internet forum post).
Do you have standards documents that disagree with these IETF ones?
If you're using the observed behavior of your browser's address bar as your proof that ">" is valid in a URL, do note that the URL
might appear to contain a space and the ">" character, but it is actually percent-encoded behind the scenes. Your web browser is pretty-printing it for you so it looks nicer and is easier to read.[0]
[0] <https://datatracker.ietf.org/doc/html/rfc3986#appendix-A>
[1] <https://datatracker.ietf.org/doc/html/rfc1738#section-2.2>
Your point about URL encoding defeats your other point about these characters being safely parsable as surrounding delimiters.
No?
URLs with the characters ' ' and '>' in them are not valid URLs. Perhaps your web browser does things differently than my Firefox and Chrome instances, but when I copy out that pretty-printed URL from the address bar and paste it, I get the following string:
Though -oddly-, while Chrome's pretty-printer does pretty-print %3E, it fails to pretty-print %20. Go figure.
I don't care if there are results for the letter "a", if they are instant.
Don't become unresponsive after one keystroke while searching for results. If the search impacts responsiveness, you need a hold-off time before kicking it off, so that a longer prefix/infix can be gathered, which will reduce the search space and improve its relevance.
As a user, I often do want a list to start from a single letter. In a browser address bar, it could start showing items Amazon, Apple, etc.
That is fine. Do you want it to flicker between keystrokes when you're still typing?
"Flicker" can mean a lot of things, I generally don't have a problem with the list changing while I type.
Consider that some people (the type to enable prefers-reduced-motion) find it very difficult to use a UI that updates too frequently.
Yes. Unless you are pecking at your keyboard your eyes are free to look at the results on the screen and stop typing once you get the result you want. The only thing that's needed is for the results to be stable, i.e. if the top result for "abc" also matches "abcd" then it should also be the top result for "abcd". Unfortunately many search/autocomplete implementations fail at this but that's still a problem even with "debouncing".
Are you really able to scan all the results in a few milliseconds?
Even the 10ms in TFA is too low. I personally wouldn't mind (or probably even notice) a delay of 100 ms.
It doesn't matter how fast you can read the results, you benefit from instant results as long as you can read them faster than you can complete typing.
Whatever delay you add before showing results doesn't get hidden by the display and user's reading latency, it adds to it.
"Instant," in the context of a user interface, is not zero seconds. It's more like 50ms to 1000ms (depending on the type of information being processed). If you want your user interface to feel snappy and responsive - then you don't want to process things as fast as the computer can, you want to process them in a way that feels instantaneous. If you get caught up processing every keystroke, the interface will feel sluggish.
In electronics, I think we'd use a latch, so it switches high, and stays high despite input change.
Doesn't really apply to a search box, where it's more of a delayed event fired only if no new event arrives during a specific time window, keeping only the last event.
> In electronics, I think we'd use a latch, so it switches high, and stays high despite input change.
RC circuits are more typical, you want to filter out high frequency pulses (indicative of bouncing) and only keep the settled/steady state signal. A latch would be too eager I think.
It's a word borrowed for a similar concept. This is so common in software, it is basically the norm. There are hundreds of analogistic terms in software.
Thank you for this comment! Suddenly 'bouncing' makes total sense as a mental image when before it only vaguely tracked in some abstract way about tons of tiny events bouncing around and triggering things excitedly until you contain them with debounce() :-)
Come to think of it throttle is the much easier to understand analogy.
Throttling is a different thing though. Debouncing is waiting until the input has stopped occurring so it can run on the final result, throttling is running immediately on the first input and blocking further input for a short duration.
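The distinction can be sketched in a few lines of plain JavaScript. These are minimal, hypothetical versions (real implementations like lodash's add leading/trailing options, cancellation, etc.):

```javascript
// Debounce: run fn only after `wait` ms have passed with no new calls.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer); // a new call resets the countdown
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// Throttle: run fn immediately, then ignore further calls for `wait` ms.
function throttle(fn, wait) {
  let blocked = false;
  return function (...args) {
    if (blocked) return; // still inside the cooldown window
    blocked = true;
    fn.apply(this, args);
    setTimeout(() => { blocked = false; }, wait);
  };
}
```

With debounce, typing a whole word fires the handler once after you pause; with throttle, the handler fires on the first keystroke and then at most once per interval.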
I like that you said obnoxious... it is assumed this behaviour is what people want, rather than just pressing a button or hitting enter when ready.
It’s still a top 10% analogy.
Actually I think it's pretty similar to your example. The "intended semantics" of the search action in that sort of field are to search for the text you enter – not to search for the side-effects of in-progress partial completion.
Yes, it's not an exact comparison (hence analogy) – but it's not anything worth getting into a fight about.
Yeah, I don't get how this thread is at the top.
You debounce a physical switch because it makes contact multiple times before settling in the contacted position, e.g. you might wait until it's settled before acting, or you act upon the first signal and ignore the ones that quickly follow.
And that closely resembles the goal and even implementation of UI debouncing.
It also makes sense in a search box because there you have the distinction between intermediate vs. settled state. Do you act on what might be intermediate states, or do you try to assume settled state?
Just because it might have a more varied or more abstract meaning in another industry doesn't mean it's a bad analogy, even though Javascript is involved, sheesh.
The user intent is usually to get to what they are looking for as quickly as possible. If you intentionally introduce delays by forcing them to enter the complete query or pause to receive intermediate results then you are slowing that down.
> The user intent is usually to get to what they are looking for as quickly as possible.
Yes, and returning 30,000 results matching the "a" they just typed is not going to do that. "Getting the desired result fastest" probably requires somewhere between 2 and 10 characters, context-dependent.
Search is a bad example there, a better one would have been clicking a button to add an item to a list, or pressing a shortcut key to do so, where you want to only submit that item once even if someone frantically clicks on the button because they're feeling impatient.
No you should not filter user input like this. Keep user interfaces simple and predictable.
If it really only makes sense to perform the action once, then disable/remove the button on the first click. If it makes sense to click the button multiple times, then there should be no limit to how fast you can do that. It's really infuriating when crappy software drops user input because it's too slow to process one input before the next. There is a reason why input these days comes in events that are queued, and we aren't still checking whether the key is up or down in a loop.
Removing the button from the DOM after click is maybe the worst advice I’ve ever heard for web UX
It's basically debouncing plus edge detection.
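A leading-edge sketch of that idea in plain JavaScript (a hypothetical helper, not any particular library's API): act on the first click, swallow the frantic repeats until the clicks stop.

```javascript
// Leading-edge debounce: fire on the first call, then ignore calls
// until `wait` ms pass without another one.
function leadingDebounce(fn, wait) {
  let timer = null;
  return function (...args) {
    if (timer === null) fn.apply(this, args); // first call fires immediately
    clearTimeout(timer);
    timer = setTimeout(() => { timer = null; }, wait); // re-arm after quiet period
  };
}
```

Attached to a submit button's click handler, this submits once on the first click no matter how impatiently the user hammers the button afterwards.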
One thing to watch out for when using debounce/throttle is the poor interaction with async functions. Debounced/throttled async functions can easily lead to unexpected behavior because they typically return the last result they have when the function is called, which would be a previous Promise for an async function. You can get a result that appears to violate causality, because the result of the promise returned by the debounce/throttle will (in a typical implementation) be from a prior invocation that happened before your debounce/throttle call.
There are async-safe variants but the typical lodash-style implementations are not. If you want the semantics of "return a promise when the function is actually invoked and resolve it when the underlying async function resolves", you'll have to carefully vet if the implementation actually does that.
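As a sketch of those semantics, here is a hypothetical `debounceAsync` helper (not the lodash API) in which every caller gets a fresh promise that settles with the result of the invocation its call was coalesced into:

```javascript
// debounceAsync: each call returns a new promise; all calls coalesced into
// one invocation settle with that invocation's result, never a stale one.
function debounceAsync(fn, wait) {
  let timer = null;
  let pending = []; // resolvers waiting for the next real invocation
  return function (...args) {
    clearTimeout(timer);
    return new Promise((resolve, reject) => {
      pending.push({ resolve, reject });
      timer = setTimeout(() => {
        const waiters = pending;
        pending = [];
        Promise.resolve(fn.apply(this, args)).then(
          (v) => waiters.forEach((w) => w.resolve(v)),
          (e) => waiters.forEach((w) => w.reject(e))
        );
      }, wait);
    });
  };
}
```

Note how the last call's arguments win, and every earlier caller observes that same result rather than a promise from a prior invocation.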
Another thing to watch for is whether you actually need debouncing.
For example, debouncing is often recommended for handlers of the resize event, but, in most cases, it is not needed for handlers of observations coming from ResizeObserver.
I think this is the case for other modern APIs as well. I know that, for example, you don’t need debouncing for the relatively new scrollend event (it does the debouncing on its own).
Sad that the `scrollend` event isn't supported by Safari and doesn't look to be a part of their release this fall either.
Yep. Apple is a weird organization. scrollend is in Chrome since May 2023 and in Firefox since January 2023.
That doesn't sound correct. An async function ought to return a _new_ Promise on each invocation, and each of those returned Promises is independent. Are you conflating this with memoization? Memoized functions will have these problems with debouncing, but not your standard async function.
If I understand it correctly, they're saying the debounce function itself usually implements memoization in a way that will return you stale promises.
Reactive programming (such as with RxJS [0]) can be a good solution to this, since they provide primitives that understand time-based dependencies.
[0]: https://rxjs.dev/api/index/function/switchMap
Debouncing correctly is still super hard, even with rxjs.
There are always countless edge cases that behave incorrectly - it might not be important and can be ignored, but while the general idea of debouncing sounds easy - and adding it to an rxjs observable is indeed straightforward...
Actually getting the desired behavior done via rxjs gets complicated super fast if you're required to be correct/spec compliant
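For context, the core guarantee `switchMap` gives you – only the latest call's result wins – can be sketched in plain JavaScript with a hypothetical `latestOnly` wrapper (this merely drops stale results; a real RxJS pipeline would also cancel the underlying request):

```javascript
// "Latest wins": results from calls that have been superseded by a
// newer call are discarded, similar in spirit to RxJS switchMap.
function latestOnly(fn) {
  let current = 0;
  return async function (...args) {
    const id = ++current;
    const result = await fn(...args);
    if (id !== current) return undefined; // a newer call superseded this one
    return result;
  };
}
```

This is the part that prevents an out-of-order response for "a" from overwriting the results for "abc".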
this sounds interesting but it's a bit too early here for me. by any chance can we (not simply a royal we :D) ask you to provide a code example (of a correct implementation), or a link to one? many thanks!
I would probably look at the lodash implementation (for JS/TS) as a complete implementation example.
https://github.com/lodash/lodash/blob/8a26eb42adb303f4adc7ef...
So - events / handlers may need to be tagged as "for human interaction"? (to avoid over-filtering signals?)
Damn, there was another thread not too long ago claiming that async does not mean concurrent - this would have been a great example to bring up.
Such stuff has first-class support in Kotlin: Structured concurrency simplifies multi-threaded programming pretty effectively.
I recently wrote a bit about debouncing fetch using timeouts and AbortController, including small demos, here: https://thomascountz.com/2025/07/02/debouncing-api-calls
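A minimal sketch of that combination (a hypothetical `debouncedFetcher` helper, not the post's actual code; `doFetch` is assumed to take `(url, { signal })` like the Fetch API, and superseded calls' promises simply never settle in this simplified version):

```javascript
// Debounced fetcher: delay the request by `wait` ms, and abort any
// still-in-flight request before starting the new one.
function debouncedFetcher(doFetch, wait) {
  let timer = null;
  let controller = null;
  return function (url) {
    clearTimeout(timer); // a newer call cancels the pending timer
    return new Promise((resolve, reject) => {
      timer = setTimeout(() => {
        if (controller) controller.abort(); // cancel the previous request
        controller = new AbortController();
        doFetch(url, { signal: controller.signal }).then(resolve, reject);
      }, wait);
    });
  };
}
```

The timeout handles the "wait until typing stops" half; the AbortController handles the "don't let an old response come back late" half.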
I needed an implementation of debounce in Java recently and was surprised to find that there's no existing decent solution: there's none in the standard library, nor in popular utility libraries like Guava or Apache Commons. There are some implementations floating around, like on Stack Overflow, but I found them lacking; either there's no thread safety, or there's no flexibility in supporting execution of the task at the leading edge, the trailing edge, or both. Does anyone have a recommendation for a good implementation?
I've always used the term Coalescing in the past: https://en.wikipedia.org/wiki/Coalescing_(computer_science)
I've seen the term "request coalescing" used to refer to a technique to minimise the impact of cache stampedes. Protects your backend systems from huge spikes in traffic caused by a cache entry expiring.
And I've used coalesce to describe Array.prototype.reduce and Object.assign as well.
I think it's a fitting analogy. Depends on the intended behaviour, really.
That said, this is a good resource on the original meaning: https://www.ganssle.com/debouncing.htm
what is so special about this that it is worth posting and garnering 100+ votes?
meta: it seems like this account just submits misc links to mdn?
https://en.wikipedia.org/wiki/Switch#Contact_bounce
Ooh thanks for the link, didn't know this was where this came from
Not sure if anyone else has noticed, but this has been a super popular interview question for front end interviews.
Debounce -> like when we throw a ball on the ground once, but it keeps bouncing. Debouncing prevents that.
Human interaction with circuits, sensors, and receptors works like that. When we press a keyboard key or flip a circuit switch, the contacts are very sensitive: we feel we pressed once, but during that one press our fingers and the contacts vibrate, so the event gets registered multiple times. After the first event, a subsequent event is only considered legitimate if the idle period between the two matches the desired debounce delay.
The term is used similarly in web and software programming, and in network request handling.
Example-wise, picture a gate and a queue:
Throttling -> the gate opens every 5 minutes and lets one person in, no matter what.
Debounce -> if the people in the queue keep thrashing at the door to force it open, we push them back and reset the clock: now they have to wait another full 5 minutes from their last attempt, and every new attempt before then resets the timer again. Thus debounce penalizes aggressive, repeated input.
In terms of client-server requests over the network: we can throttle requests at the server, say the server only processes a request every 5 minutes, the way APIs have rate limits; any requests made before the 5 minutes are up are ignored. But if the client is aggressive, e.g. keeps clicking the submit button and making hundreds of requests, even a throttled server would suffer a kind of DDoS. So on the client side we add debounce to the button's click event: even if the user keeps clicking impatiently, unnecessary network requests are not made to the server until the user stops.