18 comments

  • semiquaver 5 hours ago

      > Regolith attempts to be a drop-in replacement for RegExp and requires minimal (to no) changes to be used instead
    
    vs

      > Since Regolith uses Rust bindings to implement the Rust Regex library to achieve linear time worst case, this means that backreferences and look-around aren't available in Regolith either.
    
    Obviously it cannot be a drop-in replacement if the regex dialect differs. That it has a compatible API is not the only relevant factor. I’d recommend removing the top part from the readme.

    Another thought: since backreferences and lookaround are the features in JS regexes which _cause_ ReDOS, why not just wrap vanilla JS regex, rejecting patterns including them? Wouldn’t that achieve the same result in a simpler way?
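    A minimal sketch of that rejecting wrapper, assuming nothing about Regolith's internals (`SafeRegExp` and `rejectUnsafePattern` are hypothetical names, and the scanner is deliberately conservative - e.g. it treats any `\k` escape outside a character class as a named backreference):

```typescript
// Hypothetical sketch: reject patterns containing lookaround or
// backreferences before handing them to the native RegExp engine.
function rejectUnsafePattern(pattern: string): void {
  let inClass = false; // inside a [...] character class?
  for (let i = 0; i < pattern.length; i++) {
    const c = pattern[i];
    if (c === "\\") {
      // \1..\9 outside a class is a backreference; \k<name> is a
      // named backreference (conservatively rejected here).
      const next = pattern[i + 1] ?? "";
      if (!inClass && (/[1-9]/.test(next) || next === "k")) {
        throw new SyntaxError(`unsafe pattern: backreference at index ${i}`);
      }
      i++; // skip the escaped character
    } else if (c === "[") {
      inClass = true;
    } else if (c === "]") {
      inClass = false;
    } else if (!inClass && c === "(" && pattern[i + 1] === "?") {
      const tail = pattern.slice(i + 2, i + 4);
      // (?= and (?! are lookahead; (?<= and (?<! are lookbehind.
      // Named groups (?<name>...) are still allowed.
      if (tail[0] === "=" || tail[0] === "!" ||
          (tail[0] === "<" && (tail[1] === "=" || tail[1] === "!"))) {
        throw new SyntaxError(`unsafe pattern: lookaround at index ${i}`);
      }
    }
  }
}

class SafeRegExp extends RegExp {
  constructor(pattern: string, flags?: string) {
    rejectUnsafePattern(pattern);
    super(pattern, flags);
  }
}
```

    Note that rejecting these two features doesn't by itself bound the running time of the native backtracking engine, so this is only equivalent if the remaining feature set is handled in linear time.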

    • btown 4 hours ago

      As someone who's been saved by look-aheads in many a situation, I'm quite partial to the approach detailed in [0]: use a regex library that checks for a timeout in its main matching loop.

      This lets you have full backwards compatibility in languages like Python and JS/TS that support backreferences/lookarounds, without running any risk of DOS (including by your own handrolled regexes!)

      And on modern processors, a suitably implemented check for a timeout would largely be branch-predicted to be a no-op, and would in theory result in no measurable change in performance. Unfortunately, the most optimized and battle-tested implementations seem to have either taken the linear-time NFA approaches, or have technical debt making timeout checks impractical (see comment in [0] on the Python core team's resistance to this) - so we're in a situation where we don't have the best of both worlds. Efforts like [1] are promising, especially if augmented with timeout logic, but early-stage.

      [0] https://stackoverflow.com/a/74992735

      [1] https://github.com/fancy-regex/fancy-regex
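      To illustrate the timeout-in-the-main-loop idea, here is a toy backtracking matcher for a micro regex subset (literal characters, `.`, `*`) with a deadline check; this is a sketch of the technique, not the implementation from [0]:

```typescript
// Toy backtracking matcher illustrating a deadline check in the
// main matching loop. A sketch, not a real engine.
class RegexTimeoutError extends Error {}

function matchWithDeadline(
  pattern: string, input: string, timeoutMs: number
): boolean {
  const deadline = Date.now() + timeoutMs;
  let steps = 0;

  function matchHere(p: number, s: number): boolean {
    // Consulting the clock only every 1024 steps keeps the overhead
    // tiny; a well-predicted branch like this is close to free.
    if ((++steps & 1023) === 0 && Date.now() > deadline) {
      throw new RegexTimeoutError("regex evaluation timed out");
    }
    if (p === pattern.length) return s === input.length;
    if (pattern[p + 1] === "*") {
      // Zero or more repetitions of pattern[p]: a backtracking point.
      for (let i = s; ; i++) {
        if (matchHere(p + 2, i)) return true;
        if (i >= input.length) return false;
        if (pattern[p] !== "." && input[i] !== pattern[p]) return false;
      }
    }
    if (s < input.length && (pattern[p] === "." || pattern[p] === input[s])) {
      return matchHere(p + 1, s + 1);
    }
    return false;
  }

  return matchHere(0, 0);
}
```

      In a real engine the check would sit in the VM's dispatch loop, but the shape is the same: count steps cheaply, consult the clock rarely.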

    • roggenbuck 4 hours ago

      Thanks for the feedback! Yea, you're totally right. I'll update the docs to reflect this.

      > why not just wrap vanilla JS regex, rejecting patterns including them?

      Yea! I was thinking about this too actually. And this would solve the problem of being server side only. I'm thinking about making a new version to do just this.

      For a pattern-rejecting wrapper, how would you want it to communicate that an unsafe pattern has been created?

      • DemocracyFTW2 2 hours ago

        > how would you want it to communicate that an unsafe pattern has been created

        Given this is running on a JS engine, an error should be thrown, much as an error is thrown on syntactically invalid regexes in the source. Sadly, this can't happen at module load / compile time unless a build step is implemented, complicating the matter; but on the other hand, a regex that is never used can also not be a problem. The build step could be stupidly simple, such as relying on an otherwise disallowed construction like `safe/[match]*me/`.

    • bawolff 4 hours ago

      > Another thought: since backreferences and lookaround are the features in JS regexes which _cause_ ReDOS,

      This is incorrect. Other features can cause ReDOS.

      The other problematic features have linear-time algorithms that could be used, but generally are not used (I assume for better average-case performance).

      • thomasmg 3 hours ago

        Right. An example regex that can be slow is CSV parsing [1]:

        .*,.*,.*,.*,.* etc.

        I believe a timeout is a better (simpler) solution than trying to prevent 'bad' patterns. I use this approach in my own (tiny, ~400 lines) regex library [2]: it allows at most ~100 operations per input byte, so there is no need to measure wall-clock time, which can be inaccurate.

        [1]: https://stackoverflow.com/questions/2667015/is-regex-too-slo... [2]: https://github.com/thomasmueller/bau-lang/blob/main/src/test...

      • roggenbuck 4 hours ago

        Yea, I can expand the description to include other features that may cause issues. Here is an example of how counting can cause latency too: https://www.usenix.org/system/files/sec22fall_turonova.pdf

        • thomasmg 2 hours ago

          A static analysis of the regular expression has the advantage that many problematic cases can be caught at compile time. Not all: the expression is sometimes generated at runtime. There's also a risk that too many cases might be rejected.

          Did you consider a hybrid approach, where static analysis is used to get compiler warnings / errors, combined with limiting the number of operations at runtime? An API change might be needed, so instead of just "matches(regex)" a new method might be needed with a limit "matches(regex, opCountLimit)" and a different return type (true / false / timeout).
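          One possible shape for that hybrid API, sketched with hypothetical names (a discriminated union lets the caller distinguish "no match" from "gave up"; the engine is abstracted as a resumable step function so the op-count limit lives in the caller):

```typescript
// Sketch of a tri-state matching API with an operation budget.
// All names here are hypothetical.
type MatchResult =
  | { kind: "match" }
  | { kind: "no-match" }
  | { kind: "op-limit" };

// A matching engine exposed as a resumable computation: each call
// to step() performs one unit of work and reports whether it's done.
interface SteppableMatcher {
  step(): { done: boolean; matched?: boolean };
}

function matches(engine: SteppableMatcher, opCountLimit: number): MatchResult {
  for (let ops = 0; ops < opCountLimit; ops++) {
    const r = engine.step();
    if (r.done) return r.matched ? { kind: "match" } : { kind: "no-match" };
  }
  return { kind: "op-limit" };
}
```

          Static analysis could then be layered on top: warn at compile time where possible, and fall back to the runtime budget for patterns built at runtime.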

    • bbor 3 hours ago

      Totally agree -- those are two incredibly useful features of regex[1][2] that are often effectively irreplaceable. I could see this being a straightforward tradeoff for applications that know for sure they don't need complex regexes but still must accept patterns written by the client for some reason(?), but otherwise this seems like a hell of a way to go to replace a `timeout` wrapper.

      This paragraph in particular seems very wholesome, but misguided in light of the tradeoff:

        Having a library or project that is immune to these vulnerabilities would save this effort for each project that adopted it, and would save the whole package ecosystem that effort if widely adopted.
      
      Honestly, the biggest shock here for me is that Rust doesn't support these. Sure, Python has yet to integrate the actually-functional `regex`[3] into stdlib to replace the dreadfully under-specced `re`, but Rust is the new kid on the block! I guess people just aren't writing complex regexes anymore...[4]

      RE:simpler wrapper, I personally don't see any reason it wouldn't work, and dropping a whole language seems like a big win if it does. I happened to have some scaffolding on hand for the cursed, dark art of metaregexes, so AFAICT, this pattern would work for a blanket ban: https://regexr.com/8gplg Ironically, I don't think there's a way to A) prevent false-positives on triple-backslashes without using lookarounds, or B) highlight the offending groups in full without backrefs!

      [1] https://www.regular-expressions.info/backref.html

      [2] https://www.regular-expressions.info/lookaround.html

      [3] https://github.com/mrabarnett/mrab-regex

      [4] We need a regex renaissance IMO, though the feasibility of "just throw a small fine-tuned LLM at it" may delay/obviate that for users that can afford the compute... It's one of the OG AI concepts, back before intuition seemed possible!

  • spankalee 5 hours ago

    It's very, very weird to speak of TypeScript and JavaScript as two separate languages here.

    There is no TypeScript RegExp, there is only the JavaScript RegExp as implemented in various VMs. There is no TypeScript VM, only JavaScript VMs. And there are no TypeScript CVEs unless it's against the TypeScript compiler, language server, etc.

    • serial_dev 4 hours ago

      I was also confused at first; I thought it was against the TypeScript compiler, too.

    • maxloh 3 hours ago

      Deno and Node use V8 under the hood, so the code should essentially run on the same VM regardless.

  • xyzzy123 5 hours ago

    It's great to have a safe option - and it would have been great if the default had been safe.

    I think many people are annoyed with ReDos as a bug class. It seems like mostly noise in the CVE trackers, library churn and badge collecting for "researchers". It'd be less of a problem if people stuck to filing CVEs against libraries that might remotely see untrusted input rather than scrambling to collect pointless "scalps" from every tool under the sun that accepts a configuration regex - build tools, very commonly :(

    Perhaps you can stop this madness... :)

    • bawolff 4 hours ago

      Even in cases where malicious input could be hit, this bug class is stupid on the client side where the attacker can only attack themselves.

      • xyzzy123 4 hours ago

        Stored... ReDoS, reflected... ReDoS(??)... [it pained me to type those] (╯°□°)╯︵ ┻━┻

    • roggenbuck 4 hours ago

      > and it would have been great if the default had been safe.

      I totally agree here. Safety can and should be from the language itself.

  • dwoldrich 3 hours ago

    Perhaps regex is just a bad little language for pattern matching.

    I have a foggy recollection of compute times exploding on a large regex in .NET code. There I used a feature I hadn't seen in JavaScript's RegExp that let me mark off already-matched sections of the regular expression so the engine couldn't backtrack into them.

    Perhaps the answer isn't removing features for linear regex, but adding more features to make it more expressive and tunable?
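    That .NET feature sounds like atomic groups (`(?>...)`). JavaScript's RegExp has no atomic groups, but there is a well-known emulation using a capturing lookahead plus a backreference - ironically, exactly the two features a linear-time engine gives up. A sketch:

```typescript
// Plain nested quantifier: catastrophic backtracking on non-matching input.
const risky = /^(a+)+b$/;

// Atomic-group emulation: (?=(X))\1 matches X greedily inside the
// lookahead, then the backreference re-consumes exactly what was
// captured, so the engine cannot backtrack into X. A failing "b"
// then fails fast instead of retrying every partition of the "a" run.
const atomic = /^(?=((?:a+)+))\1b$/;

const input = "a".repeat(28) + "c"; // no trailing "b": forces backtracking

// atomic.test(input) returns quickly; risky.test(input) on a long
// enough run of "a"s can take exponential time (don't raise 28 much!).
console.log(atomic.test(input)); // false, and fast
```

    This works because ECMAScript lookaheads are atomic: once `(?=(X))` succeeds, the engine can't backtrack into it, and `\1` then consumes exactly what was captured.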

  • DemocracyFTW2 2 hours ago

    FWIW there's also https://github.com/slevithan/regex "JS regexes future. A template tag for readable, high-performance, native JS regexes with extended syntax, context-aware interpolation, and always-on best practices". From the docs:

    Highlights include support for insignificant whitespace and comments, atomic groups and possessive quantifiers (that can help you avoid ReDoS), subroutines and subroutine definition groups (that enable powerful subpattern composition), and context-aware interpolation of regexes, escaped strings, and partial patterns.