Coverage Report

Created: 2026-01-10 06:20

/rust/registry/src/index.crates.io-1949cf8c6b5b557f/regex-syntax-0.8.8/src/lib.rs
 Line|  Count|Source
    1|       |/*!
    2|       |This crate provides a robust regular expression parser.
    3|       |
    4|       |This crate defines two primary types:
    5|       |
    6|       |* [`Ast`](ast::Ast) is the abstract syntax of a regular expression.
    7|       |  An abstract syntax corresponds to a *structured representation* of the
    8|       |  concrete syntax of a regular expression, where the concrete syntax is the
    9|       |  pattern string itself (e.g., `foo(bar)+`). Given some abstract syntax, it
   10|       |  can be converted back to the original concrete syntax (modulo some details,
   11|       |  like whitespace). To a first approximation, the abstract syntax is complex
   12|       |  and difficult to analyze.
   13|       |* [`Hir`](hir::Hir) is the high-level intermediate representation
   14|       |  ("HIR" or "high-level IR" for short) of a regular expression. It corresponds to
   15|       |  an intermediate state of a regular expression that sits between the abstract
   16|       |  syntax and the low level compiled opcodes that are eventually responsible for
   17|       |  executing a regular expression search. Given some high-level IR, it is not
   18|       |  possible to produce the original concrete syntax (although it is possible to
   19|       |  produce an equivalent concrete syntax, it will likely scarcely resemble
   20|       |  the original pattern). To a first approximation, the high-level IR is simple
   21|       |  and easy to analyze.
   22|       |
   23|       |These two types come with conversion routines:
   24|       |
   25|       |* An [`ast::parse::Parser`] converts concrete syntax (a `&str`) to an
   26|       |  [`Ast`](ast::Ast).
   27|       |* A [`hir::translate::Translator`] converts an [`Ast`](ast::Ast) to a
   28|       |  [`Hir`](hir::Hir).
   29|       |
   30|       |As a convenience, the above two conversion routines are combined into one via
   31|       |the top-level [`Parser`] type. This `Parser` will first convert your pattern to
   32|       |an `Ast` and then convert the `Ast` to an `Hir`. It's also exposed as the
   33|       |top-level [`parse`] free function.
   34|       |
   35|       |
   36|       |# Example
   37|       |
   38|       |This example shows how to parse a pattern string into its HIR:
   39|       |
   40|       |```
   41|       |use regex_syntax::{hir::Hir, parse};
   42|       |
   43|       |let hir = parse("a|b")?;
   44|       |assert_eq!(hir, Hir::alternation(vec![
   45|       |    Hir::literal("a".as_bytes()),
   46|       |    Hir::literal("b".as_bytes()),
   47|       |]));
   48|       |# Ok::<(), Box<dyn std::error::Error>>(())
   49|       |```
   50|       |
   51|       |
   52|       |# Concrete syntax supported
   53|       |
   54|       |The concrete syntax is documented as part of the public API of the
   55|       |[`regex` crate](https://docs.rs/regex/%2A/regex/#syntax).
   56|       |
   57|       |
   58|       |# Input safety
   59|       |
   60|       |A key feature of this library is that it is safe to use with end user facing
   61|       |input. This plays a significant role in the internal implementation. In
   62|       |particular:
   63|       |
   64|       |1. Parsers provide a `nest_limit` option that permits callers to control how
   65|       |   deeply nested a regular expression is allowed to be. This makes it possible
   66|       |   to do case analysis over an `Ast` or an `Hir` using recursion without
   67|       |   worrying about stack overflow.
   68|       |2. Since relying on a particular stack size is brittle, this crate goes to
   69|       |   great lengths to ensure that all interactions with both the `Ast` and the
   70|       |   `Hir` do not use recursion. Namely, they use constant stack space and heap
   71|       |   space proportional to the size of the original pattern string (in bytes).
   72|       |   This includes the type's corresponding destructors. (One exception to this
   73|       |   is literal extraction, but this will eventually get fixed.)
   74|       |
   75|       |
   76|       |# Error reporting
   77|       |
   78|       |The `Display` implementations on all `Error` types exposed in this library
   79|       |provide nice human readable errors that are suitable for showing to end users
   80|       |in a monospace font.
   81|       |
   82|       |
   83|       |# Literal extraction
   84|       |
   85|       |This crate provides limited support for [literal extraction from `Hir`
   86|       |values](hir::literal). Be warned that literal extraction uses recursion, and
   87|       |therefore, stack size proportional to the size of the `Hir`.
   88|       |
   89|       |The purpose of literal extraction is to speed up searches. That is, if you
   90|       |know a regular expression must match a prefix or suffix literal, then it is
   91|       |often quicker to search for instances of that literal, and then confirm or deny
   92|       |the match using the full regular expression engine. These optimizations are
   93|       |done automatically in the `regex` crate.
   94|       |
   95|       |
   96|       |# Crate features
   97|       |
   98|       |An important feature provided by this crate is its Unicode support. This
   99|       |includes things like case folding, boolean properties, general categories,
  100|       |scripts and Unicode-aware support for the Perl classes `\w`, `\s` and `\d`.
  101|       |However, a downside of this support is that it requires bundling several
  102|       |Unicode data tables that are substantial in size.
  103|       |
  104|       |A fair number of use cases do not require full Unicode support. For this
  105|       |reason, this crate exposes a number of features to control which Unicode
  106|       |data is available.
  107|       |
  108|       |If a regular expression attempts to use a Unicode feature that is not available
  109|       |because the corresponding crate feature was disabled, then translating that
  110|       |regular expression to an `Hir` will return an error. (It is still possible to
  111|       |construct an `Ast` for such a regular expression, since Unicode data is not
  112|       |used until translation to an `Hir`.) Stated differently, enabling or disabling
  113|       |any of the features below can only add or subtract from the total set of valid
  114|       |regular expressions. Enabling or disabling a feature will never modify the
  115|       |match semantics of a regular expression.
  116|       |
  117|       |The following features are available:
  118|       |
  119|       |* **std** -
  120|       |  Enables support for the standard library. This feature is enabled by default.
  121|       |  When disabled, only `core` and `alloc` are used. Otherwise, enabling `std`
  122|       |  generally just enables `std::error::Error` trait impls for the various error
  123|       |  types.
  124|       |* **unicode** -
  125|       |  Enables all Unicode features. This feature is enabled by default, and will
  126|       |  always cover all Unicode features, even if more are added in the future.
  127|       |* **unicode-age** -
  128|       |  Provide the data for the
  129|       |  [Unicode `Age` property](https://www.unicode.org/reports/tr44/tr44-24.html#Character_Age).
  130|       |  This makes it possible to use classes like `\p{Age:6.0}` to refer to all
  131|       |  codepoints first introduced in Unicode 6.0.
  132|       |* **unicode-bool** -
  133|       |  Provide the data for numerous Unicode boolean properties. The full list
  134|       |  is not included here, but contains properties like `Alphabetic`, `Emoji`,
  135|       |  `Lowercase`, `Math`, `Uppercase` and `White_Space`.
  136|       |* **unicode-case** -
  137|       |  Provide the data for case insensitive matching using
  138|       |  [Unicode's "simple loose matches" specification](https://www.unicode.org/reports/tr18/#Simple_Loose_Matches).
  139|       |* **unicode-gencat** -
  140|       |  Provide the data for
  141|       |  [Unicode general categories](https://www.unicode.org/reports/tr44/tr44-24.html#General_Category_Values).
  142|       |  This includes, but is not limited to, `Decimal_Number`, `Letter`,
  143|       |  `Math_Symbol`, `Number` and `Punctuation`.
  144|       |* **unicode-perl** -
  145|       |  Provide the data for supporting the Unicode-aware Perl character classes,
  146|       |  corresponding to `\w`, `\s` and `\d`. This is also necessary for using
  147|       |  Unicode-aware word boundary assertions. Note that if this feature is
  148|       |  disabled, the `\s` and `\d` character classes are still available if the
  149|       |  `unicode-bool` and `unicode-gencat` features are enabled, respectively.
  150|       |* **unicode-script** -
  151|       |  Provide the data for
  152|       |  [Unicode scripts and script extensions](https://www.unicode.org/reports/tr24/).
  153|       |  This includes, but is not limited to, `Arabic`, `Cyrillic`, `Hebrew`,
  154|       |  `Latin` and `Thai`.
  155|       |* **unicode-segment** -
  156|       |  Provide the data for the properties used to implement the
  157|       |  [Unicode text segmentation algorithms](https://www.unicode.org/reports/tr29/).
  158|       |  This enables using classes like `\p{gcb=Extend}`, `\p{wb=Katakana}` and
  159|       |  `\p{sb=ATerm}`.
  160|       |* **arbitrary** -
  161|       |  Enabling this feature introduces a public dependency on the
  162|       |  [`arbitrary`](https://crates.io/crates/arbitrary)
  163|       |  crate. Namely, it implements the `Arbitrary` trait from that crate for the
  164|       |  [`Ast`](crate::ast::Ast) type. This feature is disabled by default.
  165|       |*/
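The "Input safety" notes above claim that all interactions with the `Ast` and `Hir`, including destructors, avoid recursion. A standalone sketch of that technique, using a hypothetical toy `Expr` type rather than the crate's real `Ast`, shows how an explicit heap-allocated stack keeps call-stack usage constant no matter how deeply the input is nested:

```rust
// Hypothetical toy AST (not regex-syntax's real types): traverse and tear
// down a nested expression iteratively, so call-stack usage stays constant
// regardless of nesting depth.
enum Expr {
    Literal(char),
    Concat(Vec<Expr>),
}

// Count literal leaves without recursion, using an explicit work list.
fn count_literals(root: &Expr) -> usize {
    let mut count = 0;
    let mut stack = vec![root];
    while let Some(expr) = stack.pop() {
        match expr {
            Expr::Literal(_) => count += 1,
            Expr::Concat(children) => stack.extend(children.iter()),
        }
    }
    count
}

// Iterative teardown, mirroring the destructor point above: dismantle the
// tree with an explicit stack instead of relying on recursive drop glue.
fn drop_iteratively(root: Expr) {
    let mut stack = vec![root];
    while let Some(expr) = stack.pop() {
        if let Expr::Concat(children) = expr {
            stack.extend(children);
        }
    }
}

fn main() {
    // Build a very deeply nested expression; recursing this deep (in a
    // visitor or in a derived Drop impl) could overflow the call stack.
    let mut expr = Expr::Literal('a');
    for _ in 0..1_000_000 {
        expr = Expr::Concat(vec![expr]);
    }
    let n = count_literals(&expr);
    assert_eq!(n, 1);
    println!("literals: {}", n);
    drop_iteratively(expr);
}
```

The work list lives on the heap and grows with the pattern, which matches the listing's claim of constant stack space and heap space proportional to the input.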
  166|       |
  167|       |#![no_std]
  168|       |#![forbid(unsafe_code)]
  169|       |#![deny(missing_docs, rustdoc::broken_intra_doc_links)]
  170|       |#![warn(missing_debug_implementations)]
  171|       |// This adds Cargo feature annotations to items in the rustdoc output, which
  172|       |// is sadly hugely beneficial for this crate due to the number of features.
  173|       |#![cfg_attr(docsrs_regex, feature(doc_cfg))]
  174|       |
  175|       |#[cfg(any(test, feature = "std"))]
  176|       |extern crate std;
  177|       |
  178|       |extern crate alloc;
  179|       |
  180|       |pub use crate::{
  181|       |    error::Error,
  182|       |    parser::{parse, Parser, ParserBuilder},
  183|       |    unicode::UnicodeWordError,
  184|       |};
  185|       |
  186|       |use alloc::string::String;
  187|       |
  188|       |pub mod ast;
  189|       |mod debug;
  190|       |mod either;
  191|       |mod error;
  192|       |pub mod hir;
  193|       |mod parser;
  194|       |mod rank;
  195|       |mod unicode;
  196|       |mod unicode_tables;
  197|       |pub mod utf8;
  198|       |
  199|       |/// Escapes all regular expression meta characters in `text`.
  200|       |///
  201|       |/// The string returned may be safely used as a literal in a regular
  202|       |/// expression.
  203|      0|pub fn escape(text: &str) -> String {
  204|      0|    let mut quoted = String::new();
  205|      0|    escape_into(text, &mut quoted);
  206|      0|    quoted
  207|      0|}
  208|       |
  209|       |/// Escapes all meta characters in `text` and writes the result into `buf`.
  210|       |///
  211|       |/// This will append escape characters into the given buffer. The characters
  212|       |/// that are appended are safe to use as a literal in a regular expression.
  213|      0|pub fn escape_into(text: &str, buf: &mut String) {
  214|      0|    buf.reserve(text.len());
  215|      0|    for c in text.chars() {
  216|      0|        if is_meta_character(c) {
  217|      0|            buf.push('\\');
  218|      0|        }
  219|      0|        buf.push(c);
  220|       |    }
  221|      0|}
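The coverage above shows `escape` and `escape_into` with zero executions in this run. Their strategy is simply to prefix each meta character with a backslash; a self-contained sketch (with the meta set hand-copied from `is_meta_character` rather than calling the crate) illustrates it:

```rust
// Standalone sketch of the escaping strategy used by escape/escape_into:
// prefix every meta character with a backslash. The meta set here is
// copied by hand from the is_meta_character listing above.
fn escape_sketch(text: &str) -> String {
    let is_meta = |c: char| r"\.+*?()|[]{}^$#&-~".contains(c);
    let mut out = String::with_capacity(text.len());
    for c in text.chars() {
        if is_meta(c) {
            out.push('\\');
        }
        out.push(c);
    }
    out
}

fn main() {
    // Meta characters get a backslash prefix; everything else passes through.
    assert_eq!(escape_sketch("a.b|c"), r"a\.b\|c");
    assert_eq!(escape_sketch("plain"), "plain");
    println!("{}", escape_sketch("1+1=2"));
}
```

Note the `reserve`/`with_capacity` up front: the escaped output is at least as long as the input, so pre-sizing the buffer avoids repeated reallocation.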
  222|       |
  223|       |/// Returns true if the given character has significance in a regex.
  224|       |///
  225|       |/// Generally speaking, these are the only characters which _must_ be escaped
  226|       |/// in order to match their literal meaning. For example, to match a literal
  227|       |/// `|`, one could write `\|`. That said, escaping isn't always necessary. For
  228|       |/// example, `-` is treated as a meta character because of its significance
  229|       |/// for writing ranges inside of character classes, but the regex `-` will
  230|       |/// match a literal `-` because `-` has no special meaning outside of character
  231|       |/// classes.
  232|       |///
  233|       |/// In order to determine whether a character may be escaped at all, the
  234|       |/// [`is_escapeable_character`] routine should be used. The difference between
  235|       |/// `is_meta_character` and `is_escapeable_character` is that the latter will
  236|       |/// return true for some characters that are _not_ meta characters. For
  237|       |/// example, `%` and `\%` both match a literal `%` in all contexts. In other
  238|       |/// words, `is_escapeable_character` includes "superfluous" escapes.
  239|       |///
  240|       |/// Note that the set of characters for which this function returns `true` or
  241|       |/// `false` is fixed and won't change in a semver compatible release. (In this
  242|       |/// case, "semver compatible release" actually refers to the `regex` crate
  243|       |/// itself, since reducing or expanding the set of meta characters would be a
  244|       |/// breaking change for not just `regex-syntax` but also `regex` itself.)
  245|       |///
  246|       |/// # Example
  247|       |///
  248|       |/// ```
  249|       |/// use regex_syntax::is_meta_character;
  250|       |///
  251|       |/// assert!(is_meta_character('?'));
  252|       |/// assert!(is_meta_character('-'));
  253|       |/// assert!(is_meta_character('&'));
  254|       |/// assert!(is_meta_character('#'));
  255|       |///
  256|       |/// assert!(!is_meta_character('%'));
  257|       |/// assert!(!is_meta_character('/'));
  258|       |/// assert!(!is_meta_character('!'));
  259|       |/// assert!(!is_meta_character('"'));
  260|       |/// assert!(!is_meta_character('e'));
  261|       |/// ```
  262|     46|pub fn is_meta_character(c: char) -> bool {
  263|     46|    match c {
  264|       |        '\\' | '.' | '+' | '*' | '?' | '(' | ')' | '|' | '[' | ']' | '{'
  265|     20|        | '}' | '^' | '$' | '#' | '&' | '-' | '~' => true,
  266|     26|        _ => false,
  267|       |    }
  268|     46|}
  269|       |
  270|       |/// Returns true if the given character can be escaped in a regex.
  271|       |///
  272|       |/// This returns true in all cases that `is_meta_character` returns true, but
  273|       |/// also returns true in some cases where `is_meta_character` returns false.
  274|       |/// For example, `%` is not a meta character, but it is escapable. That is,
  275|       |/// `%` and `\%` both match a literal `%` in all contexts.
  276|       |///
  277|       |/// The purpose of this routine is to provide knowledge about what characters
  278|       |/// may be escaped. Namely, most regex engines permit "superfluous" escapes
  279|       |/// where characters without any special significance may be escaped even
  280|       |/// though there is no actual _need_ to do so.
  281|       |///
  282|       |/// This will return false for some characters. For example, `e` is not
  283|       |/// escapable. Therefore, `\e` will either result in a parse error (which is
  284|       |/// true today), or it could backwards compatibly evolve into a new construct
  285|       |/// with its own meaning. Indeed, that is the purpose of banning _some_
  286|       |/// superfluous escapes: it provides a way to evolve the syntax in a compatible
  287|       |/// manner.
  288|       |///
  289|       |/// # Example
  290|       |///
  291|       |/// ```
  292|       |/// use regex_syntax::is_escapeable_character;
  293|       |///
  294|       |/// assert!(is_escapeable_character('?'));
  295|       |/// assert!(is_escapeable_character('-'));
  296|       |/// assert!(is_escapeable_character('&'));
  297|       |/// assert!(is_escapeable_character('#'));
  298|       |/// assert!(is_escapeable_character('%'));
  299|       |/// assert!(is_escapeable_character('/'));
  300|       |/// assert!(is_escapeable_character('!'));
  301|       |/// assert!(is_escapeable_character('"'));
  302|       |///
  303|       |/// assert!(!is_escapeable_character('e'));
  304|       |/// ```
  305|     13|pub fn is_escapeable_character(c: char) -> bool {
  306|       |    // Certainly escapable if it's a meta character.
  307|     13|    if is_meta_character(c) {
  308|      0|        return true;
  309|     13|    }
  310|       |    // Any character that isn't ASCII is definitely not escapable. There's
  311|       |    // no real need to allow things like \☃ right?
  312|     13|    if !c.is_ascii() {
  313|      0|        return false;
  314|     13|    }
  315|       |    // Otherwise, we basically say that everything is escapable unless it's a
  316|       |    // letter or digit. Things like \3 are either octal (when enabled) or an
  317|       |    // error, and we should keep it that way. Otherwise, letters are reserved
  318|       |    // for adding new syntax in a backwards compatible way.
  319|     13|    match c {
  320|     13|        '0'..='9' | 'A'..='Z' | 'a'..='z' => false,
  321|       |        // These were originally kept as not escapable to give us some
  322|       |        // flexibility with respect to supporting the \< and \> word
  323|       |        // boundary assertions in the future. By rejecting them as
  324|       |        // escapable, \< and \> were parse errors, and so could be turned
  325|       |        // into something else later without it being a backwards
  326|       |        // incompatible change.
  327|       |        //
  328|       |        // Now that we do support \< and \>, we need to retain them as
  329|       |        // *not* escapable here since the escape sequence is significant.
  330|      0|        '<' | '>' => false,
  331|      0|        _ => true,
  332|       |    }
  333|     13|}
  334|       |
  335|       |/// Returns true if and only if the given character is a Unicode word
  336|       |/// character.
  337|       |///
  338|       |/// A Unicode word character is defined by
  339|       |/// [UTS#18 Annex C](https://unicode.org/reports/tr18/#Compatibility_Properties).
  340|       |/// In particular, a character
  341|       |/// is considered a word character if it is in either of the `Alphabetic` or
  342|       |/// `Join_Control` properties, or is in one of the `Decimal_Number`, `Mark`
  343|       |/// or `Connector_Punctuation` general categories.
  344|       |///
  345|       |/// # Panics
  346|       |///
  347|       |/// If the `unicode-perl` feature is not enabled, then this function
  348|       |/// panics. For this reason, it is recommended that callers use
  349|       |/// [`try_is_word_character`] instead.
  350|      0|pub fn is_word_character(c: char) -> bool {
  351|      0|    try_is_word_character(c).expect("unicode-perl feature must be enabled")
  352|      0|}
  353|       |
  354|       |/// Returns true if and only if the given character is a Unicode word
  355|       |/// character.
  356|       |///
  357|       |/// A Unicode word character is defined by
  358|       |/// [UTS#18 Annex C](https://unicode.org/reports/tr18/#Compatibility_Properties).
  359|       |/// In particular, a character
  360|       |/// is considered a word character if it is in either of the `Alphabetic` or
  361|       |/// `Join_Control` properties, or is in one of the `Decimal_Number`, `Mark`
  362|       |/// or `Connector_Punctuation` general categories.
  363|       |///
  364|       |/// # Errors
  365|       |///
  366|       |/// If the `unicode-perl` feature is not enabled, then this function always
  367|       |/// returns an error.
  368|  18.3M|pub fn try_is_word_character(
  369|  18.3M|    c: char,
  370|  18.3M|) -> core::result::Result<bool, UnicodeWordError> {
  371|  18.3M|    unicode::is_word_character(c)
  372|  18.3M|}
  373|       |
  374|       |/// Returns true if and only if the given character is an ASCII word character.
  375|       |///
  376|       |/// An ASCII word character is defined by the following character class:
  377|       |/// `[_0-9a-zA-Z]`.
  378|  18.3M|pub fn is_word_byte(c: u8) -> bool {
  379|  18.3M|    match c {
  380|  17.5M|        b'_' | b'0'..=b'9' | b'a'..=b'z' | b'A'..=b'Z' => true,
  381|   770k|        _ => false,
  382|       |    }
  383|  18.3M|}
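The heavily exercised `is_word_byte` (18.3M executions) implements exactly the ASCII class `[_0-9a-zA-Z]`. A standalone sketch shows the match form used above alongside an equivalent phrasing in terms of `u8`'s standard helpers:

```rust
// The ASCII word-byte test, phrased two equivalent ways.
// Range-pattern form, matching the listing above.
fn is_word_byte_match(b: u8) -> bool {
    matches!(b, b'_' | b'0'..=b'9' | b'a'..=b'z' | b'A'..=b'Z')
}

// Equivalent form using the standard library's ASCII helper.
fn is_word_byte_helpers(b: u8) -> bool {
    b == b'_' || b.is_ascii_alphanumeric()
}

fn main() {
    // The two formulations agree on every possible byte value.
    for b in 0u8..=255 {
        assert_eq!(is_word_byte_match(b), is_word_byte_helpers(b));
    }
    // 26 lowercase + 26 uppercase + 10 digits + '_' = 63 word bytes.
    let n = (0u8..=255).filter(|&b| is_word_byte_match(b)).count();
    println!("word bytes: {}", n);
}
```

Either form compiles to a cheap byte classification, which matters given how hot this path is in the counts above.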
  384|       |
  385|       |#[cfg(test)]
  386|       |mod tests {
  387|       |    use alloc::string::ToString;
  388|       |
  389|       |    use super::*;
  390|       |
  391|       |    #[test]
  392|       |    fn escape_meta() {
  393|       |        assert_eq!(
  394|       |            escape(r"\.+*?()|[]{}^$#&-~"),
  395|       |            r"\\\.\+\*\?\(\)\|\[\]\{\}\^\$\#\&\-\~".to_string()
  396|       |        );
  397|       |    }
  398|       |
  399|       |    #[test]
  400|       |    fn word_byte() {
  401|       |        assert!(is_word_byte(b'a'));
  402|       |        assert!(!is_word_byte(b'-'));
  403|       |    }
  404|       |
  405|       |    #[test]
  406|       |    #[cfg(feature = "unicode-perl")]
  407|       |    fn word_char() {
  408|       |        assert!(is_word_character('a'), "ASCII");
  409|       |        assert!(is_word_character('à'), "Latin-1");
  410|       |        assert!(is_word_character('β'), "Greek");
  411|       |        assert!(is_word_character('\u{11011}'), "Brahmi (Unicode 6.0)");
  412|       |        assert!(is_word_character('\u{11611}'), "Modi (Unicode 7.0)");
  413|       |        assert!(is_word_character('\u{11711}'), "Ahom (Unicode 8.0)");
  414|       |        assert!(is_word_character('\u{17828}'), "Tangut (Unicode 9.0)");
  415|       |        assert!(is_word_character('\u{1B1B1}'), "Nushu (Unicode 10.0)");
  416|       |        assert!(is_word_character('\u{16E40}'), "Medefaidrin (Unicode 11.0)");
  417|       |        assert!(!is_word_character('-'));
  418|       |        assert!(!is_word_character('☃'));
  419|       |    }
  420|       |
  421|       |    #[test]
  422|       |    #[should_panic]
  423|       |    #[cfg(not(feature = "unicode-perl"))]
  424|       |    fn word_char_disabled_panic() {
  425|       |        assert!(is_word_character('a'));
  426|       |    }
  427|       |
  428|       |    #[test]
  429|       |    #[cfg(not(feature = "unicode-perl"))]
  430|       |    fn word_char_disabled_error() {
  431|       |        assert!(try_is_word_character('a').is_err());
  432|       |    }
  433|       |}
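One design pattern worth noting from the listing: the panicking `is_word_character` (0 executions) is a thin wrapper over the fallible `try_is_word_character` (18.3M executions), so callers choose between a `Result` and a panic. A standalone sketch of that wrapper pattern, with a hypothetical `FeatureDisabledError` standing in for the crate's `UnicodeWordError` and an explicit flag standing in for the compile-time feature:

```rust
// Sketch of the try_/panicking wrapper pattern: the fallible function
// returns a Result, and the convenience wrapper turns the error into a
// panic via `expect`. FeatureDisabledError is a hypothetical stand-in.
#[derive(Debug)]
struct FeatureDisabledError;

fn try_is_word_char(c: char, feature_enabled: bool) -> Result<bool, FeatureDisabledError> {
    if !feature_enabled {
        return Err(FeatureDisabledError);
    }
    // Toy definition; the real crate consults Unicode data tables.
    Ok(c.is_alphanumeric() || c == '_')
}

fn is_word_char(c: char) -> bool {
    try_is_word_char(c, true).expect("feature must be enabled")
}

fn main() {
    assert!(is_word_char('a'));
    assert!(!is_word_char('-'));
    // With the "feature" off, the fallible form surfaces an error
    // instead of panicking.
    assert!(try_is_word_char('a', false).is_err());
    println!("ok");
}
```

This is why the crate's docs steer callers toward `try_is_word_character`: the error path is recoverable there, while the wrapper trades that for convenience.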