Coverage Report

Created: 2025-11-11 06:12

/rust/registry/src/index.crates.io-1949cf8c6b5b557f/bzip2-0.4.4/src/lib.rs
Line | Count | Source
   1 |       | //! Bzip2 compression for Rust
   2 |       | //!
   3 |       | //! This library contains bindings to libbz2 to support bzip2 compression and
   4 |       | //! decompression for Rust. The streams offered in this library are primarily
   5 |       | //! found in the `read` and `write` modules. Both compressors and
   6 |       | //! decompressors are available in each module depending on what operation you
   7 |       | //! need.
   8 |       | //!
   9 |       | //! Access to the raw decompression/compression stream is also provided through
  10 |       | //! the re-exported `Compress` and `Decompress` types, which interface closely with libbz2.
  11 |       | //!
  12 |       | //! # Example
  13 |       | //!
  14 |       | //! ```
  15 |       | //! use std::io::prelude::*;
  16 |       | //! use bzip2::Compression;
  17 |       | //! use bzip2::read::{BzEncoder, BzDecoder};
  18 |       | //!
  19 |       | //! // Round trip some bytes from a byte source, into a compressor, into a
  20 |       | //! // decompressor, and finally into a vector.
  21 |       | //! let data = "Hello, World!".as_bytes();
  22 |       | //! let compressor = BzEncoder::new(data, Compression::best());
  23 |       | //! let mut decompressor = BzDecoder::new(compressor);
  24 |       | //!
  25 |       | //! let mut contents = String::new();
  26 |       | //! decompressor.read_to_string(&mut contents).unwrap();
  27 |       | //! assert_eq!(contents, "Hello, World!");
  28 |       | //! ```
  29 |       | //!
  30 |       | //! # Multistreams (e.g. Wikipedia or pbzip2)
  31 |       | //!
  32 |       | //! Some tools, such as pbzip2, and data from sources such as Wikipedia
  33 |       | //! are encoded as so-called bzip2 "multistreams," meaning they
  34 |       | //! contain back-to-back chunks of bzip2 data. `BzDecoder` does not
  35 |       | //! attempt to convert anything after the first bzip2 chunk in the
  36 |       | //! source stream. Thus, if you wish to decode all bzip2 chunks from
  37 |       | //! the input until end of file, use `MultiBzDecoder`.
  38 |       | //!
  39 |       | //! *Protip*: If you use `BzDecoder` to decode data and the output is
  40 |       | //! incomplete and exactly 900K bytes, you probably need a
  41 |       | //! `MultiBzDecoder`.
  42 |       | //!
  43 |       | //! # Async I/O
  44 |       | //!
  45 |       | //! This crate can optionally support async I/O streams with the Tokio stack via
  46 |       | //! its `tokio` feature:
  47 |       | //!
  48 |       | //! ```toml
  49 |       | //! bzip2 = { version = "0.4", features = ["tokio"] }
  50 |       | //! ```
  51 |       | //!
  52 |       | //! All methods are internally capable of working with streams that may return
  53 |       | //! `ErrorKind::WouldBlock` when they're not ready to perform the particular
  54 |       | //! operation.
  55 |       | //!
  56 |       | //! Note that care needs to be taken when using these objects, however. The
  57 |       | //! Tokio runtime, in particular, requires that data is fully flushed before
  58 |       | //! dropping streams. For compatibility with blocking streams, all streams are
  59 |       | //! flushed/written when they are dropped, and this is not always a suitable
  60 |       | //! time to perform I/O. If I/O streams are flushed before drop, however, then
  61 |       | //! these operations will be a no-op.
  62 |       | 
  63 |       | #![deny(missing_docs)]
  64 |       | #![doc(html_root_url = "https://docs.rs/bzip2/")]
  65 |       | 
  66 |       | extern crate bzip2_sys as ffi;
  67 |       | extern crate libc;
  68 |       | #[cfg(test)]
  69 |       | extern crate partial_io;
  70 |       | #[cfg(test)]
  71 |       | extern crate quickcheck;
  72 |       | #[cfg(test)]
  73 |       | extern crate rand;
  74 |       | #[cfg(feature = "tokio")]
  75 |       | #[macro_use]
  76 |       | extern crate tokio_io;
  77 |       | #[cfg(feature = "tokio")]
  78 |       | extern crate futures;
  79 |       | 
  80 |       | pub use mem::{Action, Compress, Decompress, Error, Status};
  81 |       | 
  82 |       | mod mem;
  83 |       | 
  84 |       | pub mod bufread;
  85 |       | pub mod read;
  86 |       | pub mod write;
  87 |       | 
  88 |       | /// When compressing data, the compression level can be specified by a value
  89 |       | /// of this type.
  90 |       | #[derive(Copy, Clone, Debug)]
  91 |       | pub struct Compression(u32);
  92 |       | 
  93 |       | impl Compression {
  94 |       |     /// Create a new compression spec with a specific numeric level (0-9).
  95 | 3.59k |     pub fn new(level: u32) -> Compression {
  96 | 3.59k |         Compression(level)
  97 | 3.59k |     }
  98 |       | 
  99 |       |     /// Do not compress.
 100 | 3.59k |     pub fn none() -> Compression {
 101 | 3.59k |         Compression(0)
 102 | 3.59k |     }
 103 |       | 
 104 |       |     /// Optimize for the best speed of encoding.
 105 |     0 |     pub fn fast() -> Compression {
 106 |     0 |         Compression(1)
 107 |     0 |     }
 108 |       | 
 109 |       |     /// Optimize for the size of data being encoded.
 110 | 3.59k |     pub fn best() -> Compression {
 111 | 3.59k |         Compression(9)
 112 | 3.59k |     }
 113 |       | 
 114 |       |     /// Return the compression level as an integer.
 115 | 14.3k |     pub fn level(&self) -> u32 {
 116 | 14.3k |         self.0
 117 | 14.3k |     }
 118 |       | }
 119 |       | 
 120 |       | impl Default for Compression {
 121 |       |     /// Choose the default compression, a balance between speed and size.
 122 | 3.59k |     fn default() -> Compression {
 123 | 3.59k |         Compression(6)
 124 | 3.59k |     }
 125 |       | }
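
The multistream behavior described in the module docs (a plain decoder stops after the first bzip2 stream; a multistream-aware decoder consumes every stream until end of input) can be demonstrated with any bzip2 implementation. Below is a minimal sketch using Python's standard-library `bz2` module as a stand-in for the crate's `BzDecoder`/`MultiBzDecoder` pair; it is illustrative only, not code from the crate under test.

```python
import bz2

# Two independent bzip2 streams concatenated back to back: a "multistream".
first = bz2.compress(b"Hello, ")
second = bz2.compress(b"World!")
data = first + second

# An incremental single-stream decompressor (analogous to BzDecoder) stops
# at the end of the first stream and leaves the remaining bytes untouched.
single = bz2.BZ2Decompressor()
out = single.decompress(data)
print(out)                            # b'Hello, '
print(single.eof)                     # True: first stream fully consumed
print(single.unused_data == second)   # True: second stream was never decoded

# A multistream-aware decode (analogous to MultiBzDecoder) consumes all
# back-to-back streams until end of input.
print(bz2.decompress(data))           # b'Hello, World!'
```

This is exactly the failure mode the protip warns about: feeding `data` to a single-stream decoder silently yields only the first chunk's contents.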
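
The `Compression` levels in the listing map onto libbz2's `blockSize100k` parameter: level N compresses in blocks of N x 100 kB, which is why `best()` (level 9) trades memory and speed for ratio, and why the "exactly 900K bytes" figure in the multistream protip corresponds to one fully decoded level-9 block. A rough sketch of the level trade-off, again using Python's standard-library `bz2` rather than the Rust crate (its `compresslevel` accepts 1-9, matching the range from `fast()` to `best()`):

```python
import bz2

# ~1.7 MB of repetitive, highly compressible input.
data = b"the quick brown fox jumps over the lazy dog\n" * 40_000

fast = bz2.compress(data, compresslevel=1)  # counterpart of Compression::fast()
best = bz2.compress(data, compresslevel=9)  # counterpart of Compression::best()

# Level 1 splits this input into ~18 blocks of 100 kB; level 9 uses two
# 900 kB blocks, which generally compresses repetitive data at least as well.
print(len(best) <= len(fast))               # True for this input
print(bz2.decompress(best) == data)         # True: round trip is lossless
```

The coverage counts above show `fast()` is the only `Compression` constructor never exercised by this test suite (count 0).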