Coverage Report

Created: 2025-02-25 06:39

/rust/registry/src/index.crates.io-6f17d22bba15001f/zerocopy-0.8.18/src/lib.rs
Line
Count
Source
1
// Copyright 2018 The Fuchsia Authors
2
//
3
// Licensed under the 2-Clause BSD License <LICENSE-BSD or
4
// https://opensource.org/license/bsd-2-clause>, Apache License, Version 2.0
5
// <LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0>, or the MIT
6
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.
7
// This file may not be copied, modified, or distributed except according to
8
// those terms.
9
10
// After updating the following doc comment, make sure to run the following
11
// command to update `README.md` based on its contents:
12
//
13
//   cargo -q run --manifest-path tools/Cargo.toml -p generate-readme > README.md
14
15
//! *<span style="font-size: 100%; color:grey;">Need more out of zerocopy?
16
//! Submit a [customer request issue][customer-request-issue]!</span>*
17
//!
18
//! ***<span style="font-size: 140%">Fast, safe, <span
19
//! style="color:red;">compile error</span>. Pick two.</span>***
20
//!
21
//! Zerocopy makes zero-cost memory manipulation effortless. We write `unsafe`
22
//! so you don't have to.
23
//!
24
//! *Thanks for using zerocopy 0.8! For an overview of what has changed from 0.7,
25
//! check out our [release notes][release-notes], which include a step-by-step
26
//! guide for upgrading from 0.7.*
27
//!
28
//! *Have questions? Need help? Ask the maintainers on [GitHub][github-q-a] or
29
//! on [Discord][discord]!*
30
//!
31
//! [customer-request-issue]: https://github.com/google/zerocopy/issues/new/choose
32
//! [release-notes]: https://github.com/google/zerocopy/discussions/1680
33
//! [github-q-a]: https://github.com/google/zerocopy/discussions/categories/q-a
34
//! [discord]: https://discord.gg/MAvWH2R6zk
35
//!
36
//! # Overview
37
//!
38
//! ##### Conversion Traits
39
//!
40
//! Zerocopy provides four derivable traits for zero-cost conversions:
41
//! - [`TryFromBytes`] indicates that a type may safely be converted from
42
//!   certain byte sequences (conditional on runtime checks)
43
//! - [`FromZeros`] indicates that a sequence of zero bytes represents a valid
44
//!   instance of a type
45
//! - [`FromBytes`] indicates that a type may safely be converted from an
46
//!   arbitrary byte sequence
47
//! - [`IntoBytes`] indicates that a type may safely be converted *to* a byte
48
//!   sequence
49
//!
50
//! These traits support sized types, slices, and [slice DSTs][slice-dsts].
51
//!
52
//! [slice-dsts]: KnownLayout#dynamically-sized-types
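//!
//! For example, here is a minimal sketch of a zero-copy parse/serialize round
//! trip (the `PacketHeader` type and its fields are illustrative, not part of
//! zerocopy):
//!
//! ```
//! use zerocopy::*;
//! # use zerocopy_derive::*;
//!
//! #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
//! #[repr(C)]
//! struct PacketHeader {
//!     src_port: [u8; 2],
//!     dst_port: [u8; 2],
//! }
//!
//! // `FromBytes` lets us view existing bytes as a `PacketHeader`...
//! let bytes = [0u8, 80, 31, 144];
//! let header = PacketHeader::ref_from_bytes(&bytes[..]).unwrap();
//! assert_eq!(header.src_port, [0u8, 80]);
//!
//! // ...and `IntoBytes` lets us view the same `PacketHeader` as bytes.
//! assert_eq!(header.as_bytes(), &bytes[..]);
//! ```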
53
//!
54
//! ##### Marker Traits
55
//!
56
//! Zerocopy provides three derivable marker traits that do not provide any
57
//! functionality themselves, but are required to call certain methods provided
58
//! by the conversion traits:
59
//! - [`KnownLayout`] indicates that zerocopy can reason about certain layout
60
//!   qualities of a type
61
//! - [`Immutable`] indicates that a type is free from interior mutability,
62
//!   except by ownership or an exclusive (`&mut`) borrow
63
//! - [`Unaligned`] indicates that a type's alignment requirement is 1
64
//!
65
//! You should generally derive these marker traits whenever possible.
66
//!
67
//! ##### Conversion Macros
68
//!
69
//! Zerocopy provides six macros for safe casting between types:
70
//!
71
//! - ([`try_`][try_transmute])[`transmute`] (conditionally) converts a value of
72
//!   one type to a value of another type of the same size
73
//! - ([`try_`][try_transmute_mut])[`transmute_mut`] (conditionally) converts a
74
//!   mutable reference of one type to a mutable reference of another type of
75
//!   the same size
76
//! - ([`try_`][try_transmute_ref])[`transmute_ref`] (conditionally) converts a
77
//!   mutable or immutable reference of one type to an immutable reference of
78
//!   another type of the same size
79
//!
80
//! These macros perform *compile-time* size and alignment checks, meaning that
81
//! unconditional casts have zero cost at runtime. Conditional casts do not need
82
//! to validate size or alignment at runtime, but do need to validate contents.
83
//!
84
//! These macros cannot be used in generic contexts. For generic conversions,
85
//! use the methods defined by the [conversion traits](#conversion-traits).
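//!
//! For example, here is a minimal sketch of an unconditional cast using the
//! [`transmute`] macro:
//!
//! ```
//! use zerocopy::transmute;
//!
//! // `[u8; 4]` and `u32` have the same size, so this cast is checked entirely
//! // at compile time and costs nothing at runtime.
//! let bytes = [0u8, 0, 0, 1];
//! let n: u32 = transmute!(bytes);
//! assert_eq!(n, u32::from_ne_bytes([0, 0, 0, 1]));
//! ```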
86
//!
87
//! ##### Byteorder-Aware Numerics
88
//!
89
//! Zerocopy provides byte-order aware integer types that support these
90
//! conversions; see the [`byteorder`] module. These types are especially useful
91
//! for network parsing.
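//!
//! For example, here is a minimal sketch of a big-endian integer (the value is
//! illustrative):
//!
//! ```
//! use zerocopy::byteorder::{BigEndian, U32};
//!
//! // A `u32` that is stored as big-endian bytes regardless of the host's
//! // native endianness.
//! let length: U32<BigEndian> = U32::new(256);
//! assert_eq!(length.get(), 256);
//! ```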
92
//!
93
//! # Cargo Features
94
//!
95
//! - **`alloc`**
96
//!   By default, `zerocopy` is `no_std`. When the `alloc` feature is enabled,
97
//!   the `alloc` crate is added as a dependency, and some allocation-related
98
//!   functionality is added.
99
//!
100
//! - **`std`**
101
//!   By default, `zerocopy` is `no_std`. When the `std` feature is enabled, the
102
//!   `std` crate is added as a dependency (i.e., `no_std` is disabled), and
103
//!   support for some `std` types is added. `std` implies `alloc`.
104
//!
105
//! - **`derive`**
106
//!   Provides derives for the core marker traits via the `zerocopy-derive`
107
//!   crate. These derives are re-exported from `zerocopy`, so it is not
108
//!   necessary to depend on `zerocopy-derive` directly.
109
//!
110
//!   However, you may experience better compile times if you instead directly
111
//!   depend on both `zerocopy` and `zerocopy-derive` in your `Cargo.toml`,
112
//!   since doing so will allow Rust to compile these crates in parallel. To do
113
//!   so, do *not* enable the `derive` feature, and list both dependencies in
114
//!   your `Cargo.toml` with the same leading non-zero version number; e.g.:
115
//!
116
//!   ```toml
117
//!   [dependencies]
118
//!   zerocopy = "0.X"
119
//!   zerocopy-derive = "0.X"
120
//!   ```
121
//!
122
//!   To avoid the risk of [duplicate import errors][duplicate-import-errors] if
123
//!   one of your dependencies enables zerocopy's `derive` feature, import
124
//!   derives as `use zerocopy_derive::*` rather than by name (e.g., `use
125
//!   zerocopy_derive::FromBytes`).
126
//!
127
//! - **`simd`**
128
//!   When the `simd` feature is enabled, `FromZeros`, `FromBytes`, and
129
//!   `IntoBytes` impls are emitted for all stable SIMD types which exist on the
130
//!   target platform. Note that the layout of SIMD types is not yet stabilized,
131
//!   so these impls may be removed in the future if layout changes make them
132
//!   invalid. For more information, see the Unsafe Code Guidelines Reference
133
//!   page on the [layout of packed SIMD vectors][simd-layout].
134
//!
135
//! - **`simd-nightly`**
136
//!   Enables the `simd` feature and adds support for SIMD types which are only
137
//!   available on nightly. Since these types are unstable, support for any type
138
//!   may be removed at any point in the future.
139
//!
140
//! - **`float-nightly`**
141
//!   Adds support for the unstable `f16` and `f128` types. These types are
142
//!   not yet fully implemented and may not be supported on all platforms.
143
//!
144
//! [duplicate-import-errors]: https://github.com/google/zerocopy/issues/1587
145
//! [simd-layout]: https://rust-lang.github.io/unsafe-code-guidelines/layout/packed-simd-vectors.html
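//!
//! Features are enabled from `Cargo.toml`; for example, here is an
//! illustrative entry enabling the `std` feature (using the same `0.X` version
//! placeholder as above):
//!
//! ```toml
//! [dependencies]
//! zerocopy = { version = "0.X", features = ["std"] }
//! ```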
146
//!
147
//! # Security Ethos
148
//!
149
//! Zerocopy is expressly designed for use in security-critical contexts. We
150
//! strive to ensure that zerocopy code is sound under Rust's current
151
//! memory model, and *any future memory model*. We ensure this by:
152
//! - **...not 'guessing' about Rust's semantics.**
153
//!   We annotate `unsafe` code with a precise rationale for its soundness that
154
//!   cites a relevant section of Rust's official documentation. When Rust's
155
//!   documented semantics are unclear, we work with the Rust Operational
156
//!   Semantics Team to clarify Rust's documentation.
157
//! - **...rigorously testing our implementation.**
158
//!   We run tests using [Miri], ensuring that zerocopy is sound across a wide
159
//!   array of supported target platforms of varying endianness and pointer
160
//!   width, and across both current and experimental memory models of Rust.
161
//! - **...formally proving the correctness of our implementation.**
162
//!   We apply formal verification tools like [Kani][kani] to prove zerocopy's
163
//!   correctness.
164
//!
165
//! For more information, see our full [soundness policy].
166
//!
167
//! [Miri]: https://github.com/rust-lang/miri
168
//! [Kani]: https://github.com/model-checking/kani
169
//! [soundness policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#soundness
170
//!
171
//! # Relationship to Project Safe Transmute
172
//!
173
//! [Project Safe Transmute] is an official initiative of the Rust Project to
174
//! develop language-level support for safer transmutation. The Project consults
175
//! with crates like zerocopy to identify aspects of safer transmutation that
176
//! would benefit from compiler support, and has developed an [experimental,
177
//! compiler-supported analysis][mcp-transmutability] which determines whether,
178
//! for a given type, any value of that type may be soundly transmuted into
179
//! another type. Once this functionality is sufficiently mature, zerocopy
180
//! intends to replace its internal transmutability analysis (implemented by our
181
//! custom derives) with the compiler-supported one. This change will likely be
182
//! an implementation detail that is invisible to zerocopy's users.
183
//!
184
//! Project Safe Transmute will not replace the need for most of zerocopy's
185
//! higher-level abstractions. The experimental compiler analysis is a tool for
186
//! checking the soundness of `unsafe` code, not a tool to avoid writing
187
//! `unsafe` code altogether. For the foreseeable future, crates like zerocopy
188
//! will still be required in order to provide higher-level abstractions on top
189
//! of the building block provided by Project Safe Transmute.
190
//!
191
//! [Project Safe Transmute]: https://rust-lang.github.io/rfcs/2835-project-safe-transmute.html
192
//! [mcp-transmutability]: https://github.com/rust-lang/compiler-team/issues/411
193
//!
194
//! # MSRV
195
//!
196
//! See our [MSRV policy].
197
//!
198
//! [MSRV policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#msrv
199
//!
200
//! # Changelog
201
//!
202
//! Zerocopy uses [GitHub Releases].
203
//!
204
//! [GitHub Releases]: https://github.com/google/zerocopy/releases
205
206
// Sometimes we want to use lints which were added after our MSRV.
207
// `unknown_lints` is `warn` by default and we deny warnings in CI, so without
208
// this attribute, any unknown lint would cause a CI failure when testing with
209
// our MSRV.
210
#![allow(unknown_lints, non_local_definitions, unreachable_patterns)]
211
#![deny(renamed_and_removed_lints)]
212
#![deny(
213
    anonymous_parameters,
214
    deprecated_in_future,
215
    late_bound_lifetime_arguments,
216
    missing_copy_implementations,
217
    missing_debug_implementations,
218
    missing_docs,
219
    path_statements,
220
    patterns_in_fns_without_body,
221
    rust_2018_idioms,
222
    trivial_numeric_casts,
223
    unreachable_pub,
224
    unsafe_op_in_unsafe_fn,
225
    unused_extern_crates,
226
    // We intentionally choose not to deny `unused_qualifications`. When items
227
    // are added to the prelude (e.g., `core::mem::size_of`), this has the
228
    // consequence of making some uses trigger this lint on the latest toolchain
229
    // (e.g., `mem::size_of`), but fixing it (e.g. by replacing with `size_of`)
230
    // does not work on older toolchains.
231
    //
232
    // We tested a more complicated fix in #1413, but ultimately decided that,
233
    // since this lint is just a minor style lint, the complexity isn't worth it
234
    // - it's fine to occasionally have unused qualifications slip through,
235
    // especially since these do not affect our user-facing API in any way.
236
    variant_size_differences
237
)]
238
#![cfg_attr(
239
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
240
    deny(fuzzy_provenance_casts, lossy_provenance_casts)
241
)]
242
#![deny(
243
    clippy::all,
244
    clippy::alloc_instead_of_core,
245
    clippy::arithmetic_side_effects,
246
    clippy::as_underscore,
247
    clippy::assertions_on_result_states,
248
    clippy::as_conversions,
249
    clippy::correctness,
250
    clippy::dbg_macro,
251
    clippy::decimal_literal_representation,
252
    clippy::double_must_use,
253
    clippy::get_unwrap,
254
    clippy::indexing_slicing,
255
    clippy::missing_inline_in_public_items,
256
    clippy::missing_safety_doc,
257
    clippy::must_use_candidate,
258
    clippy::must_use_unit,
259
    clippy::obfuscated_if_else,
260
    clippy::perf,
261
    clippy::print_stdout,
262
    clippy::return_self_not_must_use,
263
    clippy::std_instead_of_core,
264
    clippy::style,
265
    clippy::suspicious,
266
    clippy::todo,
267
    clippy::undocumented_unsafe_blocks,
268
    clippy::unimplemented,
269
    clippy::unnested_or_patterns,
270
    clippy::unwrap_used,
271
    clippy::use_debug
272
)]
273
#![allow(clippy::type_complexity)]
274
#![deny(
275
    rustdoc::bare_urls,
276
    rustdoc::broken_intra_doc_links,
277
    rustdoc::invalid_codeblock_attributes,
278
    rustdoc::invalid_html_tags,
279
    rustdoc::invalid_rust_codeblocks,
280
    rustdoc::missing_crate_level_docs,
281
    rustdoc::private_intra_doc_links
282
)]
283
// In test code, it makes sense to weight more heavily towards concise, readable
284
// code over correct or debuggable code.
285
#![cfg_attr(any(test, kani), allow(
286
    // In tests, you get line numbers and have access to source code, so panic
287
    // messages are less important. You also often unwrap a lot, which would
288
    // make expect'ing instead very verbose.
289
    clippy::unwrap_used,
290
    // In tests, there's no harm to "panic risks" - the worst that can happen is
291
    // that your test will fail, and you'll fix it. By contrast, panic risks in
292
    // production code introduce the possibly of code panicking unexpectedly "in
293
    // the field".
294
    clippy::arithmetic_side_effects,
295
    clippy::indexing_slicing,
296
))]
297
#![cfg_attr(not(any(test, feature = "std")), no_std)]
298
#![cfg_attr(
299
    all(feature = "simd-nightly", any(target_arch = "x86", target_arch = "x86_64")),
300
    feature(stdarch_x86_avx512)
301
)]
302
#![cfg_attr(
303
    all(feature = "simd-nightly", target_arch = "arm"),
304
    feature(stdarch_arm_dsp, stdarch_arm_neon_intrinsics)
305
)]
306
#![cfg_attr(
307
    all(feature = "simd-nightly", any(target_arch = "powerpc", target_arch = "powerpc64")),
308
    feature(stdarch_powerpc)
309
)]
310
#![cfg_attr(feature = "float-nightly", feature(f16, f128))]
311
#![cfg_attr(doc_cfg, feature(doc_cfg))]
312
#![cfg_attr(
313
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
314
    feature(layout_for_ptr, coverage_attribute)
315
)]
316
317
// This is a hack to allow zerocopy-derive derives to work in this crate. They
318
// assume that zerocopy is linked as an extern crate, so they access items from
319
// it as `zerocopy::Xxx`. This makes that still work.
320
#[cfg(any(feature = "derive", test))]
321
extern crate self as zerocopy;
322
323
#[doc(hidden)]
324
#[macro_use]
325
pub mod util;
326
327
pub mod byte_slice;
328
pub mod byteorder;
329
mod deprecated;
330
// This module is `pub` so that zerocopy's error types and error handling
331
// documentation is grouped together in a cohesive module. In practice, we
332
// expect most users to use the re-export of `error`'s items to avoid identifier
333
// stuttering.
334
pub mod error;
335
mod impls;
336
#[doc(hidden)]
337
pub mod layout;
338
mod macros;
339
#[doc(hidden)]
340
pub mod pointer;
341
mod r#ref;
342
// TODO(#252): If we make this pub, come up with a better name.
343
mod wrappers;
344
345
pub use crate::byte_slice::*;
346
pub use crate::byteorder::*;
347
pub use crate::error::*;
348
pub use crate::r#ref::*;
349
pub use crate::wrappers::*;
350
351
use core::{
352
    cell::UnsafeCell,
353
    cmp::Ordering,
354
    fmt::{self, Debug, Display, Formatter},
355
    hash::Hasher,
356
    marker::PhantomData,
357
    mem::{self, ManuallyDrop, MaybeUninit as CoreMaybeUninit},
358
    num::{
359
        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
360
        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize, Wrapping,
361
    },
362
    ops::{Deref, DerefMut},
363
    ptr::{self, NonNull},
364
    slice,
365
};
366
367
#[cfg(feature = "std")]
368
use std::io;
369
370
use crate::pointer::{invariant, BecauseExclusive};
371
372
#[cfg(any(feature = "alloc", test))]
373
extern crate alloc;
374
#[cfg(any(feature = "alloc", test))]
375
use alloc::{boxed::Box, vec::Vec};
376
377
#[cfg(any(feature = "alloc", test, kani))]
378
use core::alloc::Layout;
379
380
// Used by `TryFromBytes::is_bit_valid`.
381
#[doc(hidden)]
382
pub use crate::pointer::{BecauseImmutable, Maybe, MaybeAligned, Ptr};
383
// Used by `KnownLayout`.
384
#[doc(hidden)]
385
pub use crate::layout::*;
386
387
// For each trait polyfill, as soon as the corresponding feature is stable, the
388
// polyfill import will be unused because method/function resolution will prefer
389
// the inherent method/function over a trait method/function. Thus, we suppress
390
// the `unused_imports` warning.
391
//
392
// See the documentation on `util::polyfills` for more information.
393
#[allow(unused_imports)]
394
use crate::util::polyfills::{self, NonNullExt as _, NumExt as _};
395
396
#[rustversion::nightly]
397
#[cfg(all(test, not(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS)))]
398
const _: () = {
399
    #[deprecated = "some tests may be skipped due to missing RUSTFLAGS=\"--cfg __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS\""]
400
    const _WARNING: () = ();
401
    #[warn(deprecated)]
402
    _WARNING
403
};
404
405
// These exist so that code which was written against the old names will get
406
// less confusing error messages when they upgrade to a more recent version of
407
// zerocopy. On our MSRV toolchain, the error messages read, for example:
408
//
409
//   error[E0603]: trait `FromZeroes` is private
410
//       --> examples/deprecated.rs:1:15
411
//        |
412
//   1    | use zerocopy::FromZeroes;
413
//        |               ^^^^^^^^^^ private trait
414
//        |
415
//   note: the trait `FromZeroes` is defined here
416
//       --> /Users/josh/workspace/zerocopy/src/lib.rs:1845:5
417
//        |
418
//   1845 | use FromZeros as FromZeroes;
419
//        |     ^^^^^^^^^^^^^^^^^^^^^^^
420
//
421
// The "note" provides enough context to make it easy to figure out how to fix
422
// the error.
423
#[allow(unused)]
424
use {FromZeros as FromZeroes, IntoBytes as AsBytes, Ref as LayoutVerified};
425
426
/// Implements [`KnownLayout`].
427
///
428
/// This derive analyzes various aspects of a type's layout that are needed for
429
/// some of zerocopy's APIs. It can be applied to structs, enums, and unions;
430
/// e.g.:
431
///
432
/// ```
433
/// # use zerocopy_derive::KnownLayout;
434
/// #[derive(KnownLayout)]
435
/// struct MyStruct {
436
/// # /*
437
///     ...
438
/// # */
439
/// }
440
///
441
/// #[derive(KnownLayout)]
442
/// enum MyEnum {
443
/// #   V00,
444
/// # /*
445
///     ...
446
/// # */
447
/// }
448
///
449
/// #[derive(KnownLayout)]
450
/// union MyUnion {
451
/// #   variant: u8,
452
/// # /*
453
///     ...
454
/// # */
455
/// }
456
/// ```
457
///
458
/// # Limitations
459
///
460
/// This derive cannot currently be applied to unsized structs without an
461
/// explicit `repr` attribute.
462
///
463
/// Some invocations of this derive run afoul of a [known bug] in Rust's type
464
/// privacy checker. For example, this code:
465
///
466
/// ```compile_fail,E0446
467
/// use zerocopy::*;
468
/// # use zerocopy_derive::*;
469
///
470
/// #[derive(KnownLayout)]
471
/// #[repr(C)]
472
/// pub struct PublicType {
473
///     leading: Foo,
474
///     trailing: Bar,
475
/// }
476
///
477
/// #[derive(KnownLayout)]
478
/// struct Foo;
479
///
480
/// #[derive(KnownLayout)]
481
/// struct Bar;
482
/// ```
483
///
484
/// ...results in a compilation error:
485
///
486
/// ```text
487
/// error[E0446]: private type `Bar` in public interface
488
///  --> examples/bug.rs:3:10
489
///    |
490
/// 3  | #[derive(KnownLayout)]
491
///    |          ^^^^^^^^^^^ can't leak private type
492
/// ...
493
/// 14 | struct Bar;
494
///    | ---------- `Bar` declared as private
495
///    |
496
///    = note: this error originates in the derive macro `KnownLayout` (in Nightly builds, run with -Z macro-backtrace for more info)
497
/// ```
498
///
499
/// This issue arises when `#[derive(KnownLayout)]` is applied to `repr(C)`
500
/// structs whose trailing field type is less public than the enclosing struct.
501
///
502
/// To work around this, mark the trailing field type `pub` and annotate it with
503
/// `#[doc(hidden)]`; e.g.:
504
///
505
/// ```no_run
506
/// use zerocopy::*;
507
/// # use zerocopy_derive::*;
508
///
509
/// #[derive(KnownLayout)]
510
/// #[repr(C)]
511
/// pub struct PublicType {
512
///     leading: Foo,
513
///     trailing: Bar,
514
/// }
515
///
516
/// #[derive(KnownLayout)]
517
/// struct Foo;
518
///
519
/// #[doc(hidden)]
520
/// #[derive(KnownLayout)]
521
/// pub struct Bar; // <- `Bar` is now also `pub`
522
/// ```
523
///
524
/// [known bug]: https://github.com/rust-lang/rust/issues/45713
525
#[cfg(any(feature = "derive", test))]
526
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
527
pub use zerocopy_derive::KnownLayout;
528
529
/// Indicates that zerocopy can reason about certain aspects of a type's layout.
530
///
531
/// This trait is required by many of zerocopy's APIs. It supports sized types,
532
/// slices, and [slice DSTs](#dynamically-sized-types).
533
///
534
/// # Implementation
535
///
536
/// **Do not implement this trait yourself!** Instead, use
537
/// [`#[derive(KnownLayout)]`][derive]; e.g.:
538
///
539
/// ```
540
/// # use zerocopy_derive::KnownLayout;
541
/// #[derive(KnownLayout)]
542
/// struct MyStruct {
543
/// # /*
544
///     ...
545
/// # */
546
/// }
547
///
548
/// #[derive(KnownLayout)]
549
/// enum MyEnum {
550
/// # /*
551
///     ...
552
/// # */
553
/// }
554
///
555
/// #[derive(KnownLayout)]
556
/// union MyUnion {
557
/// #   variant: u8,
558
/// # /*
559
///     ...
560
/// # */
561
/// }
562
/// ```
563
///
564
/// This derive performs a sophisticated analysis to deduce the layout
565
/// characteristics of types. You **must** implement this trait via the derive.
566
///
567
/// # Dynamically-sized types
568
///
569
/// `KnownLayout` supports slice-based dynamically sized types ("slice DSTs").
570
///
571
/// A slice DST is a type whose trailing field is either a slice or another
572
/// slice DST, rather than a type with fixed size. For example:
573
///
574
/// ```
575
/// #[repr(C)]
576
/// struct PacketHeader {
577
/// # /*
578
///     ...
579
/// # */
580
/// }
581
///
582
/// #[repr(C)]
583
/// struct Packet {
584
///     header: PacketHeader,
585
///     body: [u8],
586
/// }
587
/// ```
588
///
589
/// It can be useful to think of slice DSTs as a generalization of slices - in
590
/// other words, a normal slice is just the special case of a slice DST with
591
/// zero leading fields. In particular:
592
/// - Like slices, slice DSTs can have different lengths at runtime
593
/// - Like slices, slice DSTs cannot be passed by-value, but only by reference
594
///   or via other indirection such as `Box`
595
/// - Like slices, a reference (or `Box`, or other pointer type) to a slice DST
596
///   encodes the number of elements in the trailing slice field
597
///
598
/// ## Slice DST layout
599
///
600
/// Just like other composite Rust types, the layout of a slice DST is not
601
/// well-defined unless it is specified using an explicit `#[repr(...)]`
602
/// attribute such as `#[repr(C)]`. [Other representations are
603
/// supported][reprs], but in this section, we'll use `#[repr(C)]` as our
604
/// example.
605
///
606
/// A `#[repr(C)]` slice DST is laid out [just like sized `#[repr(C)]`
607
/// types][repr-c-structs], but the presence of a variable-length field
608
/// introduces the possibility of *dynamic padding*. In particular, it may be
609
/// necessary to add trailing padding *after* the trailing slice field in order
610
/// to satisfy the outer type's alignment, and the amount of padding required
611
/// may be a function of the length of the trailing slice field. This is just a
612
/// natural consequence of the normal `#[repr(C)]` rules applied to slice DSTs,
613
/// but it can result in surprising behavior. For example, consider the
614
/// following type:
615
///
616
/// ```
617
/// #[repr(C)]
618
/// struct Foo {
619
///     a: u32,
620
///     b: u8,
621
///     z: [u16],
622
/// }
623
/// ```
624
///
625
/// Assuming that `u32` has alignment 4 (this is not true on all platforms),
626
/// then `Foo` has alignment 4 as well. Here is the smallest possible value for
627
/// `Foo`:
628
///
629
/// ```text
630
/// byte offset | 01234567
631
///       field | aaaab---
632
///                    ><
633
/// ```
634
///
635
/// In this value, `z` has length 0. Abiding by `#[repr(C)]`, the lowest offset
636
/// that we can place `z` at is 5, but since `z` has alignment 2, we need to
637
/// round up to offset 6. This means that there is one byte of padding between
638
/// `b` and `z`, then 0 bytes of `z` itself (denoted `><` in this diagram), and
639
/// then two bytes of padding after `z` in order to satisfy the overall
640
/// alignment of `Foo`. The size of this instance is 8 bytes.
641
///
642
/// What about if `z` has length 1?
643
///
644
/// ```text
645
/// byte offset | 01234567
646
///       field | aaaab-zz
647
/// ```
648
///
649
/// In this instance, `z` has length 1, and thus takes up 2 bytes. That means
650
/// that we no longer need padding after `z` in order to satisfy `Foo`'s
651
/// alignment. We've now seen two different values of `Foo` with two different
652
/// lengths of `z`, but they both have the same size - 8 bytes.
653
///
654
/// What about if `z` has length 2?
655
///
656
/// ```text
657
/// byte offset | 012345678901
658
///       field | aaaab-zzzz--
659
/// ```
660
///
661
/// Now `z` has length 2, and thus takes up 4 bytes. This brings our un-padded
662
/// size to 10, and so we now need another 2 bytes of padding after `z` to
663
/// satisfy `Foo`'s alignment.
664
///
665
/// Again, all of this is just a logical consequence of the `#[repr(C)]` rules
666
/// applied to slice DSTs, but it can be surprising that the amount of trailing
667
/// padding becomes a function of the trailing slice field's length, and thus
668
/// can only be computed at runtime.
669
///
670
/// [reprs]: https://doc.rust-lang.org/reference/type-layout.html#representations
671
/// [repr-c-structs]: https://doc.rust-lang.org/reference/type-layout.html#reprc-structs
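///
/// Here is a minimal sketch of this size computation for the `Foo` example
/// above (assuming, as above, that `u32` has alignment 4, so the trailing
/// slice starts at offset 6 and `Foo` has alignment 4; `foo_size` is a
/// hypothetical helper, not a zerocopy API):
///
/// ```
/// // Size of a `Foo` whose trailing slice `z` has `elems` elements: the
/// // unpadded size (slice offset plus slice bytes), rounded up to `Foo`'s
/// // alignment of 4.
/// fn foo_size(elems: usize) -> usize {
///     let unpadded = 6 + 2 * elems;
///     (unpadded + 3) / 4 * 4
/// }
///
/// assert_eq!(foo_size(0), 8);  // two bytes of trailing padding
/// assert_eq!(foo_size(1), 8);  // no trailing padding
/// assert_eq!(foo_size(2), 12); // two bytes of trailing padding again
/// ```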
672
///
673
/// ## What is a valid size?
674
///
675
/// There are two places in zerocopy's API that we refer to "a valid size" of a
676
/// type. In normal casts or conversions, where the source is a byte slice, we
677
/// need to know whether the source byte slice is a valid size of the
678
/// destination type. In prefix or suffix casts, we need to know whether *there
679
/// exists* a valid size of the destination type which fits in the source byte
680
/// slice and, if so, what the largest such size is.
681
///
682
/// As outlined above, a slice DST's size is defined by the number of elements
683
/// in its trailing slice field. However, there is not necessarily a 1-to-1
684
/// mapping between trailing slice field length and overall size. As we saw in
685
/// the previous section with the type `Foo`, instances with both 0 and 1
686
/// elements in the trailing `z` field result in a `Foo` whose size is 8 bytes.
687
///
688
/// When we say "x is a valid size of `T`", we mean one of two things:
689
/// - If `T: Sized`, then we mean that `x == size_of::<T>()`
690
/// - If `T` is a slice DST, then we mean that there exists a `len` such that the instance of
691
///   `T` with `len` trailing slice elements has size `x`
692
///
693
/// When we say "largest possible size of `T` that fits in a byte slice", we
694
/// mean one of two things:
695
/// - If `T: Sized`, then we mean `size_of::<T>()` if the byte slice is at least
696
///   `size_of::<T>()` bytes long
697
/// - If `T` is a slice DST, then we mean to consider all values, `len`, such
698
///   that the instance of `T` with `len` trailing slice elements fits in the
699
///   byte slice, and to choose the largest such `len`, if any
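///
/// Continuing the hypothetical `foo_size` sketch above (a worked illustration,
/// not the output of any zerocopy API):
///
/// ```
/// # fn foo_size(elems: usize) -> usize { (6 + 2 * elems + 3) / 4 * 4 }
/// // 8 and 12 are valid sizes of `Foo`, but 9, 10, and 11 are not.
/// assert_eq!(foo_size(0), 8);
/// assert_eq!(foo_size(2), 12);
/// // The largest possible size of `Foo` that fits in an 11-byte slice is 8,
/// // achieved with either 0 or 1 trailing elements.
/// assert!(foo_size(1) <= 11 && foo_size(2) > 11);
/// ```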
700
///
701
///
702
/// # Safety
703
///
704
/// This trait does not convey any safety guarantees to code outside this crate.
705
///
706
/// You must not rely on the `#[doc(hidden)]` internals of `KnownLayout`. Future
707
/// releases of zerocopy may make backwards-breaking changes to these items,
708
/// including changes that only affect soundness, which may cause code which
709
/// uses those items to silently become unsound.
710
///
711
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::KnownLayout")]
712
#[cfg_attr(
713
    not(feature = "derive"),
714
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.KnownLayout.html"),
715
)]
716
#[cfg_attr(
717
    zerocopy_diagnostic_on_unimplemented_1_78_0,
718
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(KnownLayout)]` to `{Self}`")
719
)]
720
pub unsafe trait KnownLayout {
721
    // The `Self: Sized` bound makes it so that `KnownLayout` can still be
722
    // object safe. It's not currently object safe thanks to `const LAYOUT`, and
723
    // it likely won't be in the future, but there's no reason not to be
724
    // forwards-compatible with object safety.
725
    #[doc(hidden)]
726
    fn only_derive_is_allowed_to_implement_this_trait()
727
    where
728
        Self: Sized;
729
730
    /// The type of metadata stored in a pointer to `Self`.
731
    ///
732
    /// This is `()` for sized types and `usize` for slice DSTs.
733
    type PointerMetadata: PointerMetadata;
734
735
    /// A maybe-uninitialized analog of `Self`
736
    ///
737
    /// # Safety
738
    ///
739
    /// `Self::LAYOUT` and `Self::MaybeUninit::LAYOUT` are identical.
740
    /// `Self::MaybeUninit` admits uninitialized bytes in all positions.
741
    #[doc(hidden)]
742
    type MaybeUninit: ?Sized + KnownLayout<PointerMetadata = Self::PointerMetadata>;
743
744
    /// The layout of `Self`.
745
    ///
746
    /// # Safety
747
    ///
748
    /// Callers may assume that `LAYOUT` accurately reflects the layout of
749
    /// `Self`. In particular:
750
    /// - `LAYOUT.align` is equal to `Self`'s alignment
751
    /// - If `Self: Sized`, then `LAYOUT.size_info == SizeInfo::Sized { size }`
752
    ///   where `size == size_of::<Self>()`
753
    /// - If `Self` is a slice DST, then `LAYOUT.size_info ==
754
    ///   SizeInfo::SliceDst(slice_layout)` where:
755
    ///   - The size, `size`, of an instance of `Self` with `elems` trailing
756
    ///     slice elements is equal to `slice_layout.offset +
757
    ///     slice_layout.elem_size * elems` rounded up to the nearest multiple
758
    ///     of `LAYOUT.align`
759
    ///   - For such an instance, any bytes in the range `[slice_layout.offset +
760
    ///     slice_layout.elem_size * elems, size)` are padding and must not be
761
    ///     assumed to be initialized
762
    #[doc(hidden)]
763
    const LAYOUT: DstLayout;
764
765
    /// SAFETY: The returned pointer has the same address and provenance as
766
    /// `bytes`. If `Self` is a DST, the returned pointer's referent has `elems`
767
    /// elements in its trailing slice.
768
    #[doc(hidden)]
769
    fn raw_from_ptr_len(bytes: NonNull<u8>, meta: Self::PointerMetadata) -> NonNull<Self>;
770
771
    /// Extracts the metadata from a pointer to `Self`.
772
    ///
773
    /// # Safety
774
    ///
775
    /// `pointer_to_metadata` always returns the correct metadata stored in
776
    /// `ptr`.
777
    #[doc(hidden)]
778
    fn pointer_to_metadata(ptr: *mut Self) -> Self::PointerMetadata;
779
780
    /// Computes the length of the byte range addressed by `ptr`.
781
    ///
782
    /// Returns `None` if the resulting length would not fit in a `usize`.
783
    ///
784
    /// # Safety
785
    ///
786
    /// Callers may assume that `size_of_val_raw` always returns the correct
787
    /// size.
788
    ///
789
    /// Callers may assume that, if `ptr` addresses a byte range whose length
790
    /// fits in a `usize`, this will return `Some`.
791
    #[doc(hidden)]
792
    #[must_use]
793
    #[inline(always)]
794
    fn size_of_val_raw(ptr: NonNull<Self>) -> Option<usize> {
795
        let meta = Self::pointer_to_metadata(ptr.as_ptr());
796
        // SAFETY: `size_for_metadata` promises to only return `None` if the
797
        // resulting size would not fit in a `usize`.
798
        meta.size_for_metadata(Self::LAYOUT)
799
    }
800
}
801
802
/// The metadata associated with a [`KnownLayout`] type.
803
#[doc(hidden)]
804
pub trait PointerMetadata: Copy + Eq + Debug {
805
    /// Constructs a `Self` from an element count.
806
    ///
807
    /// If `Self = ()`, this returns `()`. If `Self = usize`, this returns
808
    /// `elems`. No other types are currently supported.
809
    fn from_elem_count(elems: usize) -> Self;
810
811
    /// Computes the size of the object with the given layout and pointer
812
    /// metadata.
813
    ///
814
    /// # Panics
815
    ///
816
    /// If `Self = ()`, `layout` must describe a sized type. If `Self = usize`,
817
    /// `layout` must describe a slice DST. Otherwise, `size_for_metadata` may
818
    /// panic.
819
    ///
820
    /// # Safety
821
    ///
822
    /// `size_for_metadata` promises to only return `None` if the resulting size
823
    /// would not fit in a `usize`.
824
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize>;
825
}
826
827
impl PointerMetadata for () {
828
    #[inline]
829
    #[allow(clippy::unused_unit)]
830
    fn from_elem_count(_elems: usize) -> () {}
831
832
    #[inline]
833
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
834
        match layout.size_info {
835
            SizeInfo::Sized { size } => Some(size),
836
            // NOTE: This branch is unreachable, but we return `None` rather
837
            // than `unreachable!()` to avoid generating panic paths.
838
            SizeInfo::SliceDst(_) => None,
839
        }
840
    }
841
}
842
843
impl PointerMetadata for usize {
844
    #[inline]
845
    fn from_elem_count(elems: usize) -> usize {
846
        elems
847
    }
848
849
    #[inline]
850
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
851
        match layout.size_info {
852
            SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }) => {
853
                let slice_len = elem_size.checked_mul(*self)?;
854
                let without_padding = offset.checked_add(slice_len)?;
855
                without_padding.checked_add(util::padding_needed_for(without_padding, layout.align))
856
            }
857
            // NOTE: This branch is unreachable, but we return `None` rather
858
            // than `unreachable!()` to avoid generating panic paths.
859
            SizeInfo::Sized { .. } => None,
860
        }
861
    }
862
}
863
864
// SAFETY: Delegates safety to `DstLayout::for_slice`.
865
unsafe impl<T> KnownLayout for [T] {
866
    #[allow(clippy::missing_inline_in_public_items)]
867
    #[cfg_attr(
868
        all(coverage_nightly, __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS),
869
        coverage(off)
870
    )]
871
    fn only_derive_is_allowed_to_implement_this_trait()
872
    where
873
        Self: Sized,
874
    {
875
    }
876
877
    type PointerMetadata = usize;
878
879
    // SAFETY: `CoreMaybeUninit<T>::LAYOUT` and `T::LAYOUT` are identical
880
    // because `CoreMaybeUninit<T>` has the same size and alignment as `T` [1].
881
    // Consequently, `[CoreMaybeUninit<T>]::LAYOUT` and `[T]::LAYOUT` are
882
    // identical, because they both lack a fixed-sized prefix and because they
883
    // inherit the alignments of their inner element type (which are identical)
884
    // [2][3].
885
    //
886
    // `[CoreMaybeUninit<T>]` admits uninitialized bytes at all positions
887
    // because `CoreMaybeUninit<T>` admits uninitialized bytes at all positions
888
    // and because the inner elements of `[CoreMaybeUninit<T>]` are laid out
889
    // back-to-back [2][3].
890
    //
891
    // [1] Per https://doc.rust-lang.org/1.81.0/std/mem/union.MaybeUninit.html#layout-1:
892
    //
893
    //   `MaybeUninit<T>` is guaranteed to have the same size, alignment, and ABI as
894
    //   `T`
895
    //
896
    // [2] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#slice-layout:
897
    //
898
    //   Slices have the same layout as the section of the array they slice.
899
    //
900
    // [3] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#array-layout:
901
    //
902
    //   An array of `[T; N]` has a size of `size_of::<T>() * N` and the same
903
    //   alignment of `T`. Arrays are laid out so that the zero-based `nth`
904
    //   element of the array is offset from the start of the array by `n *
905
    //   size_of::<T>()` bytes.
906
    type MaybeUninit = [CoreMaybeUninit<T>];
907
908
    const LAYOUT: DstLayout = DstLayout::for_slice::<T>();
909
910
    // SAFETY: `.cast` preserves address and provenance. The returned pointer
911
    // refers to an object with `elems` elements by construction.
912
    #[inline(always)]
913
    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
914
        // TODO(#67): Remove this allow. See NonNullExt for more details.
915
        #[allow(unstable_name_collisions)]
916
        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
917
    }
918
919
    #[inline(always)]
920
    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
921
        #[allow(clippy::as_conversions)]
922
        let slc = ptr as *const [()];
923
924
        // SAFETY:
925
        // - `()` has alignment 1, so `slc` is trivially aligned.
926
        // - `slc` was derived from a non-null pointer.
927
        // - The size is 0 regardless of the length, so it is sound to
928
        //   materialize a reference regardless of location.
929
        // - By invariant, `self.ptr` has valid provenance.
930
        let slc = unsafe { &*slc };
931
932
        // This is correct because the preceding `as` cast preserves the number
933
        // of slice elements. [1]
934
        //
935
        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
936
        //
937
        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
938
        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
939
        //   elements in this slice. Casts between these raw pointer types
940
        //   preserve the number of elements. ... The same holds for `str` and
941
        //   any compound type whose unsized tail is a slice type, such as
942
        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
943
        slc.len()
944
    }
945
}
946
947
#[rustfmt::skip]
948
impl_known_layout!(
949
    (),
950
    u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize, f32, f64,
951
    bool, char,
952
    NonZeroU8, NonZeroI8, NonZeroU16, NonZeroI16, NonZeroU32, NonZeroI32,
953
    NonZeroU64, NonZeroI64, NonZeroU128, NonZeroI128, NonZeroUsize, NonZeroIsize
954
);
955
#[rustfmt::skip]
956
#[cfg(feature = "float-nightly")]
957
impl_known_layout!(
958
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
959
    f16,
960
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
961
    f128
962
);
963
#[rustfmt::skip]
964
impl_known_layout!(
965
    T         => Option<T>,
966
    T: ?Sized => PhantomData<T>,
967
    T         => Wrapping<T>,
968
    T         => CoreMaybeUninit<T>,
969
    T: ?Sized => *const T,
970
    T: ?Sized => *mut T,
971
    T: ?Sized => &'_ T,
972
    T: ?Sized => &'_ mut T,
973
);
974
impl_known_layout!(const N: usize, T => [T; N]);
975
976
safety_comment! {
977
    /// SAFETY:
978
    /// `str`, `ManuallyDrop<[T]>` [1], and `UnsafeCell<T>` [2] have the same
979
    /// representations as `[u8]`, `[T]`, and `T`, respectively. `str` has
980
    /// different bit validity than `[u8]`, but that doesn't affect the
981
    /// soundness of this impl.
982
    ///
983
    /// [1] Per https://doc.rust-lang.org/nightly/core/mem/struct.ManuallyDrop.html:
984
    ///
985
    ///   `ManuallyDrop<T>` is guaranteed to have the same layout and bit
986
    ///   validity as `T`
987
    ///
988
    /// [2] Per https://doc.rust-lang.org/core/cell/struct.UnsafeCell.html#memory-layout:
989
    ///
990
    ///   `UnsafeCell<T>` has the same in-memory representation as its inner
991
    ///   type `T`.
992
    ///
993
    /// TODO(#429):
994
    /// -  Add quotes from docs.
995
    /// -  Once [1] (added in
996
    /// https://github.com/rust-lang/rust/pull/115522) is available on stable,
997
    /// quote the stable docs instead of the nightly docs.
998
    unsafe_impl_known_layout!(#[repr([u8])] str);
999
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] ManuallyDrop<T>);
1000
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] UnsafeCell<T>);
1001
}
1002
1003
safety_comment! {
1004
    /// SAFETY:
1005
    /// - By consequence of the invariant on `T::MaybeUninit` that `T::LAYOUT`
1006
    ///   and `T::MaybeUninit::LAYOUT` are equal, `T` and `T::MaybeUninit`
1007
    ///   have the same:
1008
    ///   - Fixed prefix size
1009
    ///   - Alignment
1010
    ///   - (For DSTs) trailing slice element size
1011
    /// - By consequence of the above, referents `T::MaybeUninit` and `T`
1012
    ///   require the same kind of pointer metadata, and thus it is valid to
1013
    ///   perform an `as` cast from `*mut T` to `*mut T::MaybeUninit`, and this
1014
    ///   operation preserves referent size (i.e., `size_of_val_raw`).
1015
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T::MaybeUninit)] MaybeUninit<T>);
1016
}
1017
1018
/// Analyzes whether a type is [`FromZeros`].
1019
///
1020
/// This derive analyzes, at compile time, whether the annotated type satisfies
1021
/// the [safety conditions] of `FromZeros` and implements `FromZeros` and its
1022
/// supertraits if it is sound to do so. This derive can be applied to structs,
1023
/// enums, and unions; e.g.:
1024
///
1025
/// ```
1026
/// # use zerocopy_derive::{FromZeros, Immutable};
1027
/// #[derive(FromZeros)]
1028
/// struct MyStruct {
1029
/// # /*
1030
///     ...
1031
/// # */
1032
/// }
1033
///
1034
/// #[derive(FromZeros)]
1035
/// #[repr(u8)]
1036
/// enum MyEnum {
1037
/// #   Variant0,
1038
/// # /*
1039
///     ...
1040
/// # */
1041
/// }
1042
///
1043
/// #[derive(FromZeros, Immutable)]
1044
/// union MyUnion {
1045
/// #   variant: u8,
1046
/// # /*
1047
///     ...
1048
/// # */
1049
/// }
1050
/// ```
1051
///
1052
/// [safety conditions]: trait@FromZeros#safety
1053
///
1054
/// # Analysis
1055
///
1056
/// *This section describes, roughly, the analysis performed by this derive to
1057
/// determine whether it is sound to implement `FromZeros` for a given type.
1058
/// Unless you are modifying the implementation of this derive, or attempting to
1059
/// manually implement `FromZeros` for a type yourself, you don't need to read
1060
/// this section.*
1061
///
1062
/// If a type has the following properties, then this derive can implement
1063
/// `FromZeros` for that type:
1064
///
1065
/// - If the type is a struct, all of its fields must be `FromZeros`.
1066
/// - If the type is an enum:
1067
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
1068
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
1069
///   - It must have a variant with a discriminant/tag of `0`. See
1070
///     [the reference] for a description of how discriminant values are
1071
///     specified.
1072
///   - The fields of that variant must be `FromZeros`.
1073
///
1074
/// This analysis is subject to change. Unsafe code may *only* rely on the
1075
/// documented [safety conditions] of `FromZeros`, and must *not* rely on the
1076
/// implementation details of this derive.
1077
///
1078
/// [the reference]: https://doc.rust-lang.org/reference/items/enumerations.html#custom-discriminant-values-for-fieldless-enumerations
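///
/// For example, here is a minimal sketch of an enum accepted by this analysis
/// (the type and variant names are illustrative):
///
/// ```
/// # use zerocopy_derive::FromZeros;
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum Status {
///     // The variant with discriminant `0`; it has no fields, so it is
///     // trivially `FromZeros`.
///     Idle = 0,
///     Busy = 1,
/// }
/// ```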
1079
///
1080
/// ## Why isn't an explicit representation required for structs?
1081
///
1082
/// Neither this derive, nor the [safety conditions] of `FromZeros`, requires
1083
/// that structs are marked with `#[repr(C)]`.
1084
///
1085
/// Per the [Rust reference][reference],
1086
///
1087
/// > The representation of a type can change the padding between fields, but
1088
/// > does not change the layout of the fields themselves.
1089
///
1090
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
1091
///
1092
/// Since the layout of structs only consists of padding bytes and field bytes,
1093
/// a struct is soundly `FromZeros` if:
1094
/// 1. its padding is soundly `FromZeros`, and
1095
/// 2. its fields are soundly `FromZeros`.
1096
///
1097
/// The answer to the first question is always yes: padding bytes do not have
1098
/// any validity constraints. A [discussion] of this question in the Unsafe Code
1099
/// Guidelines Working Group concluded that it would be virtually unimaginable
1100
/// for future versions of rustc to add validity constraints to padding bytes.
1101
///
1102
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
1103
///
1104
/// Whether a struct is soundly `FromZeros` therefore solely depends on whether
1105
/// its fields are `FromZeros`.
1106
// TODO(#146): Document why we don't require an enum to have an explicit `repr`
1107
// attribute.
1108
#[cfg(any(feature = "derive", test))]
1109
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1110
pub use zerocopy_derive::FromZeros;
1111
1112
/// Analyzes whether a type is [`Immutable`].
1113
///
1114
/// This derive analyzes, at compile time, whether the annotated type satisfies
1115
/// the [safety conditions] of `Immutable` and implements `Immutable` if it is
1116
/// sound to do so. This derive can be applied to structs, enums, and unions;
1117
/// e.g.:
1118
///
1119
/// ```
1120
/// # use zerocopy_derive::Immutable;
1121
/// #[derive(Immutable)]
1122
/// struct MyStruct {
1123
/// # /*
1124
///     ...
1125
/// # */
1126
/// }
1127
///
1128
/// #[derive(Immutable)]
1129
/// enum MyEnum {
1130
/// #   Variant0,
1131
/// # /*
1132
///     ...
1133
/// # */
1134
/// }
1135
///
1136
/// #[derive(Immutable)]
1137
/// union MyUnion {
1138
/// #   variant: u8,
1139
/// # /*
1140
///     ...
1141
/// # */
1142
/// }
1143
/// ```
1144
///
1145
/// # Analysis
1146
///
1147
/// *This section describes, roughly, the analysis performed by this derive to
1148
/// determine whether it is sound to implement `Immutable` for a given type.
1149
/// Unless you are modifying the implementation of this derive, you don't need
1150
/// to read this section.*
1151
///
1152
/// If a type has the following properties, then this derive can implement
1153
/// `Immutable` for that type:
1154
///
1155
/// - All fields must be `Immutable`.
1156
///
1157
/// This analysis is subject to change. Unsafe code may *only* rely on the
1158
/// documented [safety conditions] of `Immutable`, and must *not* rely on the
1159
/// implementation details of this derive.
1160
///
1161
/// [safety conditions]: trait@Immutable#safety
1162
#[cfg(any(feature = "derive", test))]
1163
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1164
pub use zerocopy_derive::Immutable;
1165
1166
/// Types which are free from interior mutability.
1167
///
1168
/// `T: Immutable` indicates that `T` does not permit interior mutation, except
1169
/// by ownership or an exclusive (`&mut`) borrow.
1170
///
1171
/// # Implementation
1172
///
1173
/// **Do not implement this trait yourself!** Instead, use
1174
/// [`#[derive(Immutable)]`][derive] (requires the `derive` Cargo feature);
1175
/// e.g.:
1176
///
1177
/// ```
1178
/// # use zerocopy_derive::Immutable;
1179
/// #[derive(Immutable)]
1180
/// struct MyStruct {
1181
/// # /*
1182
///     ...
1183
/// # */
1184
/// }
1185
///
1186
/// #[derive(Immutable)]
1187
/// enum MyEnum {
1188
/// # /*
1189
///     ...
1190
/// # */
1191
/// }
1192
///
1193
/// #[derive(Immutable)]
1194
/// union MyUnion {
1195
/// #   variant: u8,
1196
/// # /*
1197
///     ...
1198
/// # */
1199
/// }
1200
/// ```
1201
///
1202
/// This derive performs a sophisticated, compile-time safety analysis to
1203
/// determine whether a type is `Immutable`.
1204
///
1205
/// # Safety
1206
///
1207
/// Unsafe code outside of this crate must not make any assumptions about `T`
1208
/// based on `T: Immutable`. We reserve the right to relax the requirements for
1209
/// `Immutable` in the future, and if unsafe code outside of this crate makes
1210
/// assumptions based on `T: Immutable`, future relaxations may cause that code
1211
/// to become unsound.
1212
///
1213
// # Safety (Internal)
1214
//
1215
// If `T: Immutable`, unsafe code *inside of this crate* may assume that, given
1216
// `t: &T`, `t` does not contain any [`UnsafeCell`]s at any byte location
1217
// within the byte range addressed by `t`. This includes ranges of length 0
1218
// (e.g., `UnsafeCell<()>` and `[UnsafeCell<u8>; 0]`). If a type implements
1219
// `Immutable` which violates this assumption, it may cause this crate to
1220
// exhibit [undefined behavior].
1221
//
1222
// [`UnsafeCell`]: core::cell::UnsafeCell
1223
// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
1224
#[cfg_attr(
1225
    feature = "derive",
1226
    doc = "[derive]: zerocopy_derive::Immutable",
1227
    doc = "[derive-analysis]: zerocopy_derive::Immutable#analysis"
1228
)]
1229
#[cfg_attr(
1230
    not(feature = "derive"),
1231
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html"),
1232
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html#analysis"),
1233
)]
1234
#[cfg_attr(
1235
    zerocopy_diagnostic_on_unimplemented_1_78_0,
1236
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Immutable)]` to `{Self}`")
1237
)]
1238
pub unsafe trait Immutable {
1239
    // The `Self: Sized` bound makes it so that `Immutable` is still object
1240
    // safe.
1241
    #[doc(hidden)]
1242
    fn only_derive_is_allowed_to_implement_this_trait()
1243
    where
1244
        Self: Sized;
1245
}
1246
1247
/// Implements [`TryFromBytes`].
1248
///
1249
/// This derive synthesizes the runtime checks required to check whether a
1250
/// sequence of initialized bytes corresponds to a valid instance of a type.
1251
/// This derive can be applied to structs, enums, and unions; e.g.:
1252
///
1253
/// ```
1254
/// # use zerocopy_derive::{TryFromBytes, Immutable};
1255
/// #[derive(TryFromBytes)]
1256
/// struct MyStruct {
1257
/// # /*
1258
///     ...
1259
/// # */
1260
/// }
1261
///
1262
/// #[derive(TryFromBytes)]
1263
/// #[repr(u8)]
1264
/// enum MyEnum {
1265
/// #   V00,
1266
/// # /*
1267
///     ...
1268
/// # */
1269
/// }
1270
///
1271
/// #[derive(TryFromBytes, Immutable)]
1272
/// union MyUnion {
1273
/// #   variant: u8,
1274
/// # /*
1275
///     ...
1276
/// # */
1277
/// }
1278
/// ```
1279
///
1280
/// [safety conditions]: trait@TryFromBytes#safety
1281
#[cfg(any(feature = "derive", test))]
1282
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1283
pub use zerocopy_derive::TryFromBytes;
1284
1285
/// Types for which some bit patterns are valid.
1286
///
1287
/// A memory region of the appropriate length which contains initialized bytes
1288
/// can be viewed as a `TryFromBytes` type so long as the runtime value of those
1289
/// bytes corresponds to a [*valid instance*] of that type. For example,
1290
/// [`bool`] is `TryFromBytes`, so zerocopy can transmute a [`u8`] into a
1291
/// [`bool`] so long as it first checks that the value of the [`u8`] is `0` or
1292
/// `1`.
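///
/// For example, here is a minimal sketch of the `bool` case described above:
///
/// ```
/// use zerocopy::TryFromBytes;
///
/// // `0` and `1` are valid `bool`s, so this conversion succeeds...
/// assert!(bool::try_ref_from_bytes(&[1u8][..]).is_ok());
/// // ...but `2` is not a valid `bool`, so this conversion is rejected at
/// // runtime.
/// assert!(bool::try_ref_from_bytes(&[2u8][..]).is_err());
/// ```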
1293
///
1294
/// # Implementation
1295
///
1296
/// **Do not implement this trait yourself!** Instead, use
1297
/// [`#[derive(TryFromBytes)]`][derive]; e.g.:
1298
///
1299
/// ```
1300
/// # use zerocopy_derive::{TryFromBytes, Immutable};
1301
/// #[derive(TryFromBytes)]
1302
/// struct MyStruct {
1303
/// # /*
1304
///     ...
1305
/// # */
1306
/// }
1307
///
1308
/// #[derive(TryFromBytes)]
1309
/// #[repr(u8)]
1310
/// enum MyEnum {
1311
/// #   V00,
1312
/// # /*
1313
///     ...
1314
/// # */
1315
/// }
1316
///
1317
/// #[derive(TryFromBytes, Immutable)]
1318
/// union MyUnion {
1319
/// #   variant: u8,
1320
/// # /*
1321
///     ...
1322
/// # */
1323
/// }
1324
/// ```
1325
///
1326
/// This derive ensures that the runtime check of whether bytes correspond to a
1327
/// valid instance is sound. You **must** implement this trait via the derive.
1328
///
1329
/// # What is a "valid instance"?
1330
///
1331
/// In Rust, each type has *bit validity*, which refers to the set of bit
1332
/// patterns which may appear in an instance of that type. It is impossible for
1333
/// safe Rust code to produce values which violate bit validity (ie, values
1334
/// outside of the "valid" set of bit patterns). If `unsafe` code produces an
1335
/// invalid value, this is considered [undefined behavior].
1336
///
1337
/// Rust's bit validity rules are currently being decided, which means that some
1338
/// types have three classes of bit patterns: those which are definitely valid,
1339
/// and whose validity is documented in the language; those which may or may not
1340
/// be considered valid at some point in the future; and those which are
1341
/// definitely invalid.
1342
///
1343
/// Zerocopy takes a conservative approach, and only considers a bit pattern to
1344
/// be valid if its validity is a documenteed guarantee provided by the
1345
/// language.
1346
///
1347
/// For most use cases, Rust's current guarantees align with programmers'
1348
/// intuitions about what ought to be valid. As a result, zerocopy's
1349
/// conservatism should not affect most users.
1350
///
1351
/// If you are negatively affected by lack of support for a particular type,
1352
/// we encourage you to let us know by [filing an issue][github-repo].
1353
///
1354
/// # `TryFromBytes` is not symmetrical with [`IntoBytes`]
1355
///
1356
/// There are some types which implement both `TryFromBytes` and [`IntoBytes`],
1357
/// but for which `TryFromBytes` is not guaranteed to accept all byte sequences
1358
/// produced by `IntoBytes`. In other words, for some `T: TryFromBytes +
1359
/// IntoBytes`, there exist values of `t: T` such that
1360
/// `TryFromBytes::try_ref_from_bytes(t.as_bytes()) == None`. Code should not
1361
/// generally assume that values produced by `IntoBytes` will necessarily be
1362
/// accepted as valid by `TryFromBytes`.
1363
///
1364
/// # Safety
1365
///
1366
/// On its own, `T: TryFromBytes` does not make any guarantees about the layout
1367
/// or representation of `T`. It merely provides the ability to perform a
1368
/// validity check at runtime via methods like [`try_ref_from_bytes`].
1369
///
1370
/// You must not rely on the `#[doc(hidden)]` internals of `TryFromBytes`.
1371
/// Future releases of zerocopy may make backwards-breaking changes to these
1372
/// items, including changes that only affect soundness, which may cause code
1373
/// which uses those items to silently become unsound.
1374
///
1375
/// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
1376
/// [github-repo]: https://github.com/google/zerocopy
1377
/// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
1378
/// [*valid instance*]: #what-is-a-valid-instance
1379
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::TryFromBytes")]
1380
#[cfg_attr(
1381
    not(feature = "derive"),
1382
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.TryFromBytes.html"),
1383
)]
1384
#[cfg_attr(
1385
    zerocopy_diagnostic_on_unimplemented_1_78_0,
1386
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(TryFromBytes)]` to `{Self}`")
1387
)]
1388
pub unsafe trait TryFromBytes {
1389
    // The `Self: Sized` bound makes it so that `TryFromBytes` is still object
1390
    // safe.
1391
    #[doc(hidden)]
1392
    fn only_derive_is_allowed_to_implement_this_trait()
1393
    where
1394
        Self: Sized;
1395
1396
    /// Does a given memory range contain a valid instance of `Self`?
1397
    ///
1398
    /// # Safety
1399
    ///
1400
    /// Unsafe code may assume that, if `is_bit_valid(candidate)` returns true,
1401
    /// `*candidate` contains a valid `Self`.
1402
    ///
1403
    /// # Panics
1404
    ///
1405
    /// `is_bit_valid` may panic. Callers are responsible for ensuring that any
1406
    /// `unsafe` code remains sound even in the face of `is_bit_valid`
1407
    /// panicking. (We support user-defined validation routines; so long as
1408
    /// these routines are not required to be `unsafe`, there is no way to
1409
    /// ensure that these do not generate panics.)
1410
    ///
1411
    /// Besides user-defined validation routines panicking, `is_bit_valid` will
1412
    /// either panic or fail to compile if called on a pointer with [`Shared`]
1413
    /// aliasing when `Self: !Immutable`.
1414
    ///
1415
    /// [`UnsafeCell`]: core::cell::UnsafeCell
1416
    /// [`Shared`]: invariant::Shared
1417
    #[doc(hidden)]
1418
    fn is_bit_valid<A: invariant::Aliasing + invariant::AtLeast<invariant::Shared>>(
1419
        candidate: Maybe<'_, Self, A>,
1420
    ) -> bool;
1421
1422
    /// Attempts to interpret the given `source` as a `&Self`.
1423
    ///
1424
    /// If the bytes of `source` are a valid instance of `Self`, this method
1425
    /// returns a reference to those bytes interpreted as a `Self`. If the
1426
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
1427
    /// `source` is not appropriately aligned, or if `source` is not a valid
1428
    /// instance of `Self`, this returns `Err`. If [`Self:
1429
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
1430
    /// error][ConvertError::from].
1431
    ///
1432
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1433
    ///
1434
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1435
    /// [self-unaligned]: Unaligned
1436
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1437
    ///
1438
    /// # Compile-Time Assertions
1439
    ///
1440
    /// This method cannot yet be used on unsized types whose dynamically-sized
1441
    /// component is zero-sized. Attempting to use this method on such types
1442
    /// results in a compile-time assertion error; e.g.:
1443
    ///
1444
    /// ```compile_fail,E0080
1445
    /// use zerocopy::*;
1446
    /// # use zerocopy_derive::*;
1447
    ///
1448
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1449
    /// #[repr(C)]
1450
    /// struct ZSTy {
1451
    ///     leading_sized: u16,
1452
    ///     trailing_dst: [()],
1453
    /// }
1454
    ///
1455
    /// let _ = ZSTy::try_ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
1456
    /// ```
1457
    ///
1458
    /// # Examples
1459
    ///
1460
    /// ```
1461
    /// use zerocopy::TryFromBytes;
1462
    /// # use zerocopy_derive::*;
1463
    ///
1464
    /// // The only valid value of this type is the byte `0xC0`
1465
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1466
    /// #[repr(u8)]
1467
    /// enum C0 { xC0 = 0xC0 }
1468
    ///
1469
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
1470
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1471
    /// #[repr(C)]
1472
    /// struct C0C0(C0, C0);
1473
    ///
1474
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1475
    /// #[repr(C)]
1476
    /// struct Packet {
1477
    ///     magic_number: C0C0,
1478
    ///     mug_size: u8,
1479
    ///     temperature: u8,
1480
    ///     marshmallows: [[u8; 2]],
1481
    /// }
1482
    ///
1483
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1484
    ///
1485
    /// let packet = Packet::try_ref_from_bytes(bytes).unwrap();
1486
    ///
1487
    /// assert_eq!(packet.mug_size, 240);
1488
    /// assert_eq!(packet.temperature, 77);
1489
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1490
    ///
1491
    /// // These bytes are not a valid instance of `Packet`.
1492
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1493
    /// assert!(Packet::try_ref_from_bytes(bytes).is_err());
1494
    /// ```
1495
    #[must_use = "has no side effects"]
1496
    #[inline]
1497
    fn try_ref_from_bytes(source: &[u8]) -> Result<&Self, TryCastError<&[u8], Self>>
1498
    where
1499
        Self: KnownLayout + Immutable,
1500
    {
1501
        static_assert_dst_is_not_zst!(Self);
1502
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(None) {
1503
            Ok(source) => {
1504
                // This call may panic. If that happens, it doesn't cause any soundness
1505
                // issues, as we have not generated any invalid state which we need to
1506
                // fix before returning.
1507
                //
1508
                // Note that one panic or post-monomorphization error condition is
1509
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
1510
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
1511
                // condition will not happen.
1512
                match source.try_into_valid() {
1513
                    Ok(valid) => Ok(valid.as_ref()),
1514
                    Err(e) => {
1515
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
1516
                    }
1517
                }
1518
            }
1519
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
1520
        }
1521
    }
1522
1523
    /// Attempts to interpret the prefix of the given `source` as a `&Self`.
1524
    ///
1525
    /// This method computes the [largest possible size of `Self`][valid-size]
1526
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
1527
    /// instance of `Self`, this method returns a reference to those bytes
1528
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
1529
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
1530
    /// if those bytes are not a valid instance of `Self`, this returns `Err`.
1531
    /// If [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
1532
    /// alignment error][ConvertError::from].
1533
    ///
1534
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1535
    ///
1536
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1537
    /// [self-unaligned]: Unaligned
1538
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1539
    ///
1540
    /// # Compile-Time Assertions
1541
    ///
1542
    /// This method cannot yet be used on unsized types whose dynamically-sized
1543
    /// component is zero-sized. Attempting to use this method on such types
1544
    /// results in a compile-time assertion error; e.g.:
1545
    ///
1546
    /// ```compile_fail,E0080
1547
    /// use zerocopy::*;
1548
    /// # use zerocopy_derive::*;
1549
    ///
1550
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1551
    /// #[repr(C)]
1552
    /// struct ZSTy {
1553
    ///     leading_sized: u16,
1554
    ///     trailing_dst: [()],
1555
    /// }
1556
    ///
1557
    /// let _ = ZSTy::try_ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
1558
    /// ```
1559
    ///
1560
    /// # Examples
1561
    ///
1562
    /// ```
1563
    /// use zerocopy::TryFromBytes;
1564
    /// # use zerocopy_derive::*;
1565
    ///
1566
    /// // The only valid value of this type is the byte `0xC0`
1567
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1568
    /// #[repr(u8)]
1569
    /// enum C0 { xC0 = 0xC0 }
1570
    ///
1571
    /// // The only valid value of this type is the bytes `0xC0C0`.
1572
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1573
    /// #[repr(C)]
1574
    /// struct C0C0(C0, C0);
1575
    ///
1576
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1577
    /// #[repr(C)]
1578
    /// struct Packet {
1579
    ///     magic_number: C0C0,
1580
    ///     mug_size: u8,
1581
    ///     temperature: u8,
1582
    ///     marshmallows: [[u8; 2]],
1583
    /// }
1584
    ///
1585
    /// // These are more bytes than are needed to encode a `Packet`.
1586
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1587
    ///
1588
    /// let (packet, suffix) = Packet::try_ref_from_prefix(bytes).unwrap();
1589
    ///
1590
    /// assert_eq!(packet.mug_size, 240);
1591
    /// assert_eq!(packet.temperature, 77);
1592
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1593
    /// assert_eq!(suffix, &[6u8][..]);
1594
    ///
1595
    /// // These bytes are not a valid instance of `Packet`.
1596
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1597
    /// assert!(Packet::try_ref_from_prefix(bytes).is_err());
1598
    /// ```
1599
    #[must_use = "has no side effects"]
1600
    #[inline]
1601
    fn try_ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
1602
    where
1603
        Self: KnownLayout + Immutable,
1604
    {
1605
        static_assert_dst_is_not_zst!(Self);
1606
        try_ref_from_prefix_suffix(source, CastType::Prefix, None)
1607
    }
1608
1609
    /// Attempts to interpret the suffix of the given `source` as a `&Self`.
1610
    ///
1611
    /// This method computes the [largest possible size of `Self`][valid-size]
1612
    /// that can fit in the trailing bytes of `source`. If that suffix is a
1613
    /// valid instance of `Self`, this method returns a reference to those bytes
1614
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
1615
    /// are insufficient bytes, or if the suffix of `source` would not be
1616
    /// appropriately aligned, or if the suffix is not a valid instance of
1617
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
1618
    /// can [infallibly discard the alignment error][ConvertError::from].
1619
    ///
1620
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1621
    ///
1622
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1623
    /// [self-unaligned]: Unaligned
1624
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1625
    ///
1626
    /// # Compile-Time Assertions
1627
    ///
1628
    /// This method cannot yet be used on unsized types whose dynamically-sized
1629
    /// component is zero-sized. Attempting to use this method on such types
1630
    /// results in a compile-time assertion error; e.g.:
1631
    ///
1632
    /// ```compile_fail,E0080
1633
    /// use zerocopy::*;
1634
    /// # use zerocopy_derive::*;
1635
    ///
1636
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1637
    /// #[repr(C)]
1638
    /// struct ZSTy {
1639
    ///     leading_sized: u16,
1640
    ///     trailing_dst: [()],
1641
    /// }
1642
    ///
1643
    /// let _ = ZSTy::try_ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
1644
    /// ```
1645
    ///
1646
    /// # Examples
1647
    ///
1648
    /// ```
1649
    /// use zerocopy::TryFromBytes;
1650
    /// # use zerocopy_derive::*;
1651
    ///
1652
    /// // The only valid value of this type is the byte `0xC0`
1653
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1654
    /// #[repr(u8)]
1655
    /// enum C0 { xC0 = 0xC0 }
1656
    ///
1657
    /// // The only valid value of this type is the bytes `0xC0C0`.
1658
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1659
    /// #[repr(C)]
1660
    /// struct C0C0(C0, C0);
1661
    ///
1662
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1663
    /// #[repr(C)]
1664
    /// struct Packet {
1665
    ///     magic_number: C0C0,
1666
    ///     mug_size: u8,
1667
    ///     temperature: u8,
1668
    ///     marshmallows: [[u8; 2]],
1669
    /// }
1670
    ///
1671
    /// // These are more bytes than are needed to encode a `Packet`.
1672
    /// let bytes = &[0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
1673
    ///
1674
    /// let (prefix, packet) = Packet::try_ref_from_suffix(bytes).unwrap();
1675
    ///
1676
    /// assert_eq!(packet.mug_size, 240);
1677
    /// assert_eq!(packet.temperature, 77);
1678
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
1679
    /// assert_eq!(prefix, &[0u8][..]);
1680
    ///
1681
    /// // These bytes are not a valid instance of `Packet`.
1682
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
1683
    /// assert!(Packet::try_ref_from_suffix(bytes).is_err());
1684
    /// ```
1685
    #[must_use = "has no side effects"]
1686
    #[inline]
1687
    fn try_ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
1688
    where
1689
        Self: KnownLayout + Immutable,
1690
    {
1691
        static_assert_dst_is_not_zst!(Self);
1692
        try_ref_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
1693
    }
1694
1695
    /// Attempts to interpret the given `source` as a `&mut Self` without
1696
    /// copying.
1697
    ///
1698
    /// If the bytes of `source` are a valid instance of `Self`, this method
1699
    /// returns a reference to those bytes interpreted as a `Self`. If the
1700
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
1701
    /// `source` is not appropriately aligned, or if `source` is not a valid
1702
    /// instance of `Self`, this returns `Err`. If [`Self:
1703
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
1704
    /// error][ConvertError::from].
1705
    ///
1706
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1707
    ///
1708
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1709
    /// [self-unaligned]: Unaligned
1710
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1711
    ///
1712
    /// # Compile-Time Assertions
1713
    ///
1714
    /// This method cannot yet be used on unsized types whose dynamically-sized
1715
    /// component is zero-sized. Attempting to use this method on such types
1716
    /// results in a compile-time assertion error; e.g.:
1717
    ///
1718
    /// ```compile_fail,E0080
1719
    /// use zerocopy::*;
1720
    /// # use zerocopy_derive::*;
1721
    ///
1722
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1723
    /// #[repr(C, packed)]
1724
    /// struct ZSTy {
1725
    ///     leading_sized: [u8; 2],
1726
    ///     trailing_dst: [()],
1727
    /// }
1728
    ///
1729
    /// let mut source = [85, 85];
1730
    /// let _ = ZSTy::try_mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
1731
    /// ```
1732
    ///
1733
    /// # Examples
1734
    ///
1735
    /// ```
1736
    /// use zerocopy::TryFromBytes;
1737
    /// # use zerocopy_derive::*;
1738
    ///
1739
    /// // The only valid value of this type is the byte `0xC0`
1740
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1741
    /// #[repr(u8)]
1742
    /// enum C0 { xC0 = 0xC0 }
1743
    ///
1744
    /// // The only valid value of this type is the bytes `0xC0C0`.
1745
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1746
    /// #[repr(C)]
1747
    /// struct C0C0(C0, C0);
1748
    ///
1749
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1750
    /// #[repr(C, packed)]
1751
    /// struct Packet {
1752
    ///     magic_number: C0C0,
1753
    ///     mug_size: u8,
1754
    ///     temperature: u8,
1755
    ///     marshmallows: [[u8; 2]],
1756
    /// }
1757
    ///
1758
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1759
    ///
1760
    /// let packet = Packet::try_mut_from_bytes(bytes).unwrap();
1761
    ///
1762
    /// assert_eq!(packet.mug_size, 240);
1763
    /// assert_eq!(packet.temperature, 77);
1764
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1765
    ///
1766
    /// packet.temperature = 111;
1767
    ///
1768
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5]);
1769
    ///
1770
    /// // These bytes are not a valid instance of `Packet`.
1771
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1772
    /// assert!(Packet::try_mut_from_bytes(bytes).is_err());
1773
    /// ```
1774
    #[must_use = "has no side effects"]
1775
    #[inline]
1776
    fn try_mut_from_bytes(bytes: &mut [u8]) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
1777
    where
1778
        Self: KnownLayout + IntoBytes,
1779
    {
1780
        static_assert_dst_is_not_zst!(Self);
1781
        match Ptr::from_mut(bytes).try_cast_into_no_leftover::<Self, BecauseExclusive>(None) {
1782
            Ok(source) => {
1783
                // This call may panic. If that happens, it doesn't cause any soundness
1784
                // issues, as we have not generated any invalid state which we need to
1785
                // fix before returning.
1786
                //
1787
                // Note that one panic or post-monomorphization error condition is
1788
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
1789
                // pointer when `Self: !Immutable`. Since the pointer here has `Exclusive` aliasing, this panic
1790
                // condition will not happen.
1791
                match source.try_into_valid() {
1792
                    Ok(source) => Ok(source.as_mut()),
1793
                    Err(e) => {
1794
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
1795
                    }
1796
                }
1797
            }
1798
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
1799
        }
1800
    }
1801
1802
    /// Attempts to interpret the prefix of the given `source` as a `&mut
1803
    /// Self`.
1804
    ///
1805
    /// This method computes the [largest possible size of `Self`][valid-size]
1806
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
1807
    /// instance of `Self`, this method returns a reference to those bytes
1808
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
1809
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
1810
    /// if the bytes are not a valid instance of `Self`, this returns `Err`. If
1811
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
1812
    /// alignment error][ConvertError::from].
1813
    ///
1814
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1815
    ///
1816
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1817
    /// [self-unaligned]: Unaligned
1818
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1819
    ///
1820
    /// # Compile-Time Assertions
1821
    ///
1822
    /// This method cannot yet be used on unsized types whose dynamically-sized
1823
    /// component is zero-sized. Attempting to use this method on such types
1824
    /// results in a compile-time assertion error; e.g.:
1825
    ///
1826
    /// ```compile_fail,E0080
1827
    /// use zerocopy::*;
1828
    /// # use zerocopy_derive::*;
1829
    ///
1830
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1831
    /// #[repr(C, packed)]
1832
    /// struct ZSTy {
1833
    ///     leading_sized: [u8; 2],
1834
    ///     trailing_dst: [()],
1835
    /// }
1836
    ///
1837
    /// let mut source = [85, 85];
1838
    /// let _ = ZSTy::try_mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
1839
    /// ```
1840
    ///
1841
    /// # Examples
1842
    ///
1843
    /// ```
1844
    /// use zerocopy::TryFromBytes;
1845
    /// # use zerocopy_derive::*;
1846
    ///
1847
    /// // The only valid value of this type is the byte `0xC0`
1848
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1849
    /// #[repr(u8)]
1850
    /// enum C0 { xC0 = 0xC0 }
1851
    ///
1852
    /// // The only valid value of this type is the bytes `0xC0C0`.
1853
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1854
    /// #[repr(C)]
1855
    /// struct C0C0(C0, C0);
1856
    ///
1857
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1858
    /// #[repr(C, packed)]
1859
    /// struct Packet {
1860
    ///     magic_number: C0C0,
1861
    ///     mug_size: u8,
1862
    ///     temperature: u8,
1863
    ///     marshmallows: [[u8; 2]],
1864
    /// }
1865
    ///
1866
    /// // These are more bytes than are needed to encode a `Packet`.
1867
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1868
    ///
1869
    /// let (packet, suffix) = Packet::try_mut_from_prefix(bytes).unwrap();
1870
    ///
1871
    /// assert_eq!(packet.mug_size, 240);
1872
    /// assert_eq!(packet.temperature, 77);
1873
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1874
    /// assert_eq!(suffix, &[6u8][..]);
1875
    ///
1876
    /// packet.temperature = 111;
1877
    /// suffix[0] = 222;
1878
    ///
1879
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5, 222]);
1880
    ///
1881
    /// // These bytes are not a valid instance of `Packet`.
1882
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1883
    /// assert!(Packet::try_mut_from_prefix(bytes).is_err());
1884
    /// ```
1885
    #[must_use = "has no side effects"]
1886
    #[inline]
1887
    fn try_mut_from_prefix(
1888
        source: &mut [u8],
1889
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
1890
    where
1891
        Self: KnownLayout + IntoBytes,
1892
    {
1893
        static_assert_dst_is_not_zst!(Self);
1894
        try_mut_from_prefix_suffix(source, CastType::Prefix, None)
1895
    }
1896
1897
    /// Attempts to interpret the suffix of the given `source` as a `&mut
1898
    /// Self`.
1899
    ///
1900
    /// This method computes the [largest possible size of `Self`][valid-size]
1901
    /// that can fit in the trailing bytes of `source`. If that suffix is a
1902
    /// valid instance of `Self`, this method returns a reference to those bytes
1903
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
1904
    /// are insufficient bytes, or if the suffix of `source` would not be
1905
    /// appropriately aligned, or if the suffix is not a valid instance of
1906
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
1907
    /// can [infallibly discard the alignment error][ConvertError::from].
1908
    ///
1909
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1910
    ///
1911
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1912
    /// [self-unaligned]: Unaligned
1913
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1914
    ///
1915
    /// # Compile-Time Assertions
1916
    ///
1917
    /// This method cannot yet be used on unsized types whose dynamically-sized
1918
    /// component is zero-sized. Attempting to use this method on such types
1919
    /// results in a compile-time assertion error; e.g.:
1920
    ///
1921
    /// ```compile_fail,E0080
1922
    /// use zerocopy::*;
1923
    /// # use zerocopy_derive::*;
1924
    ///
1925
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1926
    /// #[repr(C, packed)]
1927
    /// struct ZSTy {
1928
    ///     leading_sized: u16,
1929
    ///     trailing_dst: [()],
1930
    /// }
1931
    ///
1932
    /// let mut source = [85, 85];
1933
    /// let _ = ZSTy::try_mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
1934
    /// ```
1935
    ///
1936
    /// # Examples
1937
    ///
1938
    /// ```
1939
    /// use zerocopy::TryFromBytes;
1940
    /// # use zerocopy_derive::*;
1941
    ///
1942
    /// // The only valid value of this type is the byte `0xC0`
1943
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1944
    /// #[repr(u8)]
1945
    /// enum C0 { xC0 = 0xC0 }
1946
    ///
1947
    /// // The only valid value of this type is the bytes `0xC0C0`.
1948
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1949
    /// #[repr(C)]
1950
    /// struct C0C0(C0, C0);
1951
    ///
1952
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1953
    /// #[repr(C, packed)]
1954
    /// struct Packet {
1955
    ///     magic_number: C0C0,
1956
    ///     mug_size: u8,
1957
    ///     temperature: u8,
1958
    ///     marshmallows: [[u8; 2]],
1959
    /// }
1960
    ///
1961
    /// // These are more bytes than are needed to encode a `Packet`.
1962
    /// let bytes = &mut [0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
1963
    ///
1964
    /// let (prefix, packet) = Packet::try_mut_from_suffix(bytes).unwrap();
1965
    ///
1966
    /// assert_eq!(packet.mug_size, 240);
1967
    /// assert_eq!(packet.temperature, 77);
1968
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
1969
    /// assert_eq!(prefix, &[0u8][..]);
1970
    ///
1971
    /// prefix[0] = 111;
1972
    /// packet.temperature = 222;
1973
    ///
1974
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
1975
    ///
1976
    /// // These bytes are not a valid instance of `Packet`.
1977
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
1978
    /// assert!(Packet::try_mut_from_suffix(bytes).is_err());
1979
    /// ```
1980
    #[must_use = "has no side effects"]
1981
    #[inline]
1982
    fn try_mut_from_suffix(
1983
        source: &mut [u8],
1984
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
1985
    where
1986
        Self: KnownLayout + IntoBytes,
1987
    {
1988
        static_assert_dst_is_not_zst!(Self);
1989
        try_mut_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
1990
    }
1991
1992
    /// Attempts to interpret the given `source` as a `&Self` with a DST length
1993
    /// equal to `count`.
1994
    ///
1995
    /// This method attempts to return a reference to `source` interpreted as a
1996
    /// `Self` with `count` trailing elements. If the length of `source` is not
1997
    /// equal to the size of `Self` with `count` elements, if `source` is not
1998
    /// appropriately aligned, or if `source` does not contain a valid instance
1999
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2000
    /// you can [infallibly discard the alignment error][ConvertError::from].
2001
    ///
2002
    /// [self-unaligned]: Unaligned
2003
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2004
    ///
2005
    /// # Examples
2006
    ///
2007
    /// ```
2008
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2009
    /// use zerocopy::TryFromBytes;
2010
    /// # use zerocopy_derive::*;
2011
    ///
2012
    /// // The only valid value of this type is the byte `0xC0`
2013
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2014
    /// #[repr(u8)]
2015
    /// enum C0 { xC0 = 0xC0 }
2016
    ///
2017
    /// // The only valid value of this type is the bytes `0xC0C0`.
2018
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2019
    /// #[repr(C)]
2020
    /// struct C0C0(C0, C0);
2021
    ///
2022
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2023
    /// #[repr(C)]
2024
    /// struct Packet {
2025
    ///     magic_number: C0C0,
2026
    ///     mug_size: u8,
2027
    ///     temperature: u8,
2028
    ///     marshmallows: [[u8; 2]],
2029
    /// }
2030
    ///
2031
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2032
    ///
2033
    /// let packet = Packet::try_ref_from_bytes_with_elems(bytes, 3).unwrap();
2034
    ///
2035
    /// assert_eq!(packet.mug_size, 240);
2036
    /// assert_eq!(packet.temperature, 77);
2037
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2038
    ///
2039
    /// // These bytes are not a valid instance of `Packet`.
2040
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2041
    /// assert!(Packet::try_ref_from_bytes_with_elems(bytes, 3).is_err());
2042
    /// ```
2043
    ///
2044
    /// Since an explicit `count` is provided, this method supports types with
2045
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_bytes`]
2046
    /// which do not take an explicit count do not support such types.
2047
    ///
2048
    /// ```
2049
    /// use core::num::NonZeroU16;
2050
    /// use zerocopy::*;
2051
    /// # use zerocopy_derive::*;
2052
    ///
2053
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2054
    /// #[repr(C)]
2055
    /// struct ZSTy {
2056
    ///     leading_sized: NonZeroU16,
2057
    ///     trailing_dst: [()],
2058
    /// }
2059
    ///
2060
    /// let src = 0xCAFEu16.as_bytes();
2061
    /// let zsty = ZSTy::try_ref_from_bytes_with_elems(src, 42).unwrap();
2062
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2063
    /// ```
2064
    ///
2065
    /// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
2066
    #[must_use = "has no side effects"]
2067
    #[inline]
2068
    fn try_ref_from_bytes_with_elems(
2069
        source: &[u8],
2070
        count: usize,
2071
    ) -> Result<&Self, TryCastError<&[u8], Self>>
2072
    where
2073
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2074
    {
2075
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(Some(count))
2076
        {
2077
            Ok(source) => {
2078
                // This call may panic. If that happens, it doesn't cause any soundness
2079
                // issues, as we have not generated any invalid state which we need to
2080
                // fix before returning.
2081
                //
2082
                // Note that one panic or post-monomorphization error condition is
2083
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2084
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
2085
                // condition will not happen.
2086
                match source.try_into_valid() {
2087
                    Ok(source) => Ok(source.as_ref()),
2088
                    Err(e) => {
2089
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
2090
                    }
2091
                }
2092
            }
2093
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
2094
        }
2095
    }
2096
2097
    /// Attempts to interpret the prefix of the given `source` as a `&Self` with
2098
    /// a DST length equal to `count`.
2099
    ///
2100
    /// This method attempts to return a reference to the prefix of `source`
2101
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2102
    /// to the remaining bytes. If the length of `source` is less than the size
2103
    /// of `Self` with `count` elements, if `source` is not appropriately
2104
    /// aligned, or if the prefix of `source` does not contain a valid instance
2105
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2106
    /// you can [infallibly discard the alignment error][ConvertError::from].
2107
    ///
2108
    /// [self-unaligned]: Unaligned
2109
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2110
    ///
2111
    /// # Examples
2112
    ///
2113
    /// ```
2114
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2115
    /// use zerocopy::TryFromBytes;
2116
    /// # use zerocopy_derive::*;
2117
    ///
2118
    /// // The only valid value of this type is the byte `0xC0`
2119
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2120
    /// #[repr(u8)]
2121
    /// enum C0 { xC0 = 0xC0 }
2122
    ///
2123
    /// // The only valid value of this type is the bytes `0xC0C0`.
2124
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2125
    /// #[repr(C)]
2126
    /// struct C0C0(C0, C0);
2127
    ///
2128
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2129
    /// #[repr(C)]
2130
    /// struct Packet {
2131
    ///     magic_number: C0C0,
2132
    ///     mug_size: u8,
2133
    ///     temperature: u8,
2134
    ///     marshmallows: [[u8; 2]],
2135
    /// }
2136
    ///
2137
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2138
    ///
2139
    /// let (packet, suffix) = Packet::try_ref_from_prefix_with_elems(bytes, 3).unwrap();
2140
    ///
2141
    /// assert_eq!(packet.mug_size, 240);
2142
    /// assert_eq!(packet.temperature, 77);
2143
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2144
    /// assert_eq!(suffix, &[8u8][..]);
2145
    ///
2146
    /// // These bytes are not a valid instance of `Packet`.
2147
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2148
    /// assert!(Packet::try_ref_from_prefix_with_elems(bytes, 3).is_err());
2149
    /// ```
2150
    ///
2151
    /// Since an explicit `count` is provided, this method supports types with
2152
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
2153
    /// which do not take an explicit count do not support such types.
2154
    ///
2155
    /// ```
2156
    /// use core::num::NonZeroU16;
2157
    /// use zerocopy::*;
2158
    /// # use zerocopy_derive::*;
2159
    ///
2160
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2161
    /// #[repr(C)]
2162
    /// struct ZSTy {
2163
    ///     leading_sized: NonZeroU16,
2164
    ///     trailing_dst: [()],
2165
    /// }
2166
    ///
2167
    /// let src = 0xCAFEu16.as_bytes();
2168
    /// let (zsty, _) = ZSTy::try_ref_from_prefix_with_elems(src, 42).unwrap();
2169
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2170
    /// ```
2171
    ///
2172
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
2173
    #[must_use = "has no side effects"]
2174
    #[inline]
2175
    fn try_ref_from_prefix_with_elems(
2176
        source: &[u8],
2177
        count: usize,
2178
    ) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
2179
    where
2180
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2181
    {
2182
        try_ref_from_prefix_suffix(source, CastType::Prefix, Some(count))
2183
    }
2184
2185
    /// Attempts to interpret the suffix of the given `source` as a `&Self` with
2186
    /// a DST length equal to `count`.
2187
    ///
2188
    /// This method attempts to return a reference to the suffix of `source`
2189
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2190
    /// to the preceding bytes. If the length of `source` is less than the size
2191
    /// of `Self` with `count` elements, if the suffix of `source` is not
2192
    /// appropriately aligned, or if the suffix of `source` does not contain a
2193
    /// valid instance of `Self`, this returns `Err`. If [`Self:
2194
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2195
    /// error][ConvertError::from].
2196
    ///
2197
    /// [self-unaligned]: Unaligned
2198
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2199
    ///
2200
    /// # Examples
2201
    ///
2202
    /// ```
2203
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2204
    /// use zerocopy::TryFromBytes;
2205
    /// # use zerocopy_derive::*;
2206
    ///
2207
    /// // The only valid value of this type is the byte `0xC0`
2208
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2209
    /// #[repr(u8)]
2210
    /// enum C0 { xC0 = 0xC0 }
2211
    ///
2212
    /// // The only valid value of this type is the bytes `0xC0C0`.
2213
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2214
    /// #[repr(C)]
2215
    /// struct C0C0(C0, C0);
2216
    ///
2217
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2218
    /// #[repr(C)]
2219
    /// struct Packet {
2220
    ///     magic_number: C0C0,
2221
    ///     mug_size: u8,
2222
    ///     temperature: u8,
2223
    ///     marshmallows: [[u8; 2]],
2224
    /// }
2225
    ///
2226
    /// let bytes = &[123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2227
    ///
2228
    /// let (prefix, packet) = Packet::try_ref_from_suffix_with_elems(bytes, 3).unwrap();
2229
    ///
2230
    /// assert_eq!(packet.mug_size, 240);
2231
    /// assert_eq!(packet.temperature, 77);
2232
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2233
    /// assert_eq!(prefix, &[123u8][..]);
2234
    ///
2235
    /// // These bytes are not a valid instance of `Packet`.
2236
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2237
    /// assert!(Packet::try_ref_from_suffix_with_elems(bytes, 3).is_err());
2238
    /// ```
2239
    ///
2240
    /// Since an explicit `count` is provided, this method supports types with
2241
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
2242
    /// which do not take an explicit count do not support such types.
2243
    ///
2244
    /// ```
2245
    /// use core::num::NonZeroU16;
2246
    /// use zerocopy::*;
2247
    /// # use zerocopy_derive::*;
2248
    ///
2249
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2250
    /// #[repr(C)]
2251
    /// struct ZSTy {
2252
    ///     leading_sized: NonZeroU16,
2253
    ///     trailing_dst: [()],
2254
    /// }
2255
    ///
2256
    /// let src = 0xCAFEu16.as_bytes();
2257
    /// let (_, zsty) = ZSTy::try_ref_from_suffix_with_elems(src, 42).unwrap();
2258
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2259
    /// ```
2260
    ///
2261
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
2262
    #[must_use = "has no side effects"]
2263
    #[inline]
2264
    fn try_ref_from_suffix_with_elems(
2265
        source: &[u8],
2266
        count: usize,
2267
    ) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
2268
    where
2269
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2270
    {
2271
        try_ref_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2272
    }
2273
2274
    /// Attempts to interpret the given `source` as a `&mut Self` with a DST
2275
    /// length equal to `count`.
2276
    ///
2277
    /// This method attempts to return a reference to `source` interpreted as a
2278
    /// `Self` with `count` trailing elements. If the length of `source` is not
2279
    /// equal to the size of `Self` with `count` elements, if `source` is not
2280
    /// appropriately aligned, or if `source` does not contain a valid instance
2281
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2282
    /// you can [infallibly discard the alignment error][ConvertError::from].
2283
    ///
2284
    /// [self-unaligned]: Unaligned
2285
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2286
    ///
2287
    /// # Examples
2288
    ///
2289
    /// ```
2290
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2291
    /// use zerocopy::TryFromBytes;
2292
    /// # use zerocopy_derive::*;
2293
    ///
2294
    /// // The only valid value of this type is the byte `0xC0`
2295
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2296
    /// #[repr(u8)]
2297
    /// enum C0 { xC0 = 0xC0 }
2298
    ///
2299
    /// // The only valid value of this type is the bytes `0xC0C0`.
2300
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2301
    /// #[repr(C)]
2302
    /// struct C0C0(C0, C0);
2303
    ///
2304
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2305
    /// #[repr(C, packed)]
2306
    /// struct Packet {
2307
    ///     magic_number: C0C0,
2308
    ///     mug_size: u8,
2309
    ///     temperature: u8,
2310
    ///     marshmallows: [[u8; 2]],
2311
    /// }
2312
    ///
2313
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2314
    ///
2315
    /// let packet = Packet::try_mut_from_bytes_with_elems(bytes, 3).unwrap();
2316
    ///
2317
    /// assert_eq!(packet.mug_size, 240);
2318
    /// assert_eq!(packet.temperature, 77);
2319
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2320
    ///
2321
    /// packet.temperature = 111;
2322
    ///
2323
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7]);
2324
    ///
2325
    /// // These bytes are not a valid instance of `Packet`.
2326
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2327
    /// assert!(Packet::try_mut_from_bytes_with_elems(bytes, 3).is_err());
2328
    /// ```
2329
    ///
2330
    /// Since an explicit `count` is provided, this method supports types with
2331
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_bytes`]
2332
    /// which do not take an explicit count do not support such types.
2333
    ///
2334
    /// ```
2335
    /// use core::num::NonZeroU16;
2336
    /// use zerocopy::*;
2337
    /// # use zerocopy_derive::*;
2338
    ///
2339
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2340
    /// #[repr(C, packed)]
2341
    /// struct ZSTy {
2342
    ///     leading_sized: NonZeroU16,
2343
    ///     trailing_dst: [()],
2344
    /// }
2345
    ///
2346
    /// let mut src = 0xCAFEu16;
2347
    /// let src = src.as_mut_bytes();
2348
    /// let zsty = ZSTy::try_mut_from_bytes_with_elems(src, 42).unwrap();
2349
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2350
    /// ```
2351
    ///
2352
    /// [`try_mut_from_bytes`]: TryFromBytes::try_mut_from_bytes
2353
    #[must_use = "has no side effects"]
2354
    #[inline]
2355
    fn try_mut_from_bytes_with_elems(
2356
        source: &mut [u8],
2357
        count: usize,
2358
    ) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
2359
    where
2360
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2361
    {
2362
        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(Some(count))
2363
        {
2364
            Ok(source) => {
2365
                // This call may panic. If that happens, it doesn't cause any soundness
2366
                // issues, as we have not generated any invalid state which we need to
2367
                // fix before returning.
2368
                //
2369
                // Note that one panic or post-monomorphization error condition is
2370
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2371
                // pointer when `Self: !Immutable`. Since the pointer here has `Exclusive` aliasing, this panic
2372
                // condition will not happen.
2373
                match source.try_into_valid() {
2374
                    Ok(source) => Ok(source.as_mut()),
2375
                    Err(e) => {
2376
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
2377
                    }
2378
                }
2379
            }
2380
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
2381
        }
2382
    }
2383
2384
    /// Attempts to interpret the prefix of the given `source` as a `&mut Self`
2385
    /// with a DST length equal to `count`.
2386
    ///
2387
    /// This method attempts to return a reference to the prefix of `source`
2388
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2389
    /// to the remaining bytes. If the length of `source` is less than the size
2390
    /// of `Self` with `count` elements, if `source` is not appropriately
2391
    /// aligned, or if the prefix of `source` does not contain a valid instance
2392
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2393
    /// you can [infallibly discard the alignment error][ConvertError::from].
2394
    ///
2395
    /// [self-unaligned]: Unaligned
2396
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2397
    ///
2398
    /// # Examples
2399
    ///
2400
    /// ```
2401
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2402
    /// use zerocopy::TryFromBytes;
2403
    /// # use zerocopy_derive::*;
2404
    ///
2405
    /// // The only valid value of this type is the byte `0xC0`
2406
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2407
    /// #[repr(u8)]
2408
    /// enum C0 { xC0 = 0xC0 }
2409
    ///
2410
    /// // The only valid value of this type is the bytes `0xC0C0`.
2411
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2412
    /// #[repr(C)]
2413
    /// struct C0C0(C0, C0);
2414
    ///
2415
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2416
    /// #[repr(C, packed)]
2417
    /// struct Packet {
2418
    ///     magic_number: C0C0,
2419
    ///     mug_size: u8,
2420
    ///     temperature: u8,
2421
    ///     marshmallows: [[u8; 2]],
2422
    /// }
2423
    ///
2424
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2425
    ///
2426
    /// let (packet, suffix) = Packet::try_mut_from_prefix_with_elems(bytes, 3).unwrap();
2427
    ///
2428
    /// assert_eq!(packet.mug_size, 240);
2429
    /// assert_eq!(packet.temperature, 77);
2430
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2431
    /// assert_eq!(suffix, &[8u8][..]);
2432
    ///
2433
    /// packet.temperature = 111;
2434
    /// suffix[0] = 222;
2435
    ///
2436
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7, 222]);
2437
    ///
2438
    /// // These bytes are not a valid instance of `Packet`.
2439
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2440
    /// assert!(Packet::try_mut_from_prefix_with_elems(bytes, 3).is_err());
2441
    /// ```
2442
    ///
2443
    /// Since an explicit `count` is provided, this method supports types with
2444
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
2445
    /// which do not take an explicit count do not support such types.
2446
    ///
2447
    /// ```
2448
    /// use core::num::NonZeroU16;
2449
    /// use zerocopy::*;
2450
    /// # use zerocopy_derive::*;
2451
    ///
2452
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2453
    /// #[repr(C, packed)]
2454
    /// struct ZSTy {
2455
    ///     leading_sized: NonZeroU16,
2456
    ///     trailing_dst: [()],
2457
    /// }
2458
    ///
2459
    /// let mut src = 0xCAFEu16;
2460
    /// let src = src.as_mut_bytes();
2461
    /// let (zsty, _) = ZSTy::try_mut_from_prefix_with_elems(src, 42).unwrap();
2462
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2463
    /// ```
2464
    ///
2465
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
2466
    #[must_use = "has no side effects"]
2467
    #[inline]
2468
    fn try_mut_from_prefix_with_elems(
2469
        source: &mut [u8],
2470
        count: usize,
2471
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
2472
    where
2473
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2474
    {
2475
        try_mut_from_prefix_suffix(source, CastType::Prefix, Some(count))
2476
    }
2477
2478
    /// Attempts to interpret the suffix of the given `source` as a `&mut Self`
2479
    /// with a DST length equal to `count`.
2480
    ///
2481
    /// This method attempts to return a reference to the suffix of `source`
2482
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2483
    /// to the preceding bytes. If the length of `source` is less than the size
2484
    /// of `Self` with `count` elements, if the suffix of `source` is not
2485
    /// appropriately aligned, or if the suffix of `source` does not contain a
2486
    /// valid instance of `Self`, this returns `Err`. If [`Self:
2487
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2488
    /// error][ConvertError::from].
2489
    ///
2490
    /// [self-unaligned]: Unaligned
2491
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2492
    ///
2493
    /// # Examples
2494
    ///
2495
    /// ```
2496
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2497
    /// use zerocopy::TryFromBytes;
2498
    /// # use zerocopy_derive::*;
2499
    ///
2500
    /// // The only valid value of this type is the byte `0xC0`
2501
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2502
    /// #[repr(u8)]
2503
    /// enum C0 { xC0 = 0xC0 }
2504
    ///
2505
    /// // The only valid value of this type is the bytes `0xC0C0`.
2506
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2507
    /// #[repr(C)]
2508
    /// struct C0C0(C0, C0);
2509
    ///
2510
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2511
    /// #[repr(C, packed)]
2512
    /// struct Packet {
2513
    ///     magic_number: C0C0,
2514
    ///     mug_size: u8,
2515
    ///     temperature: u8,
2516
    ///     marshmallows: [[u8; 2]],
2517
    /// }
2518
    ///
2519
    /// let bytes = &mut [123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2520
    ///
2521
    /// let (prefix, packet) = Packet::try_mut_from_suffix_with_elems(bytes, 3).unwrap();
2522
    ///
2523
    /// assert_eq!(packet.mug_size, 240);
2524
    /// assert_eq!(packet.temperature, 77);
2525
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2526
    /// assert_eq!(prefix, &[123u8][..]);
2527
    ///
2528
    /// prefix[0] = 111;
2529
    /// packet.temperature = 222;
2530
    ///
2531
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
2532
    ///
2533
    /// // These bytes are not a valid instance of `Packet`.
2534
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2535
    /// assert!(Packet::try_mut_from_suffix_with_elems(bytes, 3).is_err());
2536
    /// ```
2537
    ///
2538
    /// Since an explicit `count` is provided, this method supports types with
2539
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
2540
    /// which do not take an explicit count do not support such types.
2541
    ///
2542
    /// ```
2543
    /// use core::num::NonZeroU16;
2544
    /// use zerocopy::*;
2545
    /// # use zerocopy_derive::*;
2546
    ///
2547
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2548
    /// #[repr(C, packed)]
2549
    /// struct ZSTy {
2550
    ///     leading_sized: NonZeroU16,
2551
    ///     trailing_dst: [()],
2552
    /// }
2553
    ///
2554
    /// let mut src = 0xCAFEu16;
2555
    /// let src = src.as_mut_bytes();
2556
    /// let (_, zsty) = ZSTy::try_mut_from_suffix_with_elems(src, 42).unwrap();
2557
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2558
    /// ```
2559
    ///
2560
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
2561
    #[must_use = "has no side effects"]
2562
    #[inline]
2563
    fn try_mut_from_suffix_with_elems(
2564
        source: &mut [u8],
2565
        count: usize,
2566
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
2567
    where
2568
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2569
    {
2570
        try_mut_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2571
    }
2572
2573
    /// Attempts to read the given `source` as a `Self`.
2574
    ///
2575
    /// If `source.len() != size_of::<Self>()` or the bytes are not a valid
2576
    /// instance of `Self`, this returns `Err`.
2577
    ///
2578
    /// # Examples
2579
    ///
2580
    /// ```
2581
    /// use zerocopy::TryFromBytes;
2582
    /// # use zerocopy_derive::*;
2583
    ///
2584
    /// // The only valid value of this type is the byte `0xC0`
2585
    /// #[derive(TryFromBytes)]
2586
    /// #[repr(u8)]
2587
    /// enum C0 { xC0 = 0xC0 }
2588
    ///
2589
    /// // The only valid value of this type is the bytes `0xC0C0`.
2590
    /// #[derive(TryFromBytes)]
2591
    /// #[repr(C)]
2592
    /// struct C0C0(C0, C0);
2593
    ///
2594
    /// #[derive(TryFromBytes)]
2595
    /// #[repr(C)]
2596
    /// struct Packet {
2597
    ///     magic_number: C0C0,
2598
    ///     mug_size: u8,
2599
    ///     temperature: u8,
2600
    /// }
2601
    ///
2602
    /// let bytes = &[0xC0, 0xC0, 240, 77][..];
2603
    ///
2604
    /// let packet = Packet::try_read_from_bytes(bytes).unwrap();
2605
    ///
2606
    /// assert_eq!(packet.mug_size, 240);
2607
    /// assert_eq!(packet.temperature, 77);
2608
    ///
2609
    /// // These bytes are not a valid instance of `Packet`.
2610
    /// let bytes = &mut [0x10, 0xC0, 240, 77][..];
2611
    /// assert!(Packet::try_read_from_bytes(bytes).is_err());
2612
    /// ```
2613
    #[must_use = "has no side effects"]
2614
    #[inline]
2615
    fn try_read_from_bytes(source: &[u8]) -> Result<Self, TryReadError<&[u8], Self>>
2616
    where
2617
        Self: Sized,
2618
    {
2619
        let candidate = match CoreMaybeUninit::<Self>::read_from_bytes(source) {
2620
            Ok(candidate) => candidate,
2621
            Err(e) => {
2622
                return Err(TryReadError::Size(e.with_dst()));
2623
            }
2624
        };
2625
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2626
        // its bytes are initialized.
2627
        unsafe { try_read_from(source, candidate) }
2628
    }
2629
2630
    /// Attempts to read a `Self` from the prefix of the given `source`.
2631
    ///
2632
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
2633
    /// of `source`, returning that `Self` and any remaining bytes. If
2634
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
2635
    /// of `Self`, it returns `Err`.
2636
    ///
2637
    /// # Examples
2638
    ///
2639
    /// ```
2640
    /// use zerocopy::TryFromBytes;
2641
    /// # use zerocopy_derive::*;
2642
    ///
2643
    /// // The only valid value of this type is the byte `0xC0`
2644
    /// #[derive(TryFromBytes)]
2645
    /// #[repr(u8)]
2646
    /// enum C0 { xC0 = 0xC0 }
2647
    ///
2648
    /// // The only valid value of this type is the bytes `0xC0C0`.
2649
    /// #[derive(TryFromBytes)]
2650
    /// #[repr(C)]
2651
    /// struct C0C0(C0, C0);
2652
    ///
2653
    /// #[derive(TryFromBytes)]
2654
    /// #[repr(C)]
2655
    /// struct Packet {
2656
    ///     magic_number: C0C0,
2657
    ///     mug_size: u8,
2658
    ///     temperature: u8,
2659
    /// }
2660
    ///
2661
    /// // These are more bytes than are needed to encode a `Packet`.
2662
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
2663
    ///
2664
    /// let (packet, suffix) = Packet::try_read_from_prefix(bytes).unwrap();
2665
    ///
2666
    /// assert_eq!(packet.mug_size, 240);
2667
    /// assert_eq!(packet.temperature, 77);
2668
    /// assert_eq!(suffix, &[0u8, 1, 2, 3, 4, 5, 6][..]);
2669
    ///
2670
    /// // These bytes are not a valid instance of `Packet`.
2671
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
2672
    /// assert!(Packet::try_read_from_prefix(bytes).is_err());
2673
    /// ```
2674
    #[must_use = "has no side effects"]
2675
    #[inline]
2676
    fn try_read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), TryReadError<&[u8], Self>>
2677
    where
2678
        Self: Sized,
2679
    {
2680
        let (candidate, suffix) = match CoreMaybeUninit::<Self>::read_from_prefix(source) {
2681
            Ok(candidate) => candidate,
2682
            Err(e) => {
2683
                return Err(TryReadError::Size(e.with_dst()));
2684
            }
2685
        };
2686
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2687
        // its bytes are initialized.
2688
        unsafe { try_read_from(source, candidate).map(|slf| (slf, suffix)) }
2689
    }
2690
2691
    /// Attempts to read a `Self` from the suffix of the given `source`.
2692
    ///
2693
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
2694
    /// of `source`, returning that `Self` and any preceding bytes. If
2695
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
2696
    /// of `Self`, it returns `Err`.
2697
    ///
2698
    /// # Examples
2699
    ///
2700
    /// ```
2701
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2702
    /// use zerocopy::TryFromBytes;
2703
    /// # use zerocopy_derive::*;
2704
    ///
2705
    /// // The only valid value of this type is the byte `0xC0`
2706
    /// #[derive(TryFromBytes)]
2707
    /// #[repr(u8)]
2708
    /// enum C0 { xC0 = 0xC0 }
2709
    ///
2710
    /// // The only valid value of this type is the bytes `0xC0C0`.
2711
    /// #[derive(TryFromBytes)]
2712
    /// #[repr(C)]
2713
    /// struct C0C0(C0, C0);
2714
    ///
2715
    /// #[derive(TryFromBytes)]
2716
    /// #[repr(C)]
2717
    /// struct Packet {
2718
    ///     magic_number: C0C0,
2719
    ///     mug_size: u8,
2720
    ///     temperature: u8,
2721
    /// }
2722
    ///
2723
    /// // These are more bytes than are needed to encode a `Packet`.
2724
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0xC0, 0xC0, 240, 77][..];
2725
    ///
2726
    /// let (prefix, packet) = Packet::try_read_from_suffix(bytes).unwrap();
2727
    ///
2728
    /// assert_eq!(packet.mug_size, 240);
2729
    /// assert_eq!(packet.temperature, 77);
2730
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
2731
    ///
2732
    /// // These bytes are not valid instance of `Packet`.
2733
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0x10, 0xC0, 240, 77][..];
2734
    /// assert!(Packet::try_read_from_suffix(bytes).is_err());
2735
    /// ```
2736
    #[must_use = "has no side effects"]
2737
    #[inline]
2738
    fn try_read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), TryReadError<&[u8], Self>>
2739
    where
2740
        Self: Sized,
2741
    {
2742
        let (prefix, candidate) = match CoreMaybeUninit::<Self>::read_from_suffix(source) {
2743
            Ok(candidate) => candidate,
2744
            Err(e) => {
2745
                return Err(TryReadError::Size(e.with_dst()));
2746
            }
2747
        };
2748
        // SAFETY: `candidate` was copied from from `source: &[u8]`, so all of
2749
        // its bytes are initialized.
2750
        unsafe { try_read_from(source, candidate).map(|slf| (prefix, slf)) }
2751
    }
2752
}
2753
2754
#[inline(always)]
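// Shared helper behind the prefix/suffix flavors of `try_ref_from_*`: casts
// `source` into a `&T` plus the leftover bytes according to `cast_type`, then
// validates the candidate's bit pattern before exposing the reference.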
2755
fn try_ref_from_prefix_suffix<T: TryFromBytes + KnownLayout + Immutable + ?Sized>(
2756
    source: &[u8],
2757
    cast_type: CastType,
2758
    meta: Option<T::PointerMetadata>,
2759
) -> Result<(&T, &[u8]), TryCastError<&[u8], T>> {
2760
    match Ptr::from_ref(source).try_cast_into::<T, BecauseImmutable>(cast_type, meta) {
2761
        Ok((source, prefix_suffix)) => {
2762
            // This call may panic. If that happens, it doesn't cause any soundness
2763
            // issues, as we have not generated any invalid state which we need to
2764
            // fix before returning.
2765
            //
2766
            // Note that one panic or post-monomorphization error condition is
2767
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2768
            // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
2769
            // condition will not happen.
2770
            match source.try_into_valid() {
2771
                Ok(valid) => Ok((valid.as_ref(), prefix_suffix.as_ref())),
2772
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into()),
2773
            }
2774
        }
2775
        Err(e) => Err(e.map_src(Ptr::as_ref).into()),
2776
    }
2777
}
2778
2779
#[inline(always)]
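// Exclusive-reference counterpart of `try_ref_from_prefix_suffix`: casts
// `candidate` into a `&mut T` plus the leftover bytes, validating the bit
// pattern through an exclusive `Ptr` so that no `T: Immutable` bound is
// required.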
2780
fn try_mut_from_prefix_suffix<T: IntoBytes + TryFromBytes + KnownLayout + ?Sized>(
2781
    candidate: &mut [u8],
2782
    cast_type: CastType,
2783
    meta: Option<T::PointerMetadata>,
2784
) -> Result<(&mut T, &mut [u8]), TryCastError<&mut [u8], T>> {
2785
    match Ptr::from_mut(candidate).try_cast_into::<T, BecauseExclusive>(cast_type, meta) {
2786
        Ok((candidate, prefix_suffix)) => {
2787
            // This call may panic. If that happens, it doesn't cause any soundness
2788
            // issues, as we have not generated any invalid state which we need to
2789
            // fix before returning.
2790
            //
2791
            // Note that one panic or post-monomorphization error condition is
2792
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2793
            // pointer when `Self: !Immutable`. Since the `Ptr` here is exclusive,
2794
            // this panic condition will not happen.
2795
            match candidate.try_into_valid() {
2796
                Ok(valid) => Ok((valid.as_mut(), prefix_suffix.as_mut())),
2797
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into()),
2798
            }
2799
        }
2800
        Err(e) => Err(e.map_src(Ptr::as_mut).into()),
2801
    }
2802
}
2803
2804
#[inline(always)]
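// Reverses the order of a 2-tuple. Used as `.map(swap)` so that suffix-oriented
// conversions can return `(prefix, Self)` from helpers that naturally produce
// `(Self, prefix)`.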
2805
fn swap<T, U>((t, u): (T, U)) -> (U, T) {
2806
    (u, t)
2807
}
2808
2809
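/// Attempts to convert the fully-initialized bytes of `candidate` into a `T`,
/// returning the original `source` inside the error if `T::is_bit_valid`
/// rejects the candidate's bit pattern.
///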
/// # Safety
2810
///
2811
/// All bytes of `candidate` must be initialized.
2812
#[inline(always)]
2813
unsafe fn try_read_from<S, T: TryFromBytes>(
2814
    source: S,
2815
    mut candidate: CoreMaybeUninit<T>,
2816
) -> Result<T, TryReadError<S, T>> {
2817
    // We use `from_mut` despite not mutating via `c_ptr` so that we don't need
2818
    // to add a `T: Immutable` bound.
2819
    let c_ptr = Ptr::from_mut(&mut candidate);
2820
    let c_ptr = c_ptr.transparent_wrapper_into_inner();
2821
    // SAFETY: `c_ptr` has no uninitialized sub-ranges because it was derived from
2822
    // `candidate`, which the caller promises is entirely initialized.
2823
    let c_ptr = unsafe { c_ptr.assume_validity::<invariant::Initialized>() };
2824
2825
    // This call may panic. If that happens, it doesn't cause any soundness
2826
    // issues, as we have not generated any invalid state which we need to
2827
    // fix before returning.
2828
    //
2829
    // Note that one panic or post-monomorphization error condition is
2830
    // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2831
    // pointer when `Self: !Immutable`. Since the `Ptr` here is exclusive,
2832
    // this panic condition will not happen.
2833
    if !T::is_bit_valid(c_ptr.forget_aligned()) {
2834
        return Err(ValidityError::new(source).into());
2835
    }
2836
2837
    // SAFETY: We just validated that `candidate` contains a valid `T`.
2838
    Ok(unsafe { candidate.assume_init() })
2839
}
2840
2841
/// Types for which a sequence of bytes all set to zero represents a valid
2842
/// instance of the type.
2843
///
2844
/// Any memory region of the appropriate length which is guaranteed to contain
2845
/// only zero bytes can be viewed as any `FromZeros` type with no runtime
2846
/// overhead. This is useful whenever memory is known to be in a zeroed state,
2847
/// such as memory returned from some allocation routines.
2848
///
2849
/// # Warning: Padding bytes
2850
///
2851
/// Note that, when a value is moved or copied, only the non-padding bytes of
2852
/// that value are guaranteed to be preserved. It is unsound to assume that
2853
/// values written to padding bytes are preserved after a move or copy. For more
2854
/// details, see the [`FromBytes` docs][frombytes-warning-padding-bytes].
2855
///
2856
/// [frombytes-warning-padding-bytes]: FromBytes#warning-padding-bytes
2857
///
2858
/// # Implementation
2859
///
2860
/// **Do not implement this trait yourself!** Instead, use
2861
/// [`#[derive(FromZeros)]`][derive]; e.g.:
2862
///
2863
/// ```
2864
/// # use zerocopy_derive::{FromZeros, Immutable};
2865
/// #[derive(FromZeros)]
2866
/// struct MyStruct {
2867
/// # /*
2868
///     ...
2869
/// # */
2870
/// }
2871
///
2872
/// #[derive(FromZeros)]
2873
/// #[repr(u8)]
2874
/// enum MyEnum {
2875
/// #   Variant0,
2876
/// # /*
2877
///     ...
2878
/// # */
2879
/// }
2880
///
2881
/// #[derive(FromZeros, Immutable)]
2882
/// union MyUnion {
2883
/// #   variant: u8,
2884
/// # /*
2885
///     ...
2886
/// # */
2887
/// }
2888
/// ```
2889
///
2890
/// This derive performs a sophisticated, compile-time safety analysis to
2891
/// determine whether a type is `FromZeros`.
2892
///
2893
/// # Safety
2894
///
2895
/// *This section describes what is required in order for `T: FromZeros`, and
2896
/// what unsafe code may assume of such types. If you don't plan on implementing
2897
/// `FromZeros` manually, and you don't plan on writing unsafe code that
2898
/// operates on `FromZeros` types, then you don't need to read this section.*
2899
///
2900
/// If `T: FromZeros`, then unsafe code may assume that it is sound to produce a
2901
/// `T` whose bytes are all initialized to zero. If a type is marked as
2902
/// `FromZeros` which violates this contract, it may cause undefined behavior.
2903
///
2904
/// `#[derive(FromZeros)]` only permits [types which satisfy these
2905
/// requirements][derive-analysis].
2906
///
2907
#[cfg_attr(
2908
    feature = "derive",
2909
    doc = "[derive]: zerocopy_derive::FromZeros",
2910
    doc = "[derive-analysis]: zerocopy_derive::FromZeros#analysis"
2911
)]
2912
#[cfg_attr(
2913
    not(feature = "derive"),
2914
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html"),
2915
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html#analysis"),
2916
)]
2917
#[cfg_attr(
2918
    zerocopy_diagnostic_on_unimplemented_1_78_0,
2919
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromZeros)]` to `{Self}`")
2920
)]
2921
pub unsafe trait FromZeros: TryFromBytes {
2922
    // The `Self: Sized` bound makes it so that `FromZeros` is still object
2923
    // safe.
2924
    #[doc(hidden)]
2925
    fn only_derive_is_allowed_to_implement_this_trait()
2926
    where
2927
        Self: Sized;
2928
2929
    /// Overwrites `self` with zeros.
2930
    ///
2931
    /// Sets every byte in `self` to 0. While this is similar to doing `*self =
2932
    /// Self::new_zeroed()`, it differs in that `zero` does not semantically
2933
    /// drop the current value and replace it with a new one — it simply
2934
    /// modifies the bytes of the existing value.
2935
    ///
2936
    /// # Examples
2937
    ///
2938
    /// ```
2939
    /// # use zerocopy::FromZeros;
2940
    /// # use zerocopy_derive::*;
2941
    /// #
2942
    /// #[derive(FromZeros)]
2943
    /// #[repr(C)]
2944
    /// struct PacketHeader {
2945
    ///     src_port: [u8; 2],
2946
    ///     dst_port: [u8; 2],
2947
    ///     length: [u8; 2],
2948
    ///     checksum: [u8; 2],
2949
    /// }
2950
    ///
2951
    /// let mut header = PacketHeader {
2952
    ///     src_port: 100u16.to_be_bytes(),
2953
    ///     dst_port: 200u16.to_be_bytes(),
2954
    ///     length: 300u16.to_be_bytes(),
2955
    ///     checksum: 400u16.to_be_bytes(),
2956
    /// };
2957
    ///
2958
    /// header.zero();
2959
    ///
2960
    /// assert_eq!(header.src_port, [0, 0]);
2961
    /// assert_eq!(header.dst_port, [0, 0]);
2962
    /// assert_eq!(header.length, [0, 0]);
2963
    /// assert_eq!(header.checksum, [0, 0]);
2964
    /// ```
2965
    #[inline(always)]
2966
    fn zero(&mut self) {
2967
        let slf: *mut Self = self;
2968
        let len = mem::size_of_val(self);
2969
        // SAFETY:
2970
        // - `self` is guaranteed by the type system to be valid for writes of
2971
        //   size `size_of_val(self)`.
2972
        // - `u8`'s alignment is 1, and thus `self` is guaranteed to be aligned
2973
        //   as required by `u8`.
2974
        // - Since `Self: FromZeros`, the all-zeros instance is a valid instance
2975
        //   of `Self`.
2976
        //
2977
        // TODO(#429): Add references to docs and quotes.
2978
        unsafe { ptr::write_bytes(slf.cast::<u8>(), 0, len) };
2979
    }
2980
2981
    /// Creates an instance of `Self` from zeroed bytes.
2982
    ///
2983
    /// # Examples
2984
    ///
2985
    /// ```
2986
    /// # use zerocopy::FromZeros;
2987
    /// # use zerocopy_derive::*;
2988
    /// #
2989
    /// #[derive(FromZeros)]
2990
    /// #[repr(C)]
2991
    /// struct PacketHeader {
2992
    ///     src_port: [u8; 2],
2993
    ///     dst_port: [u8; 2],
2994
    ///     length: [u8; 2],
2995
    ///     checksum: [u8; 2],
2996
    /// }
2997
    ///
2998
    /// let header: PacketHeader = FromZeros::new_zeroed();
2999
    ///
3000
    /// assert_eq!(header.src_port, [0, 0]);
3001
    /// assert_eq!(header.dst_port, [0, 0]);
3002
    /// assert_eq!(header.length, [0, 0]);
3003
    /// assert_eq!(header.checksum, [0, 0]);
3004
    /// ```
3005
    #[must_use = "has no side effects"]
3006
    #[inline(always)]
3007
    fn new_zeroed() -> Self
3008
    where
3009
        Self: Sized,
3010
    {
3011
        // SAFETY: `FromZeros` says that the all-zeros bit pattern is legal.
3012
        unsafe { mem::zeroed() }
3013
    }
3014
3015
    /// Creates a `Box<Self>` from zeroed bytes.
3016
    ///
3017
    /// This function is useful for allocating large values on the heap and
3018
    /// zero-initializing them, without ever creating a temporary instance of
3019
    /// `Self` on the stack. For example, `<[u8; 1048576]>::new_box_zeroed()`
3020
    /// will allocate `[u8; 1048576]` directly on the heap; it does not require
3021
    /// storing `[u8; 1048576]` in a temporary variable on the stack.
3022
    ///
3023
    /// On systems that use a heap implementation that supports allocating from
3024
    /// pre-zeroed memory, using `new_box_zeroed` (or related functions) may
3025
    /// have performance benefits.
3026
    ///
3027
    /// # Errors
3028
    ///
3029
    /// Returns an error on allocation failure. Allocation failure is guaranteed
3030
    /// never to cause a panic or an abort.
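    ///
    /// # Examples
    ///
    /// A minimal sketch, assuming the `alloc` feature is enabled; the `Page`
    /// type here is hypothetical:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct Page([u8; 4096]);
    ///
    /// // The zeroed `Page` is constructed directly in its heap allocation.
    /// let page = Page::new_box_zeroed().unwrap();
    /// assert!(page.0.iter().all(|&byte| byte == 0));
    /// ```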
3031
    #[must_use = "has no side effects (other than allocation)"]
3032
    #[cfg(any(feature = "alloc", test))]
3033
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3034
    #[inline]
3035
    fn new_box_zeroed() -> Result<Box<Self>, AllocError>
3036
    where
3037
        Self: Sized,
3038
    {
3039
        // If `T` is a ZST, then return a proper boxed instance of it. There is
3040
        // no allocation, but `Box` does require a correct dangling pointer.
3041
        let layout = Layout::new::<Self>();
3042
        if layout.size() == 0 {
3043
            // Construct the `Box` from a dangling pointer to avoid calling
3044
            // `Self::new_zeroed`. This ensures that stack space is never
3045
            // allocated for `Self` even on lower opt-levels where this branch
3046
            // might not get optimized out.
3047
3048
            // SAFETY: Per [1], when `T` is a ZST, `Box<T>`'s only validity
3049
            // requirements are that the pointer is non-null and sufficiently
3050
            // aligned. Per [2], `NonNull::dangling` produces a pointer which
3051
            // is sufficiently aligned. Since the produced pointer is a
3052
            // `NonNull`, it is non-null.
3053
            //
3054
            // [1] Per https://doc.rust-lang.org/nightly/std/boxed/index.html#memory-layout:
3055
            //
3056
            //   For zero-sized values, the `Box` pointer has to be non-null and sufficiently aligned.
3057
            //
3058
            // [2] Per https://doc.rust-lang.org/std/ptr/struct.NonNull.html#method.dangling:
3059
            //
3060
            //   Creates a new `NonNull` that is dangling, but well-aligned.
3061
            return Ok(unsafe { Box::from_raw(NonNull::dangling().as_ptr()) });
3062
        }
3063
3064
        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
3065
        #[allow(clippy::undocumented_unsafe_blocks)]
3066
        let ptr = unsafe { alloc::alloc::alloc_zeroed(layout).cast::<Self>() };
3067
        if ptr.is_null() {
3068
            return Err(AllocError);
3069
        }
3070
        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
3071
        #[allow(clippy::undocumented_unsafe_blocks)]
3072
        Ok(unsafe { Box::from_raw(ptr) })
3073
    }
3074
3075
    /// Creates a `Box<Self>` from zeroed bytes with `count` trailing elements.
3076
    ///
3077
    /// This function is useful for allocating large unsized values on the
3078
    /// heap and zero-initializing them, without ever creating a temporary
3079
    /// instance on the stack. For example,
3080
    /// `<[u8]>::new_box_zeroed_with_elems(1048576)` will allocate the slice directly on
3081
    /// the heap; it does not require storing the slice on the stack.
3082
    ///
3083
    /// On systems that use a heap implementation that supports allocating from
3084
    /// pre-zeroed memory, using `new_box_zeroed_with_elems` may have performance
3085
    /// benefits.
3086
    ///
3087
    /// If `Self`'s trailing slice element is a zero-sized type, then this
3088
    /// function will return a `Box<Self>` whose trailing slice has the correct
3089
    /// `len`. Such a box cannot contain any actual information, but its `len()`
3090
    /// will report the correct value.
3091
    ///
3092
    /// # Errors
3093
    ///
3094
    /// Returns an error on allocation failure. Allocation failure is
3095
    /// guaranteed never to cause a panic or an abort.
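    ///
    /// # Examples
    ///
    /// A minimal sketch, assuming the `alloc` feature is enabled:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// // Allocate a zeroed `Box<[u64]>` with 16 trailing elements on the heap.
    /// let elems = <[u64]>::new_box_zeroed_with_elems(16).unwrap();
    /// assert_eq!(elems.len(), 16);
    /// assert!(elems.iter().all(|&elem| elem == 0));
    /// ```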
3096
    #[must_use = "has no side effects (other than allocation)"]
3097
    #[cfg(feature = "alloc")]
3098
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3099
    #[inline]
3100
    fn new_box_zeroed_with_elems(count: usize) -> Result<Box<Self>, AllocError>
3101
    where
3102
        Self: KnownLayout<PointerMetadata = usize>,
3103
    {
3104
        // SAFETY: `alloc::alloc::alloc_zeroed` is a valid argument of
3105
        // `new_box`. The referent of the pointer returned by `alloc_zeroed`
3106
        // (and, consequently, the `Box` derived from it) is a valid instance of
3107
        // `Self`, because `Self` is `FromZeros`.
3108
        unsafe { crate::util::new_box(count, alloc::alloc::alloc_zeroed) }
3109
    }
3110
3111
    #[deprecated(since = "0.8.0", note = "renamed to `FromZeros::new_box_zeroed_with_elems`")]
3112
    #[doc(hidden)]
3113
    #[cfg(feature = "alloc")]
3114
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3115
    #[must_use = "has no side effects (other than allocation)"]
3116
    #[inline(always)]
3117
    fn new_box_slice_zeroed(len: usize) -> Result<Box<[Self]>, AllocError>
3118
    where
3119
        Self: Sized,
3120
    {
3121
        <[Self]>::new_box_zeroed_with_elems(len)
3122
    }
3123
3124
    /// Creates a `Vec<Self>` from zeroed bytes.
3125
    ///
3126
    /// This function is useful for allocating large `Vec`s and
3127
    /// zero-initializing them, without ever creating a temporary instance of
3128
    /// `[Self; _]` (or many temporary instances of `Self`) on the stack. For
3129
    /// example, `u8::new_vec_zeroed(1048576)` will allocate directly on the
3130
    /// heap; it does not require storing intermediate values on the stack.
3131
    ///
3132
    /// On systems that use a heap implementation that supports allocating from
3133
    /// pre-zeroed memory, using `new_vec_zeroed` may have performance benefits.
3134
    ///
3135
    /// If `Self` is a zero-sized type, then this function will return a
3136
    /// `Vec<Self>` that has the correct `len`. Such a `Vec` cannot contain any
3137
    /// actual information, but its `len()` property will report the correct
3138
    /// value.
3139
    ///
3140
    /// # Errors
3141
    ///
3142
    /// Returns an error on allocation failure. Allocation failure is
3143
    /// guaranteed never to cause a panic or an abort.
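    ///
    /// # Examples
    ///
    /// A minimal sketch, assuming the `alloc` feature is enabled:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let v = u32::new_vec_zeroed(4).unwrap();
    /// assert_eq!(v, vec![0u32; 4]);
    /// ```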
3144
    #[must_use = "has no side effects (other than allocation)"]
3145
    #[cfg(feature = "alloc")]
3146
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3147
    #[inline(always)]
3148
    fn new_vec_zeroed(len: usize) -> Result<Vec<Self>, AllocError>
3149
    where
3150
        Self: Sized,
3151
    {
3152
        <[Self]>::new_box_zeroed_with_elems(len).map(Into::into)
3153
    }
3154
3155
    /// Extends a `Vec<Self>` by pushing `additional` new items onto the end of
3156
    /// the vector. The new items are initialized with zeros.
3157
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
3158
    #[cfg(feature = "alloc")]
3159
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
3160
    #[inline(always)]
3161
    fn extend_vec_zeroed(v: &mut Vec<Self>, additional: usize) -> Result<(), AllocError>
3162
    where
3163
        Self: Sized,
3164
    {
3165
        // PANICS: We pass `v.len()` for `position`, so the `position > v.len()`
3166
        // panic condition is not satisfied.
3167
        <Self as FromZeros>::insert_vec_zeroed(v, v.len(), additional)
3168
    }
3169
3170
    /// Inserts `additional` new items into `Vec<Self>` at `position`. The new
3171
    /// items are initialized with zeros.
3172
    ///
3173
    /// # Panics
3174
    ///
3175
    /// Panics if `position > v.len()`.
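    ///
    /// # Examples
    ///
    /// A minimal sketch, assuming the `alloc` feature is enabled and a
    /// toolchain recent enough that this method is available:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let mut v = vec![1u8, 2, 3];
    /// // Insert two zeroed elements at index 1.
    /// u8::insert_vec_zeroed(&mut v, 1, 2).unwrap();
    /// assert_eq!(v, [1, 0, 0, 2, 3]);
    /// ```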
3176
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
3177
    #[cfg(feature = "alloc")]
3178
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
3179
    #[inline]
3180
    fn insert_vec_zeroed(
3181
        v: &mut Vec<Self>,
3182
        position: usize,
3183
        additional: usize,
3184
    ) -> Result<(), AllocError>
3185
    where
3186
        Self: Sized,
3187
    {
3188
        assert!(position <= v.len());
3189
        // We only conditionally compile on versions on which `try_reserve` is
3190
        // stable; the Clippy lint is a false positive.
3191
        #[allow(clippy::incompatible_msrv)]
3192
        v.try_reserve(additional).map_err(|_| AllocError)?;
3193
        // SAFETY: The `try_reserve` call guarantees that these cannot overflow:
3194
        // * `ptr.add(position)`
3195
        // * `position + additional`
3196
        // * `v.len() + additional`
3197
        //
3198
        // `v.len() - position` cannot overflow because we asserted that
3199
        // `position <= v.len()`.
3200
        unsafe {
3201
            // This is a potentially overlapping copy.
3202
            let ptr = v.as_mut_ptr();
3203
            #[allow(clippy::arithmetic_side_effects)]
3204
            ptr.add(position).copy_to(ptr.add(position + additional), v.len() - position);
3205
            ptr.add(position).write_bytes(0, additional);
3206
            #[allow(clippy::arithmetic_side_effects)]
3207
            v.set_len(v.len() + additional);
3208
        }
3209
3210
        Ok(())
3211
    }
3212
}
3213
3214
/// Analyzes whether a type is [`FromBytes`].
3215
///
3216
/// This derive analyzes, at compile time, whether the annotated type satisfies
3217
/// the [safety conditions] of `FromBytes` and implements `FromBytes` and its
3218
/// supertraits if it is sound to do so. This derive can be applied to structs,
3219
/// enums, and unions;
3220
/// e.g.:
3221
///
3222
/// ```
3223
/// # use zerocopy_derive::{FromBytes, FromZeros, Immutable};
3224
/// #[derive(FromBytes)]
3225
/// struct MyStruct {
3226
/// # /*
3227
///     ...
3228
/// # */
3229
/// }
3230
///
3231
/// #[derive(FromBytes)]
3232
/// #[repr(u8)]
3233
/// enum MyEnum {
3234
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
3235
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
3236
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
3237
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
3238
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
3239
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
3240
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
3241
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
3242
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
3243
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
3244
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
3245
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
3246
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
3247
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
3248
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
3249
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
3250
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
3251
/// #   VFF,
3252
/// # /*
3253
///     ...
3254
/// # */
3255
/// }
3256
///
3257
/// #[derive(FromBytes, Immutable)]
3258
/// union MyUnion {
3259
/// #   variant: u8,
3260
/// # /*
3261
///     ...
3262
/// # */
3263
/// }
3264
/// ```
3265
///
3266
/// [safety conditions]: trait@FromBytes#safety
3267
///
3268
/// # Analysis
3269
///
3270
/// *This section describes, roughly, the analysis performed by this derive to
3271
/// determine whether it is sound to implement `FromBytes` for a given type.
3272
/// Unless you are modifying the implementation of this derive, or attempting to
3273
/// manually implement `FromBytes` for a type yourself, you don't need to read
3274
/// this section.*
3275
///
3276
/// If a type has the following properties, then this derive can implement
3277
/// `FromBytes` for that type:
3278
///
3279
/// - If the type is a struct, all of its fields must be `FromBytes`.
3280
/// - If the type is an enum:
3281
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
3282
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
3283
///   - The maximum number of discriminants must be used (so that every possible
3284
///     bit pattern is a valid one). Be very careful when using the `C`,
3285
///     `usize`, or `isize` representations, as their size is
3286
///     platform-dependent.
3287
///   - Its fields must be `FromBytes`.
3288
///
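/// For example, the discriminant requirement above means that an enum which
/// does not use every possible value of its `repr` is rejected; a minimal
/// sketch:
///
/// ```compile_fail
/// # use zerocopy_derive::FromBytes;
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum TwoOfTwoFiftySix { A, B } // covers only 2 of the 256 possible `u8` values
/// ```
///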
3289
/// This analysis is subject to change. Unsafe code may *only* rely on the
3290
/// documented [safety conditions] of `FromBytes`, and must *not* rely on the
3291
/// implementation details of this derive.
3292
///
3293
/// ## Why isn't an explicit representation required for structs?
3294
///
3295
/// Neither this derive, nor the [safety conditions] of `FromBytes`, requires
3296
/// that structs are marked with `#[repr(C)]`.
3297
///
3298
/// Per the [Rust reference][reference],
3299
///
3300
/// > The representation of a type can change the padding between fields, but
3301
/// > does not change the layout of the fields themselves.
3302
///
3303
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
3304
///
3305
/// Since the layout of structs only consists of padding bytes and field bytes,
3306
/// a struct is soundly `FromBytes` if:
3307
/// 1. its padding is soundly `FromBytes`, and
3308
/// 2. its fields are soundly `FromBytes`.
3309
///
3310
/// The first condition always holds: padding bytes do not have
3311
/// any validity constraints. A [discussion] of this question in the Unsafe Code
3312
/// Guidelines Working Group concluded that it would be virtually unimaginable
3313
/// for future versions of rustc to add validity constraints to padding bytes.
3314
///
3315
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
3316
///
3317
/// Whether a struct is soundly `FromBytes` therefore solely depends on whether
3318
/// its fields are `FromBytes`.
3319
// TODO(#146): Document why we don't require an enum to have an explicit `repr`
3320
// attribute.
3321
#[cfg(any(feature = "derive", test))]
3322
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
3323
pub use zerocopy_derive::FromBytes;
3324
3325
/// Types for which any bit pattern is valid.
3326
///
3327
/// Any memory region of the appropriate length which contains initialized bytes
3328
/// can be viewed as any `FromBytes` type with no runtime overhead. This is
3329
/// useful for efficiently parsing bytes as structured data.
3330
///
3331
/// # Warning: Padding bytes
3332
///
3333
/// Note that, when a value is moved or copied, only the non-padding bytes of
3334
/// that value are guaranteed to be preserved. It is unsound to assume that
3335
/// values written to padding bytes are preserved after a move or copy. For
3336
/// example, the following is unsound:
3337
///
3338
/// ```rust,no_run
3339
/// use core::mem::{size_of, transmute};
3340
/// use zerocopy::FromZeros;
3341
/// # use zerocopy_derive::*;
3342
///
3343
/// // Assume `Foo` is a type with padding bytes.
3344
/// #[derive(FromZeros, Default)]
3345
/// struct Foo {
3346
/// # /*
3347
///     ...
3348
/// # */
3349
/// }
3350
///
3351
/// let mut foo: Foo = Foo::default();
3352
/// FromZeros::zero(&mut foo);
3353
/// // UNSOUND: Although `FromZeros::zero` writes zeros to all bytes of `foo`,
3354
/// // those writes are not guaranteed to be preserved in padding bytes when
3355
/// // `foo` is moved, so this may expose padding bytes as `u8`s.
3356
/// let foo_bytes: [u8; size_of::<Foo>()] = unsafe { transmute(foo) };
3357
/// ```
3358
///
3359
/// # Implementation
3360
///
3361
/// **Do not implement this trait yourself!** Instead, use
3362
/// [`#[derive(FromBytes)]`][derive]; e.g.:
3363
///
3364
/// ```
3365
/// # use zerocopy_derive::{FromBytes, Immutable};
3366
/// #[derive(FromBytes)]
3367
/// struct MyStruct {
3368
/// # /*
3369
///     ...
3370
/// # */
3371
/// }
3372
///
3373
/// #[derive(FromBytes)]
3374
/// #[repr(u8)]
3375
/// enum MyEnum {
3376
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
3377
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
3378
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
3379
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
3380
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
3381
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
3382
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
3383
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
3384
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
3385
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
3386
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
3387
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
3388
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
3389
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
3390
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
3391
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
3392
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
3393
/// #   VFF,
3394
/// # /*
3395
///     ...
3396
/// # */
3397
/// }
3398
///
3399
/// #[derive(FromBytes, Immutable)]
3400
/// union MyUnion {
3401
/// #   variant: u8,
3402
/// # /*
3403
///     ...
3404
/// # */
3405
/// }
3406
/// ```
3407
///
3408
/// This derive performs a sophisticated, compile-time safety analysis to
3409
/// determine whether a type is `FromBytes`.
3410
///
3411
/// # Safety
3412
///
3413
/// *This section describes what is required in order for `T: FromBytes`, and
3414
/// what unsafe code may assume of such types. If you don't plan on implementing
3415
/// `FromBytes` manually, and you don't plan on writing unsafe code that
3416
/// operates on `FromBytes` types, then you don't need to read this section.*
3417
///
3418
/// If `T: FromBytes`, then unsafe code may assume that it is sound to produce a
3419
/// `T` whose bytes are initialized to any sequence of valid `u8`s (in other
3420
/// words, any byte value which is not uninitialized). If a type is marked as
3421
/// `FromBytes` which violates this contract, it may cause undefined behavior.
3422
///
3423
/// `#[derive(FromBytes)]` only permits [types which satisfy these
3424
/// requirements][derive-analysis].
3425
///
3426
#[cfg_attr(
3427
    feature = "derive",
3428
    doc = "[derive]: zerocopy_derive::FromBytes",
3429
    doc = "[derive-analysis]: zerocopy_derive::FromBytes#analysis"
3430
)]
3431
#[cfg_attr(
3432
    not(feature = "derive"),
3433
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html"),
3434
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html#analysis"),
3435
)]
3436
#[cfg_attr(
3437
    zerocopy_diagnostic_on_unimplemented_1_78_0,
3438
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromBytes)]` to `{Self}`")
3439
)]
3440
pub unsafe trait FromBytes: FromZeros {
3441
    // The `Self: Sized` bound makes it so that `FromBytes` is still object
3442
    // safe.
3443
    #[doc(hidden)]
3444
    fn only_derive_is_allowed_to_implement_this_trait()
3445
    where
3446
        Self: Sized;
3447
3448
    /// Interprets the given `source` as a `&Self`.
3449
    ///
3450
    /// This method attempts to return a reference to `source` interpreted as a
3451
    /// `Self`. If the length of `source` is not a [valid size of
3452
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3453
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3454
    /// [infallibly discard the alignment error][size-error-from].
3455
    ///
3456
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3457
    ///
3458
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3459
    /// [self-unaligned]: Unaligned
3460
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3461
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3462
    ///
3463
    /// # Compile-Time Assertions
3464
    ///
3465
    /// This method cannot yet be used on unsized types whose dynamically-sized
3466
    /// component is zero-sized. Attempting to use this method on such types
3467
    /// results in a compile-time assertion error; e.g.:
3468
    ///
3469
    /// ```compile_fail,E0080
3470
    /// use zerocopy::*;
3471
    /// # use zerocopy_derive::*;
3472
    ///
3473
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3474
    /// #[repr(C)]
3475
    /// struct ZSTy {
3476
    ///     leading_sized: u16,
3477
    ///     trailing_dst: [()],
3478
    /// }
3479
    ///
3480
    /// let _ = ZSTy::ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
3481
    /// ```
3482
    ///
3483
    /// # Examples
3484
    ///
3485
    /// ```
3486
    /// use zerocopy::FromBytes;
3487
    /// # use zerocopy_derive::*;
3488
    ///
3489
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3490
    /// #[repr(C)]
3491
    /// struct PacketHeader {
3492
    ///     src_port: [u8; 2],
3493
    ///     dst_port: [u8; 2],
3494
    ///     length: [u8; 2],
3495
    ///     checksum: [u8; 2],
3496
    /// }
3497
    ///
3498
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3499
    /// #[repr(C)]
3500
    /// struct Packet {
3501
    ///     header: PacketHeader,
3502
    ///     body: [u8],
3503
    /// }
3504
    ///
3505
    /// // These bytes encode a `Packet`.
3506
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11][..];
3507
    ///
3508
    /// let packet = Packet::ref_from_bytes(bytes).unwrap();
3509
    ///
3510
    /// assert_eq!(packet.header.src_port, [0, 1]);
3511
    /// assert_eq!(packet.header.dst_port, [2, 3]);
3512
    /// assert_eq!(packet.header.length, [4, 5]);
3513
    /// assert_eq!(packet.header.checksum, [6, 7]);
3514
    /// assert_eq!(packet.body, [8, 9, 10, 11]);
3515
    /// ```
3516
    #[must_use = "has no side effects"]
3517
    #[inline]
3518
    fn ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>>
3519
    where
3520
        Self: KnownLayout + Immutable,
3521
    {
3522
        static_assert_dst_is_not_zst!(Self);
3523
        match Ptr::from_ref(source).try_cast_into_no_leftover::<_, BecauseImmutable>(None) {
3524
            Ok(ptr) => Ok(ptr.bikeshed_recall_valid().as_ref()),
3525
            Err(err) => Err(err.map_src(|src| src.as_ref())),
3526
        }
3527
    }
3528
3529
    /// Interprets the prefix of the given `source` as a `&Self` without
3530
    /// copying.
3531
    ///
3532
    /// This method computes the [largest possible size of `Self`][valid-size]
3533
    /// that can fit in the leading bytes of `source`, then attempts to return
3534
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3535
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
3536
    /// is not appropriately aligned, this returns `Err`. If [`Self:
3537
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3538
    /// error][size-error-from].
3539
    ///
3540
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3541
    ///
3542
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3543
    /// [self-unaligned]: Unaligned
3544
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3545
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3546
    ///
3547
    /// # Compile-Time Assertions
3548
    ///
3549
    /// This method cannot yet be used on unsized types whose dynamically-sized
3550
    /// component is zero-sized. See [`ref_from_prefix_with_elems`], which does
3551
    /// support such types. Attempting to use this method on such types results
3552
    /// in a compile-time assertion error; e.g.:
3553
    ///
3554
    /// ```compile_fail,E0080
3555
    /// use zerocopy::*;
3556
    /// # use zerocopy_derive::*;
3557
    ///
3558
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3559
    /// #[repr(C)]
3560
    /// struct ZSTy {
3561
    ///     leading_sized: u16,
3562
    ///     trailing_dst: [()],
3563
    /// }
3564
    ///
3565
    /// let _ = ZSTy::ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
3566
    /// ```
3567
    ///
3568
    /// [`ref_from_prefix_with_elems`]: FromBytes::ref_from_prefix_with_elems
3569
    ///
3570
    /// # Examples
3571
    ///
3572
    /// ```
3573
    /// use zerocopy::FromBytes;
3574
    /// # use zerocopy_derive::*;
3575
    ///
3576
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3577
    /// #[repr(C)]
3578
    /// struct PacketHeader {
3579
    ///     src_port: [u8; 2],
3580
    ///     dst_port: [u8; 2],
3581
    ///     length: [u8; 2],
3582
    ///     checksum: [u8; 2],
3583
    /// }
3584
    ///
3585
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3586
    /// #[repr(C)]
3587
    /// struct Packet {
3588
    ///     header: PacketHeader,
3589
    ///     body: [[u8; 2]],
3590
    /// }
3591
    ///
3592
    /// // These are more bytes than are needed to encode a `Packet`.
3593
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14][..];
3594
    ///
3595
    /// let (packet, suffix) = Packet::ref_from_prefix(bytes).unwrap();
3596
    ///
3597
    /// assert_eq!(packet.header.src_port, [0, 1]);
3598
    /// assert_eq!(packet.header.dst_port, [2, 3]);
3599
    /// assert_eq!(packet.header.length, [4, 5]);
3600
    /// assert_eq!(packet.header.checksum, [6, 7]);
3601
    /// assert_eq!(packet.body, [[8, 9], [10, 11], [12, 13]]);
3602
    /// assert_eq!(suffix, &[14u8][..]);
3603
    /// ```
3604
    #[must_use = "has no side effects"]
3605
    #[inline]
3606
    fn ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
3607
    where
3608
        Self: KnownLayout + Immutable,
3609
    {
3610
        static_assert_dst_is_not_zst!(Self);
3611
        ref_from_prefix_suffix(source, None, CastType::Prefix)
3612
    }
3613
3614
    /// Interprets the suffix of the given bytes as a `&Self`.
3615
    ///
3616
    /// This method computes the [largest possible size of `Self`][valid-size]
3617
    /// that can fit in the trailing bytes of `source`, then attempts to return
3618
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3619
    /// to the preceding bytes. If there are insufficient bytes, or if that
3620
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
3621
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
3622
    /// alignment error][size-error-from].
3623
    ///
3624
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3625
    ///
3626
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3627
    /// [self-unaligned]: Unaligned
3628
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3629
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3630
    ///
3631
    /// # Compile-Time Assertions
3632
    ///
3633
    /// This method cannot yet be used on unsized types whose dynamically-sized
3634
    /// component is zero-sized. See [`ref_from_suffix_with_elems`], which does
3635
    /// support such types. Attempting to use this method on such types results
3636
    /// in a compile-time assertion error; e.g.:
3637
    ///
3638
    /// ```compile_fail,E0080
3639
    /// use zerocopy::*;
3640
    /// # use zerocopy_derive::*;
3641
    ///
3642
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3643
    /// #[repr(C)]
3644
    /// struct ZSTy {
3645
    ///     leading_sized: u16,
3646
    ///     trailing_dst: [()],
3647
    /// }
3648
    ///
3649
    /// let _ = ZSTy::ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
3650
    /// ```
3651
    ///
3652
    /// [`ref_from_suffix_with_elems`]: FromBytes::ref_from_suffix_with_elems
3653
    ///
3654
    /// # Examples
3655
    ///
3656
    /// ```
3657
    /// use zerocopy::FromBytes;
3658
    /// # use zerocopy_derive::*;
3659
    ///
3660
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3661
    /// #[repr(C)]
3662
    /// struct PacketTrailer {
3663
    ///     frame_check_sequence: [u8; 4],
3664
    /// }
3665
    ///
3666
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
3667
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3668
    ///
3669
    /// let (prefix, trailer) = PacketTrailer::ref_from_suffix(bytes).unwrap();
3670
    ///
3671
    /// assert_eq!(prefix, &[0, 1, 2, 3, 4, 5][..]);
3672
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
3673
    /// ```
3674
    #[must_use = "has no side effects"]
3675
    #[inline]
3676
    fn ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
3677
    where
3678
        Self: Immutable + KnownLayout,
3679
    {
3680
        static_assert_dst_is_not_zst!(Self);
3681
        ref_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
3682
    }
3683
3684
    /// Interprets the given `source` as a `&mut Self`.
3685
    ///
3686
    /// This method attempts to return a reference to `source` interpreted as a
3687
    /// `Self`. If the length of `source` is not a [valid size of
3688
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3689
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3690
    /// [infallibly discard the alignment error][size-error-from].
3691
    ///
3692
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3693
    ///
3694
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3695
    /// [self-unaligned]: Unaligned
3696
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3697
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3698
    ///
3699
    /// # Compile-Time Assertions
3700
    ///
3701
    /// This method cannot yet be used on unsized types whose dynamically-sized
3702
    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
3703
    /// support such types. Attempting to use this method on such types results
3704
    /// in a compile-time assertion error; e.g.:
3705
    ///
3706
    /// ```compile_fail,E0080
3707
    /// use zerocopy::*;
3708
    /// # use zerocopy_derive::*;
3709
    ///
3710
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3711
    /// #[repr(C, packed)]
3712
    /// struct ZSTy {
3713
    ///     leading_sized: [u8; 2],
3714
    ///     trailing_dst: [()],
3715
    /// }
3716
    ///
3717
    /// let mut source = [85, 85];
3718
    /// let _ = ZSTy::mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
3719
    /// ```
3720
    ///
3721
    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
3722
    ///
3723
    /// # Examples
3724
    ///
3725
    /// ```
3726
    /// use zerocopy::FromBytes;
3727
    /// # use zerocopy_derive::*;
3728
    ///
3729
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3730
    /// #[repr(C)]
3731
    /// struct PacketHeader {
3732
    ///     src_port: [u8; 2],
3733
    ///     dst_port: [u8; 2],
3734
    ///     length: [u8; 2],
3735
    ///     checksum: [u8; 2],
3736
    /// }
3737
    ///
3738
    /// // These bytes encode a `PacketHeader`.
3739
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
3740
    ///
3741
    /// let header = PacketHeader::mut_from_bytes(bytes).unwrap();
3742
    ///
3743
    /// assert_eq!(header.src_port, [0, 1]);
3744
    /// assert_eq!(header.dst_port, [2, 3]);
3745
    /// assert_eq!(header.length, [4, 5]);
3746
    /// assert_eq!(header.checksum, [6, 7]);
3747
    ///
3748
    /// header.checksum = [0, 0];
3749
    ///
3750
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0]);
3751
    /// ```
3752
    #[must_use = "has no side effects"]
3753
    #[inline]
3754
    fn mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, CastError<&mut [u8], Self>>
3755
    where
3756
        Self: IntoBytes + KnownLayout,
3757
    {
3758
        static_assert_dst_is_not_zst!(Self);
3759
        match Ptr::from_mut(source).try_cast_into_no_leftover::<_, BecauseExclusive>(None) {
3760
            Ok(ptr) => Ok(ptr.bikeshed_recall_valid().as_mut()),
3761
            Err(err) => Err(err.map_src(|src| src.as_mut())),
3762
        }
3763
    }
3764
3765
    /// Interprets the prefix of the given `source` as a `&mut Self` without
3766
    /// copying.
3767
    ///
3768
    /// This method computes the [largest possible size of `Self`][valid-size]
3769
    /// that can fit in the leading bytes of `source`, then attempts to return
3770
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3771
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
3772
    /// is not appropriately aligned, this returns `Err`. If [`Self:
3773
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3774
    /// error][size-error-from].
3775
    ///
3776
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3777
    ///
3778
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3779
    /// [self-unaligned]: Unaligned
3780
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3781
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3782
    ///
3783
    /// # Compile-Time Assertions
3784
    ///
3785
    /// This method cannot yet be used on unsized types whose dynamically-sized
3786
    /// component is zero-sized. See [`mut_from_suffix_with_elems`], which does
3787
    /// support such types. Attempting to use this method on such types results
3788
    /// in a compile-time assertion error; e.g.:
3789
    ///
3790
    /// ```compile_fail,E0080
3791
    /// use zerocopy::*;
3792
    /// # use zerocopy_derive::*;
3793
    ///
3794
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3795
    /// #[repr(C, packed)]
3796
    /// struct ZSTy {
3797
    ///     leading_sized: [u8; 2],
3798
    ///     trailing_dst: [()],
3799
    /// }
3800
    ///
3801
    /// let mut source = [85, 85];
3802
    /// let _ = ZSTy::mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
3803
    /// ```
3804
    ///
3805
    /// [`mut_from_suffix_with_elems`]: FromBytes::mut_from_suffix_with_elems
3806
    ///
3807
    /// # Examples
3808
    ///
3809
    /// ```
3810
    /// use zerocopy::FromBytes;
3811
    /// # use zerocopy_derive::*;
3812
    ///
3813
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3814
    /// #[repr(C)]
3815
    /// struct PacketHeader {
3816
    ///     src_port: [u8; 2],
3817
    ///     dst_port: [u8; 2],
3818
    ///     length: [u8; 2],
3819
    ///     checksum: [u8; 2],
3820
    /// }
3821
    ///
3822
    /// // These are more bytes than are needed to encode a `PacketHeader`.
3823
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3824
    ///
3825
    /// let (header, body) = PacketHeader::mut_from_prefix(bytes).unwrap();
3826
    ///
3827
    /// assert_eq!(header.src_port, [0, 1]);
3828
    /// assert_eq!(header.dst_port, [2, 3]);
3829
    /// assert_eq!(header.length, [4, 5]);
3830
    /// assert_eq!(header.checksum, [6, 7]);
3831
    /// assert_eq!(body, &[8, 9][..]);
3832
    ///
3833
    /// header.checksum = [0, 0];
3834
    /// body.fill(1);
3835
    ///
3836
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0, 1, 1]);
3837
    /// ```
3838
    #[must_use = "has no side effects"]
3839
    #[inline]
3840
    fn mut_from_prefix(
3841
        source: &mut [u8],
3842
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
3843
    where
3844
        Self: IntoBytes + KnownLayout,
3845
    {
3846
        static_assert_dst_is_not_zst!(Self);
3847
        mut_from_prefix_suffix(source, None, CastType::Prefix)
3848
    }
3849
3850
    /// Interprets the suffix of the given `source` as a `&mut Self` without
3851
    /// copying.
3852
    ///
3853
    /// This method computes the [largest possible size of `Self`][valid-size]
3854
    /// that can fit in the trailing bytes of `source`, then attempts to return
3855
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3856
    /// to the preceding bytes. If there are insufficient bytes, or if that
3857
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
3858
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
3859
    /// alignment error][size-error-from].
3860
    ///
3861
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3862
    ///
3863
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3864
    /// [self-unaligned]: Unaligned
3865
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3866
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3867
    ///
3868
    /// # Compile-Time Assertions
3869
    ///
3870
    /// This method cannot yet be used on unsized types whose dynamically-sized
3871
    /// component is zero-sized. Attempting to use this method on such types
3872
    /// results in a compile-time assertion error; e.g.:
3873
    ///
3874
    /// ```compile_fail,E0080
3875
    /// use zerocopy::*;
3876
    /// # use zerocopy_derive::*;
3877
    ///
3878
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3879
    /// #[repr(C, packed)]
3880
    /// struct ZSTy {
3881
    ///     leading_sized: [u8; 2],
3882
    ///     trailing_dst: [()],
3883
    /// }
3884
    ///
3885
    /// let mut source = [85, 85];
3886
    /// let _ = ZSTy::mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
3887
    /// ```
3888
    ///
3889
    /// # Examples
3890
    ///
3891
    /// ```
3892
    /// use zerocopy::FromBytes;
3893
    /// # use zerocopy_derive::*;
3894
    ///
3895
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3896
    /// #[repr(C)]
3897
    /// struct PacketTrailer {
3898
    ///     frame_check_sequence: [u8; 4],
3899
    /// }
3900
    ///
3901
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
3902
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3903
    ///
3904
    /// let (prefix, trailer) = PacketTrailer::mut_from_suffix(bytes).unwrap();
3905
    ///
3906
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
3907
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
3908
    ///
3909
    /// prefix.fill(0);
3910
    /// trailer.frame_check_sequence.fill(1);
3911
    ///
3912
    /// assert_eq!(bytes, [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]);
3913
    /// ```
3914
    #[must_use = "has no side effects"]
3915
    #[inline]
3916
    fn mut_from_suffix(
3917
        source: &mut [u8],
3918
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
3919
    where
3920
        Self: IntoBytes + KnownLayout,
3921
    {
3922
        static_assert_dst_is_not_zst!(Self);
3923
        mut_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
3924
    }
3925
3926
    /// Interprets the given `source` as a `&Self` with a DST length equal to
3927
    /// `count`.
3928
    ///
3929
    /// This method attempts to return a reference to `source` interpreted as a
3930
    /// `Self` with `count` trailing elements. If the length of `source` is not
3931
    /// equal to the size of `Self` with `count` elements, or if `source` is not
3932
    /// appropriately aligned, this returns `Err`. If [`Self:
3933
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3934
    /// error][size-error-from].
3935
    ///
3936
    /// [self-unaligned]: Unaligned
3937
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3938
    ///
3939
    /// # Examples
3940
    ///
3941
    /// ```
3942
    /// use zerocopy::FromBytes;
3943
    /// # use zerocopy_derive::*;
3944
    ///
3945
    /// # #[derive(Debug, PartialEq, Eq)]
3946
    /// #[derive(FromBytes, Immutable)]
3947
    /// #[repr(C)]
3948
    /// struct Pixel {
3949
    ///     r: u8,
3950
    ///     g: u8,
3951
    ///     b: u8,
3952
    ///     a: u8,
3953
    /// }
3954
    ///
3955
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
3956
    ///
3957
    /// let pixels = <[Pixel]>::ref_from_bytes_with_elems(bytes, 2).unwrap();
3958
    ///
3959
    /// assert_eq!(pixels, &[
3960
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
3961
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
3962
    /// ]);
3963
    ///
3964
    /// ```
3965
    ///
3966
    /// Since an explicit `count` is provided, this method supports types with
3967
    /// zero-sized trailing slice elements. Methods such as [`ref_from_bytes`]
3968
    /// which do not take an explicit count do not support such types.
3969
    ///
3970
    /// ```
3971
    /// use zerocopy::*;
3972
    /// # use zerocopy_derive::*;
3973
    ///
3974
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3975
    /// #[repr(C)]
3976
    /// struct ZSTy {
3977
    ///     leading_sized: [u8; 2],
3978
    ///     trailing_dst: [()],
3979
    /// }
3980
    ///
3981
    /// let src = &[85, 85][..];
3982
    /// let zsty = ZSTy::ref_from_bytes_with_elems(src, 42).unwrap();
3983
    /// assert_eq!(zsty.trailing_dst.len(), 42);
3984
    /// ```
3985
    ///
3986
    /// [`ref_from_bytes`]: FromBytes::ref_from_bytes
3987
    #[must_use = "has no side effects"]
3988
    #[inline]
3989
    fn ref_from_bytes_with_elems(
3990
        source: &[u8],
3991
        count: usize,
3992
    ) -> Result<&Self, CastError<&[u8], Self>>
3993
    where
3994
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
3995
    {
3996
        let source = Ptr::from_ref(source);
3997
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
3998
        match maybe_slf {
3999
            Ok(slf) => Ok(slf.bikeshed_recall_valid().as_ref()),
4000
            Err(err) => Err(err.map_src(|s| s.as_ref())),
4001
        }
4002
    }
4003
4004
    /// Interprets the prefix of the given `source` as a DST `&Self` with length
4005
    /// equal to `count`.
4006
    ///
4007
    /// This method attempts to return a reference to the prefix of `source`
4008
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4009
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
4010
    /// is not appropriately aligned, this returns `Err`. If [`Self:
4011
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4012
    /// error][size-error-from].
4013
    ///
4014
    /// [self-unaligned]: Unaligned
4015
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4016
    ///
4017
    /// # Examples
4018
    ///
4019
    /// ```
4020
    /// use zerocopy::FromBytes;
4021
    /// # use zerocopy_derive::*;
4022
    ///
4023
    /// # #[derive(Debug, PartialEq, Eq)]
4024
    /// #[derive(FromBytes, Immutable)]
4025
    /// #[repr(C)]
4026
    /// struct Pixel {
4027
    ///     r: u8,
4028
    ///     g: u8,
4029
    ///     b: u8,
4030
    ///     a: u8,
4031
    /// }
4032
    ///
4033
    /// // These are more bytes than are needed to encode two `Pixel`s.
4034
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4035
    ///
4036
    /// let (pixels, suffix) = <[Pixel]>::ref_from_prefix_with_elems(bytes, 2).unwrap();
4037
    ///
4038
    /// assert_eq!(pixels, &[
4039
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4040
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4041
    /// ]);
4042
    ///
4043
    /// assert_eq!(suffix, &[8, 9]);
4044
    /// ```
4045
    ///
4046
    /// Since an explicit `count` is provided, this method supports types with
4047
    /// zero-sized trailing slice elements. Methods such as [`ref_from_prefix`]
4048
    /// which do not take an explicit count do not support such types.
4049
    ///
4050
    /// ```
4051
    /// use zerocopy::*;
4052
    /// # use zerocopy_derive::*;
4053
    ///
4054
    /// #[derive(FromBytes, Immutable, KnownLayout)]
4055
    /// #[repr(C)]
4056
    /// struct ZSTy {
4057
    ///     leading_sized: [u8; 2],
4058
    ///     trailing_dst: [()],
4059
    /// }
4060
    ///
4061
    /// let src = &[85, 85][..];
4062
    /// let (zsty, _) = ZSTy::ref_from_prefix_with_elems(src, 42).unwrap();
4063
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4064
    /// ```
4065
    ///
4066
    /// [`ref_from_prefix`]: FromBytes::ref_from_prefix
4067
    #[must_use = "has no side effects"]
4068
    #[inline]
4069
    fn ref_from_prefix_with_elems(
4070
        source: &[u8],
4071
        count: usize,
4072
    ) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
4073
    where
4074
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4075
    {
4076
        ref_from_prefix_suffix(source, Some(count), CastType::Prefix)
4077
    }
4078
4079
    /// Interprets the suffix of the given `source` as a DST `&Self` with length
4080
    /// equal to `count`.
4081
    ///
4082
    /// This method attempts to return a reference to the suffix of `source`
4083
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4084
    /// to the preceding bytes. If there are insufficient bytes, or if that
4085
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
4086
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
4087
    /// alignment error][size-error-from].
4088
    ///
4089
    /// [self-unaligned]: Unaligned
4090
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4091
    ///
4092
    /// # Examples
4093
    ///
4094
    /// ```
4095
    /// use zerocopy::FromBytes;
4096
    /// # use zerocopy_derive::*;
4097
    ///
4098
    /// # #[derive(Debug, PartialEq, Eq)]
4099
    /// #[derive(FromBytes, Immutable)]
4100
    /// #[repr(C)]
4101
    /// struct Pixel {
4102
    ///     r: u8,
4103
    ///     g: u8,
4104
    ///     b: u8,
4105
    ///     a: u8,
4106
    /// }
4107
    ///
4108
    /// // These are more bytes than are needed to encode two `Pixel`s.
4109
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4110
    ///
4111
    /// let (prefix, pixels) = <[Pixel]>::ref_from_suffix_with_elems(bytes, 2).unwrap();
4112
    ///
4113
    /// assert_eq!(prefix, &[0, 1]);
4114
    ///
4115
    /// assert_eq!(pixels, &[
4116
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
4117
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
4118
    /// ]);
4119
    /// ```
4120
    ///
4121
    /// Since an explicit `count` is provided, this method supports types with
4122
    /// zero-sized trailing slice elements. Methods such as [`ref_from_suffix`]
4123
    /// which do not take an explicit count do not support such types.
4124
    ///
4125
    /// ```
4126
    /// use zerocopy::*;
4127
    /// # use zerocopy_derive::*;
4128
    ///
4129
    /// #[derive(FromBytes, Immutable, KnownLayout)]
4130
    /// #[repr(C)]
4131
    /// struct ZSTy {
4132
    ///     leading_sized: [u8; 2],
4133
    ///     trailing_dst: [()],
4134
    /// }
4135
    ///
4136
    /// let src = &[85, 85][..];
4137
    /// let (_, zsty) = ZSTy::ref_from_suffix_with_elems(src, 42).unwrap();
4138
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4139
    /// ```
4140
    ///
4141
    /// [`ref_from_suffix`]: FromBytes::ref_from_suffix
4142
    #[must_use = "has no side effects"]
4143
    #[inline]
4144
    fn ref_from_suffix_with_elems(
4145
        source: &[u8],
4146
        count: usize,
4147
    ) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
4148
    where
4149
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4150
    {
4151
        ref_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
4152
    }
4153
4154
    /// Interprets the given `source` as a `&mut Self` with a DST length equal
4155
    /// to `count`.
4156
    ///
4157
    /// This method attempts to return a reference to `source` interpreted as a
4158
    /// `Self` with `count` trailing elements. If the length of `source` is not
4159
    /// equal to the size of `Self` with `count` elements, or if `source` is not
4160
    /// appropriately aligned, this returns `Err`. If [`Self:
4161
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4162
    /// error][size-error-from].
4163
    ///
4164
    /// [self-unaligned]: Unaligned
4165
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4166
    ///
4167
    /// # Examples
4168
    ///
4169
    /// ```
4170
    /// use zerocopy::FromBytes;
4171
    /// # use zerocopy_derive::*;
4172
    ///
4173
    /// # #[derive(Debug, PartialEq, Eq)]
4174
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
4175
    /// #[repr(C)]
4176
    /// struct Pixel {
4177
    ///     r: u8,
4178
    ///     g: u8,
4179
    ///     b: u8,
4180
    ///     a: u8,
4181
    /// }
4182
    ///
4183
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
4184
    ///
4185
    /// let pixels = <[Pixel]>::mut_from_bytes_with_elems(bytes, 2).unwrap();
4186
    ///
4187
    /// assert_eq!(pixels, &[
4188
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4189
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4190
    /// ]);
4191
    ///
4192
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4193
    ///
4194
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0]);
4195
    /// ```
4196
    ///
4197
    /// Since an explicit `count` is provided, this method supports types with
4198
    /// zero-sized trailing slice elements. Methods such as [`mut_from_bytes`] which
4199
    /// do not take an explicit count do not support such types.
4200
    ///
4201
    /// ```
4202
    /// use zerocopy::*;
4203
    /// # use zerocopy_derive::*;
4204
    ///
4205
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4206
    /// #[repr(C, packed)]
4207
    /// struct ZSTy {
4208
    ///     leading_sized: [u8; 2],
4209
    ///     trailing_dst: [()],
4210
    /// }
4211
    ///
4212
    /// let src = &mut [85, 85][..];
4213
    /// let zsty = ZSTy::mut_from_bytes_with_elems(src, 42).unwrap();
4214
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4215
    /// ```
4216
    ///
4217
    /// [`mut_from_bytes`]: FromBytes::mut_from_bytes
4218
    #[must_use = "has no side effects"]
4219
    #[inline]
4220
    fn mut_from_bytes_with_elems(
4221
        source: &mut [u8],
4222
        count: usize,
4223
    ) -> Result<&mut Self, CastError<&mut [u8], Self>>
4224
    where
4225
        Self: IntoBytes + KnownLayout<PointerMetadata = usize> + Immutable,
4226
    {
4227
        let source = Ptr::from_mut(source);
4228
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
4229
        match maybe_slf {
4230
            Ok(slf) => Ok(slf.bikeshed_recall_valid().as_mut()),
4231
            Err(err) => Err(err.map_src(|s| s.as_mut())),
4232
        }
4233
    }
4234
4235
    /// Interprets the prefix of the given `source` as a `&mut Self` with DST
4236
    /// length equal to `count`.
4237
    ///
4238
    /// This method attempts to return a reference to the prefix of `source`
4239
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4240
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
4241
    /// is not appropriately aligned, this returns `Err`. If [`Self:
4242
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4243
    /// error][size-error-from].
4244
    ///
4245
    /// [self-unaligned]: Unaligned
4246
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4247
    ///
4248
    /// # Examples
4249
    ///
4250
    /// ```
4251
    /// use zerocopy::FromBytes;
4252
    /// # use zerocopy_derive::*;
4253
    ///
4254
    /// # #[derive(Debug, PartialEq, Eq)]
4255
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
4256
    /// #[repr(C)]
4257
    /// struct Pixel {
4258
    ///     r: u8,
4259
    ///     g: u8,
4260
    ///     b: u8,
4261
    ///     a: u8,
4262
    /// }
4263
    ///
4264
    /// // These are more bytes than are needed to encode two `Pixel`s.
4265
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4266
    ///
4267
    /// let (pixels, suffix) = <[Pixel]>::mut_from_prefix_with_elems(bytes, 2).unwrap();
4268
    ///
4269
    /// assert_eq!(pixels, &[
4270
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4271
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4272
    /// ]);
4273
    ///
4274
    /// assert_eq!(suffix, &[8, 9]);
4275
    ///
4276
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4277
    /// suffix.fill(1);
4278
    ///
4279
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0, 1, 1]);
4280
    /// ```
4281
    ///
4282
    /// Since an explicit `count` is provided, this method supports types with
4283
    /// zero-sized trailing slice elements. Methods such as [`mut_from_prefix`]
4284
    /// which do not take an explicit count do not support such types.
4285
    ///
4286
    /// ```
4287
    /// use zerocopy::*;
4288
    /// # use zerocopy_derive::*;
4289
    ///
4290
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4291
    /// #[repr(C, packed)]
4292
    /// struct ZSTy {
4293
    ///     leading_sized: [u8; 2],
4294
    ///     trailing_dst: [()],
4295
    /// }
4296
    ///
4297
    /// let src = &mut [85, 85][..];
4298
    /// let (zsty, _) = ZSTy::mut_from_prefix_with_elems(src, 42).unwrap();
4299
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4300
    /// ```
4301
    ///
4302
    /// [`mut_from_prefix`]: FromBytes::mut_from_prefix
4303
    #[must_use = "has no side effects"]
4304
    #[inline]
4305
    fn mut_from_prefix_with_elems(
4306
        source: &mut [u8],
4307
        count: usize,
4308
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
4309
    where
4310
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4311
    {
4312
        mut_from_prefix_suffix(source, Some(count), CastType::Prefix)
4313
    }
4314
4315
    /// Interprets the suffix of the given `source` as a `&mut Self` with DST
4316
    /// length equal to `count`.
4317
    ///
4318
    /// This method attempts to return a reference to the suffix of `source`
4319
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4320
    /// to the preceding bytes. If there are insufficient bytes, or if that
4321
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
4322
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
4323
    /// alignment error][size-error-from].
4324
    ///
4325
    /// [self-unaligned]: Unaligned
4326
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4327
    ///
4328
    /// # Examples
4329
    ///
4330
    /// ```
4331
    /// use zerocopy::FromBytes;
4332
    /// # use zerocopy_derive::*;
4333
    ///
4334
    /// # #[derive(Debug, PartialEq, Eq)]
4335
    /// #[derive(FromBytes, IntoBytes, Immutable)]
4336
    /// #[repr(C)]
4337
    /// struct Pixel {
4338
    ///     r: u8,
4339
    ///     g: u8,
4340
    ///     b: u8,
4341
    ///     a: u8,
4342
    /// }
4343
    ///
4344
    /// // These are more bytes than are needed to encode two `Pixel`s.
4345
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4346
    ///
4347
    /// let (prefix, pixels) = <[Pixel]>::mut_from_suffix_with_elems(bytes, 2).unwrap();
4348
    ///
4349
    /// assert_eq!(prefix, &[0, 1]);
4350
    ///
4351
    /// assert_eq!(pixels, &[
4352
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
4353
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
4354
    /// ]);
4355
    ///
4356
    /// prefix.fill(9);
4357
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4358
    ///
4359
    /// assert_eq!(bytes, [9, 9, 2, 3, 4, 5, 0, 0, 0, 0]);
4360
    /// ```
4361
    ///
4362
    /// Since an explicit `count` is provided, this method supports types with
4363
    /// zero-sized trailing slice elements. Methods such as [`mut_from_suffix`]
4364
    /// which do not take an explicit count do not support such types.
4365
    ///
4366
    /// ```
4367
    /// use zerocopy::*;
4368
    /// # use zerocopy_derive::*;
4369
    ///
4370
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4371
    /// #[repr(C, packed)]
4372
    /// struct ZSTy {
4373
    ///     leading_sized: [u8; 2],
4374
    ///     trailing_dst: [()],
4375
    /// }
4376
    ///
4377
    /// let src = &mut [85, 85][..];
4378
    /// let (_, zsty) = ZSTy::mut_from_suffix_with_elems(src, 42).unwrap();
4379
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4380
    /// ```
4381
    ///
4382
    /// [`mut_from_suffix`]: FromBytes::mut_from_suffix
4383
    #[must_use = "has no side effects"]
4384
    #[inline]
4385
    fn mut_from_suffix_with_elems(
4386
        source: &mut [u8],
4387
        count: usize,
4388
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
4389
    where
4390
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4391
    {
4392
        mut_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
4393
    }
4394
4395
    /// Reads a copy of `Self` from the given `source`.
4396
    ///
4397
    /// If `source.len() != size_of::<Self>()`, `read_from_bytes` returns `Err`.
4398
    ///
4399
    /// # Examples
4400
    ///
4401
    /// ```
4402
    /// use zerocopy::FromBytes;
4403
    /// # use zerocopy_derive::*;
4404
    ///
4405
    /// #[derive(FromBytes)]
4406
    /// #[repr(C)]
4407
    /// struct PacketHeader {
4408
    ///     src_port: [u8; 2],
4409
    ///     dst_port: [u8; 2],
4410
    ///     length: [u8; 2],
4411
    ///     checksum: [u8; 2],
4412
    /// }
4413
    ///
4414
    /// // These bytes encode a `PacketHeader`.
4415
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
4416
    ///
4417
    /// let header = PacketHeader::read_from_bytes(bytes).unwrap();
4418
    ///
4419
    /// assert_eq!(header.src_port, [0, 1]);
4420
    /// assert_eq!(header.dst_port, [2, 3]);
4421
    /// assert_eq!(header.length, [4, 5]);
4422
    /// assert_eq!(header.checksum, [6, 7]);
4423
    /// ```
4424
    #[must_use = "has no side effects"]
4425
    #[inline]
4426
    fn read_from_bytes(source: &[u8]) -> Result<Self, SizeError<&[u8], Self>>
4427
    where
4428
        Self: Sized,
4429
    {
4430
        match Ref::<_, Unalign<Self>>::sized_from(source) {
4431
            Ok(r) => Ok(Ref::read(&r).into_inner()),
4432
            Err(CastError::Size(e)) => Err(e.with_dst()),
4433
            Err(CastError::Alignment(_)) => {
4434
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4435
                // `Ref::sized_from` cannot fail due to unmet alignment
4436
                // requirements.
4437
                unsafe { core::hint::unreachable_unchecked() }
4438
            }
4439
            Err(CastError::Validity(i)) => match i {},
4440
        }
4441
    }
4442
4443
    /// Reads a copy of `Self` from the prefix of the given `source`.
4444
    ///
4445
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
4446
    /// of `source`, returning that `Self` and any remaining bytes. If
4447
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4448
    ///
4449
    /// # Examples
4450
    ///
4451
    /// ```
4452
    /// use zerocopy::FromBytes;
4453
    /// # use zerocopy_derive::*;
4454
    ///
4455
    /// #[derive(FromBytes)]
4456
    /// #[repr(C)]
4457
    /// struct PacketHeader {
4458
    ///     src_port: [u8; 2],
4459
    ///     dst_port: [u8; 2],
4460
    ///     length: [u8; 2],
4461
    ///     checksum: [u8; 2],
4462
    /// }
4463
    ///
4464
    /// // These are more bytes than are needed to encode a `PacketHeader`.
4465
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4466
    ///
4467
    /// let (header, body) = PacketHeader::read_from_prefix(bytes).unwrap();
4468
    ///
4469
    /// assert_eq!(header.src_port, [0, 1]);
4470
    /// assert_eq!(header.dst_port, [2, 3]);
4471
    /// assert_eq!(header.length, [4, 5]);
4472
    /// assert_eq!(header.checksum, [6, 7]);
4473
    /// assert_eq!(body, [8, 9]);
4474
    /// ```
4475
    #[must_use = "has no side effects"]
4476
    #[inline]
4477
    fn read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>>
4478
    where
4479
        Self: Sized,
4480
    {
4481
        match Ref::<_, Unalign<Self>>::sized_from_prefix(source) {
4482
            Ok((r, suffix)) => Ok((Ref::read(&r).into_inner(), suffix)),
4483
            Err(CastError::Size(e)) => Err(e.with_dst()),
4484
            Err(CastError::Alignment(_)) => {
4485
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4486
                // `Ref::sized_from_prefix` cannot fail due to unmet alignment
4487
                // requirements.
4488
                unsafe { core::hint::unreachable_unchecked() }
4489
            }
4490
            Err(CastError::Validity(i)) => match i {},
4491
        }
4492
    }
4493
4494
    /// Reads a copy of `Self` from the suffix of the given `source`.
4495
    ///
4496
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
4497
    /// of `source`, returning that `Self` and any preceding bytes. If
4498
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4499
    ///
4500
    /// # Examples
4501
    ///
4502
    /// ```
4503
    /// use zerocopy::FromBytes;
4504
    /// # use zerocopy_derive::*;
4505
    ///
4506
    /// #[derive(FromBytes)]
4507
    /// #[repr(C)]
4508
    /// struct PacketTrailer {
4509
    ///     frame_check_sequence: [u8; 4],
4510
    /// }
4511
    ///
4512
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
4513
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4514
    ///
4515
    /// let (prefix, trailer) = PacketTrailer::read_from_suffix(bytes).unwrap();
4516
    ///
4517
    /// assert_eq!(prefix, [0, 1, 2, 3, 4, 5]);
4518
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
4519
    /// ```
4520
    #[must_use = "has no side effects"]
4521
    #[inline]
4522
    fn read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>>
4523
    where
4524
        Self: Sized,
4525
    {
4526
        match Ref::<_, Unalign<Self>>::sized_from_suffix(source) {
4527
            Ok((prefix, r)) => Ok((prefix, Ref::read(&r).into_inner())),
4528
            Err(CastError::Size(e)) => Err(e.with_dst()),
4529
            Err(CastError::Alignment(_)) => {
4530
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4531
                // `Ref::sized_from_suffix` cannot fail due to unmet alignment
4532
                // requirements.
4533
                unsafe { core::hint::unreachable_unchecked() }
4534
            }
4535
            Err(CastError::Validity(i)) => match i {},
4536
        }
4537
    }
4538
4539
    /// Reads a copy of `Self` from an `io::Read`.
4540
    ///
4541
    /// This is useful for interfacing with operating system byte sources (files,
4542
    /// sockets, etc.).
4543
    ///
4544
    /// # Examples
4545
    ///
4546
    /// ```no_run
4547
    /// use zerocopy::{byteorder::big_endian::*, FromBytes};
4548
    /// use std::fs::File;
4549
    /// # use zerocopy_derive::*;
4550
    ///
4551
    /// #[derive(FromBytes)]
4552
    /// #[repr(C)]
4553
    /// struct BitmapFileHeader {
4554
    ///     signature: [u8; 2],
4555
    ///     size: U32,
4556
    ///     reserved: U64,
4557
    ///     offset: U64,
4558
    /// }
4559
    ///
4560
    /// let mut file = File::open("image.bin").unwrap();
4561
    /// let header = BitmapFileHeader::read_from_io(&mut file).unwrap();
4562
    /// ```
4563
    #[cfg(feature = "std")]
4564
    #[inline(always)]
4565
    fn read_from_io<R>(mut src: R) -> io::Result<Self>
4566
    where
4567
        Self: Sized,
4568
        R: io::Read,
4569
    {
4570
        let mut buf = CoreMaybeUninit::<Self>::zeroed();
4571
        let ptr = Ptr::from_mut(&mut buf);
4572
        // SAFETY: `buf` consists entirely of initialized, zeroed bytes.
4573
        let ptr = unsafe { ptr.assume_validity::<invariant::Initialized>() };
4574
        let ptr = ptr.as_bytes::<BecauseExclusive>();
4575
        src.read_exact(ptr.as_mut())?;
4576
        // SAFETY: `buf` entirely consists of initialized bytes, and `Self` is
4577
        // `FromBytes`.
4578
        Ok(unsafe { buf.assume_init() })
4579
    }
4580
4581
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_bytes`")]
4582
    #[doc(hidden)]
4583
    #[must_use = "has no side effects"]
4584
    #[inline(always)]
4585
    fn ref_from(source: &[u8]) -> Option<&Self>
4586
    where
4587
        Self: KnownLayout + Immutable,
4588
    {
4589
        Self::ref_from_bytes(source).ok()
4590
    }
4591
4592
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_bytes`")]
4593
    #[doc(hidden)]
4594
    #[must_use = "has no side effects"]
4595
    #[inline(always)]
4596
    fn mut_from(source: &mut [u8]) -> Option<&mut Self>
4597
    where
4598
        Self: KnownLayout + IntoBytes,
4599
    {
4600
        Self::mut_from_bytes(source).ok()
4601
    }
4602
4603
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_prefix_with_elems`")]
4604
    #[doc(hidden)]
4605
    #[must_use = "has no side effects"]
4606
    #[inline(always)]
4607
    fn slice_from_prefix(source: &[u8], count: usize) -> Option<(&[Self], &[u8])>
4608
    where
4609
        Self: Sized + Immutable,
4610
    {
4611
        <[Self]>::ref_from_prefix_with_elems(source, count).ok()
4612
    }
4613
4614
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_suffix_with_elems`")]
4615
    #[doc(hidden)]
4616
    #[must_use = "has no side effects"]
4617
    #[inline(always)]
4618
    fn slice_from_suffix(source: &[u8], count: usize) -> Option<(&[u8], &[Self])>
4619
    where
4620
        Self: Sized + Immutable,
4621
    {
4622
        <[Self]>::ref_from_suffix_with_elems(source, count).ok()
4623
    }
4624
4625
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_prefix_with_elems`")]
4626
    #[doc(hidden)]
4627
    #[must_use = "has no side effects"]
4628
    #[inline(always)]
4629
    fn mut_slice_from_prefix(source: &mut [u8], count: usize) -> Option<(&mut [Self], &mut [u8])>
4630
    where
4631
        Self: Sized + IntoBytes,
4632
    {
4633
        <[Self]>::mut_from_prefix_with_elems(source, count).ok()
4634
    }
4635
4636
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_suffix_with_elems`")]
4637
    #[doc(hidden)]
4638
    #[must_use = "has no side effects"]
4639
    #[inline(always)]
4640
    fn mut_slice_from_suffix(source: &mut [u8], count: usize) -> Option<(&mut [u8], &mut [Self])>
4641
    where
4642
        Self: Sized + IntoBytes,
4643
    {
4644
        <[Self]>::mut_from_suffix_with_elems(source, count).ok()
4645
    }
4646
4647
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::read_from_bytes`")]
4648
    #[doc(hidden)]
4649
    #[must_use = "has no side effects"]
4650
    #[inline(always)]
4651
    fn read_from(source: &[u8]) -> Option<Self>
4652
    where
4653
        Self: Sized,
4654
    {
4655
        Self::read_from_bytes(source).ok()
4656
    }
4657
}
4658
4659
/// Interprets the given affix of the given bytes as a `&T`.
4660
///
4661
/// This function computes the largest possible size of `T` that can fit in the
4662
/// prefix or suffix bytes of `source`, then attempts to return both a reference
4663
/// to those bytes interpreted as a `T`, and a reference to the excess bytes.
4664
/// If there are insufficient bytes, or if that affix of `source` is not
4665
/// appropriately aligned, this returns `Err`.
4666
#[inline(always)]
4667
fn ref_from_prefix_suffix<T: FromBytes + KnownLayout + Immutable + ?Sized>(
4668
    source: &[u8],
4669
    meta: Option<T::PointerMetadata>,
4670
    cast_type: CastType,
4671
) -> Result<(&T, &[u8]), CastError<&[u8], T>> {
4672
    let (slf, prefix_suffix) = Ptr::from_ref(source)
4673
        .try_cast_into::<_, BecauseImmutable>(cast_type, meta)
4674
        .map_err(|err| err.map_src(|s| s.as_ref()))?;
4675
    Ok((slf.bikeshed_recall_valid().as_ref(), prefix_suffix.as_ref()))
4676
}
4677
4678
/// Interprets the given affix of the given bytes as a `&mut T` without
4679
/// copying.
4680
///
4681
/// This function computes the largest possible size of `T` that can fit in the
4682
/// prefix or suffix bytes of `source`, then attempts to return both a reference
4683
/// to those bytes interpreted as a `T`, and a reference to the excess bytes.
4684
/// If there are insufficient bytes, or if that affix of `source` is not
4685
/// appropriately aligned, this returns `Err`.
4686
#[inline(always)]
4687
fn mut_from_prefix_suffix<T: FromBytes + KnownLayout + ?Sized>(
4688
    source: &mut [u8],
4689
    meta: Option<T::PointerMetadata>,
4690
    cast_type: CastType,
4691
) -> Result<(&mut T, &mut [u8]), CastError<&mut [u8], T>> {
4692
    let (slf, prefix_suffix) = Ptr::from_mut(source)
4693
        .try_cast_into::<_, BecauseExclusive>(cast_type, meta)
4694
        .map_err(|err| err.map_src(|s| s.as_mut()))?;
4695
    Ok((slf.bikeshed_recall_valid().as_mut(), prefix_suffix.as_mut()))
4696
}
4697
4698
/// Analyzes whether a type is [`IntoBytes`].
4699
///
4700
/// This derive analyzes, at compile time, whether the annotated type satisfies
4701
/// the [safety conditions] of `IntoBytes` and implements `IntoBytes` if it is
4702
/// sound to do so. This derive can be applied to structs and enums (see below
4703
/// for union support); e.g.:
4704
///
4705
/// ```
4706
/// # use zerocopy_derive::{IntoBytes};
4707
/// #[derive(IntoBytes)]
4708
/// #[repr(C)]
4709
/// struct MyStruct {
4710
/// # /*
4711
///     ...
4712
/// # */
4713
/// }
4714
///
4715
/// #[derive(IntoBytes)]
4716
/// #[repr(u8)]
4717
/// enum MyEnum {
4718
/// #   Variant,
4719
/// # /*
4720
///     ...
4721
/// # */
4722
/// }
4723
/// ```
4724
///
4725
/// [safety conditions]: trait@IntoBytes#safety
4726
///
4727
/// # Error Messages
4728
///
4729
/// On Rust toolchains prior to 1.78.0, due to the way that the custom derive
4730
/// for `IntoBytes` is implemented, you may get an error like this:
4731
///
4732
/// ```text
4733
/// error[E0277]: the trait bound `(): PaddingFree<Foo, true>` is not satisfied
4734
///   --> lib.rs:23:10
4735
///    |
4736
///  1 | #[derive(IntoBytes)]
4737
///    |          ^^^^^^^^^ the trait `PaddingFree<Foo, true>` is not implemented for `()`
4738
///    |
4739
///    = help: the following implementations were found:
4740
///                   <() as PaddingFree<T, false>>
4741
/// ```
4742
///
4743
/// This error indicates that the type being annotated has padding bytes, which
4744
/// is illegal for `IntoBytes` types. Consider reducing the alignment of some
4745
/// fields by using types in the [`byteorder`] module, wrapping field types in
4746
/// [`Unalign`], adding explicit struct fields where those padding bytes would
4747
/// be, or using `#[repr(packed)]`. See the Rust Reference's page on [type
4748
/// layout] for more information about type layout and padding.
4749
///
4750
/// [type layout]: https://doc.rust-lang.org/reference/type-layout.html
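///
/// For illustration, a minimal sketch of the "explicit padding field" fix (the
/// `Record` type here is hypothetical, not part of zerocopy): a `repr(C)`
/// struct that would otherwise end in one byte of padding is made padding-free
/// by spelling that byte out as a field.
///
/// ```
/// # use zerocopy_derive::IntoBytes;
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct Record {
///     id: u16,
///     kind: u8,
///     // Without this explicit field, `Record` would have one byte of
///     // trailing padding, and the derive would reject it.
///     _padding: u8,
/// }
/// ```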
4751
///
4752
/// # Unions
4753
///
4754
/// Currently, union bit validity is [up in the air][union-validity], and so
4755
/// zerocopy does not support `#[derive(IntoBytes)]` on unions by default.
4756
/// However, implementing `IntoBytes` on a union type is likely sound on all
4757
/// existing Rust toolchains - it's just that it may become unsound in the
4758
/// future. You can opt-in to `#[derive(IntoBytes)]` support on unions by
4759
/// passing the unstable `zerocopy_derive_union_into_bytes` cfg:
4760
///
4761
/// ```shell
4762
/// $ RUSTFLAGS='--cfg zerocopy_derive_union_into_bytes' cargo build
4763
/// ```
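///
/// With that cfg set, the derive can then be applied to a union. The following
/// is a minimal, hypothetical sketch (marked `ignore` because it only compiles
/// when the cfg above is passed); it assumes a `repr(C)` union whose fields
/// are all `IntoBytes` and leave no padding:
///
/// ```ignore
/// # use zerocopy_derive::IntoBytes;
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// union SignedOrUnsigned {
///     signed: i8,
///     unsigned: u8,
/// }
/// ```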
4764
///
4765
/// However, it is your responsibility to ensure that this derive is sound on
4766
/// the specific versions of the Rust toolchain you are using! We make no
4767
/// stability or soundness guarantees regarding this cfg, and may remove it at
4768
/// any point.
4769
///
4770
/// We are actively working with Rust to stabilize the necessary language
4771
/// guarantees to support this in a forwards-compatible way, which will enable
4772
/// us to remove the cfg gate. As part of this effort, we need to know how much
4773
/// demand there is for this feature. If you would like to use `IntoBytes` on
4774
/// unions, [please let us know][discussion].
4775
///
4776
/// [union-validity]: https://github.com/rust-lang/unsafe-code-guidelines/issues/438
4777
/// [discussion]: https://github.com/google/zerocopy/discussions/1802
4778
///
4779
/// # Analysis
4780
///
4781
/// *This section describes, roughly, the analysis performed by this derive to
4782
/// determine whether it is sound to implement `IntoBytes` for a given type.
4783
/// Unless you are modifying the implementation of this derive, or attempting to
4784
/// manually implement `IntoBytes` for a type yourself, you don't need to read
4785
/// this section.*
4786
///
4787
/// If a type has the following properties, then this derive can implement
4788
/// `IntoBytes` for that type:
4789
///
4790
/// - If the type is a struct, its fields must be [`IntoBytes`]. Additionally:
4791
///     - if the type is `repr(transparent)` or `repr(packed)`, it is
4792
///       [`IntoBytes`] if its fields are [`IntoBytes`]; else,
4793
///     - if the type is `repr(C)` with at most one field, it is [`IntoBytes`]
4794
///       if its field is [`IntoBytes`]; else,
4795
///     - if the type has no generic parameters, it is [`IntoBytes`] if the type
4796
///       is sized and has no padding bytes; else,
4797
///     - if the type is `repr(C)`, its fields must be [`Unaligned`].
4798
/// - If the type is an enum:
4799
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
4800
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
4801
///   - It must have no padding bytes.
4802
///   - Its fields must be [`IntoBytes`].
4803
///
4804
/// This analysis is subject to change. Unsafe code may *only* rely on the
4805
/// documented [safety conditions] of `IntoBytes`, and must *not* rely on the
4806
/// implementation details of this derive.
4807
///
4808
/// [Rust Reference]: https://doc.rust-lang.org/reference/type-layout.html
4809
#[cfg(any(feature = "derive", test))]
4810
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
4811
pub use zerocopy_derive::IntoBytes;
4812
4813
/// Types that can be converted to an immutable slice of initialized bytes.
4814
///
4815
/// Any `IntoBytes` type can be converted to a slice of initialized bytes of the
4816
/// same size. This is useful for efficiently serializing structured data as raw
4817
/// bytes.
4818
///
4819
/// # Implementation
4820
///
4821
/// **Do not implement this trait yourself!** Instead, use
4822
/// [`#[derive(IntoBytes)]`][derive]; e.g.:
4823
///
4824
/// ```
4825
/// # use zerocopy_derive::IntoBytes;
4826
/// #[derive(IntoBytes)]
4827
/// #[repr(C)]
4828
/// struct MyStruct {
4829
/// # /*
4830
///     ...
4831
/// # */
4832
/// }
4833
///
4834
/// #[derive(IntoBytes)]
4835
/// #[repr(u8)]
4836
/// enum MyEnum {
4837
/// #   Variant0,
4838
/// # /*
4839
///     ...
4840
/// # */
4841
/// }
4842
/// ```
4843
///
4844
/// This derive performs a sophisticated, compile-time safety analysis to
4845
/// determine whether a type is `IntoBytes`. See the [derive
4846
/// documentation][derive] for guidance on how to interpret error messages
4847
/// produced by the derive's analysis.
4848
///
4849
/// # Safety
4850
///
4851
/// *This section describes what is required in order for `T: IntoBytes`, and
4852
/// what unsafe code may assume of such types. If you don't plan on implementing
4853
/// `IntoBytes` manually, and you don't plan on writing unsafe code that
4854
/// operates on `IntoBytes` types, then you don't need to read this section.*
4855
///
4856
/// If `T: IntoBytes`, then unsafe code may assume that it is sound to treat any
4857
/// `t: T` as an immutable `[u8]` of length `size_of_val(t)`. If a type is
4858
/// marked as `IntoBytes` which violates this contract, it may cause undefined
4859
/// behavior.
4860
///
4861
/// `#[derive(IntoBytes)]` only permits [types which satisfy these
4862
/// requirements][derive-analysis].
4863
///
4864
#[cfg_attr(
4865
    feature = "derive",
4866
    doc = "[derive]: zerocopy_derive::IntoBytes",
4867
    doc = "[derive-analysis]: zerocopy_derive::IntoBytes#analysis"
4868
)]
4869
#[cfg_attr(
4870
    not(feature = "derive"),
4871
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html"),
4872
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html#analysis"),
4873
)]
4874
#[cfg_attr(
4875
    zerocopy_diagnostic_on_unimplemented_1_78_0,
4876
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(IntoBytes)]` to `{Self}`")
4877
)]
4878
pub unsafe trait IntoBytes {
4879
    // The `Self: Sized` bound makes it so that this function doesn't prevent
4880
    // `IntoBytes` from being object safe. Note that other `IntoBytes` methods
4881
    // prevent object safety, but those provide a benefit in exchange for object
4882
    // safety. If at some point we remove those methods, change their type
4883
    // signatures, or move them out of this trait so that `IntoBytes` is object
4884
    // safe again, it's important that this function not prevent object safety.
4885
    #[doc(hidden)]
4886
    fn only_derive_is_allowed_to_implement_this_trait()
4887
    where
4888
        Self: Sized;
4889
4890
    /// Gets the bytes of this value.
4891
    ///
4892
    /// # Examples
4893
    ///
4894
    /// ```
4895
    /// use zerocopy::IntoBytes;
4896
    /// # use zerocopy_derive::*;
4897
    ///
4898
    /// #[derive(IntoBytes, Immutable)]
4899
    /// #[repr(C)]
4900
    /// struct PacketHeader {
4901
    ///     src_port: [u8; 2],
4902
    ///     dst_port: [u8; 2],
4903
    ///     length: [u8; 2],
4904
    ///     checksum: [u8; 2],
4905
    /// }
4906
    ///
4907
    /// let header = PacketHeader {
4908
    ///     src_port: [0, 1],
4909
    ///     dst_port: [2, 3],
4910
    ///     length: [4, 5],
4911
    ///     checksum: [6, 7],
4912
    /// };
4913
    ///
4914
    /// let bytes = header.as_bytes();
4915
    ///
4916
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
4917
    /// ```
4918
    #[must_use = "has no side effects"]
4919
    #[inline(always)]
4920
0
    fn as_bytes(&self) -> &[u8]
4921
0
    where
4922
0
        Self: Immutable,
4923
0
    {
4924
0
        // Note that this method does not have a `Self: Sized` bound;
4925
0
        // `size_of_val` works for unsized values too.
4926
0
        let len = mem::size_of_val(self);
4927
0
        let slf: *const Self = self;
4928
0
4929
0
        // SAFETY:
4930
0
        // - `slf.cast::<u8>()` is valid for reads for `len * size_of::<u8>()`
4931
0
        //   many bytes because...
4932
0
        //   - `slf` is the same pointer as `self`, and `self` is a reference
4933
0
        //     which points to an object whose size is `len`. Thus...
4934
0
        //     - The entire region of `len` bytes starting at `slf` is contained
4935
0
        //       within a single allocation.
4936
0
        //     - `slf` is non-null.
4937
0
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
4938
0
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
4939
0
        //   initialized.
4940
0
        // - Since `slf` is derived from `self`, and `self` is an immutable
4941
0
        //   reference, the only other references to this memory region that
4942
0
        //   could exist are other immutable references, and those don't allow
4943
0
        //   mutation. `Self: Immutable` prohibits types which contain
4944
0
        //   `UnsafeCell`s, which are the only types for which this rule
4945
0
        //   wouldn't be sufficient.
4946
0
        // - The total size of the resulting slice is no larger than
4947
0
        //   `isize::MAX` because no allocation produced by safe code can be
4948
0
        //   larger than `isize::MAX`.
4949
0
        //
4950
0
        // TODO(#429): Add references to docs and quotes.
4951
0
        unsafe { slice::from_raw_parts(slf.cast::<u8>(), len) }
4952
0
    }
Unexecuted instantiation: <[u32] as zerocopy::IntoBytes>::as_bytes
Unexecuted instantiation: <[u64] as zerocopy::IntoBytes>::as_bytes
4953
4954
    /// Gets the bytes of this value mutably.
4955
    ///
4956
    /// # Examples
4957
    ///
4958
    /// ```
4959
    /// use zerocopy::IntoBytes;
4960
    /// # use zerocopy_derive::*;
4961
    ///
4962
    /// # #[derive(Eq, PartialEq, Debug)]
4963
    /// #[derive(FromBytes, IntoBytes, Immutable)]
4964
    /// #[repr(C)]
4965
    /// struct PacketHeader {
4966
    ///     src_port: [u8; 2],
4967
    ///     dst_port: [u8; 2],
4968
    ///     length: [u8; 2],
4969
    ///     checksum: [u8; 2],
4970
    /// }
4971
    ///
4972
    /// let mut header = PacketHeader {
4973
    ///     src_port: [0, 1],
4974
    ///     dst_port: [2, 3],
4975
    ///     length: [4, 5],
4976
    ///     checksum: [6, 7],
4977
    /// };
4978
    ///
4979
    /// let bytes = header.as_mut_bytes();
4980
    ///
4981
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
4982
    ///
4983
    /// bytes.reverse();
4984
    ///
4985
    /// assert_eq!(header, PacketHeader {
4986
    ///     src_port: [7, 6],
4987
    ///     dst_port: [5, 4],
4988
    ///     length: [3, 2],
4989
    ///     checksum: [1, 0],
4990
    /// });
4991
    /// ```
4992
    #[must_use = "has no side effects"]
4993
    #[inline(always)]
4994
    fn as_mut_bytes(&mut self) -> &mut [u8]
4995
    where
4996
        Self: FromBytes,
4997
    {
4998
        // Note that this method does not have a `Self: Sized` bound;
4999
        // `size_of_val` works for unsized values too.
5000
        let len = mem::size_of_val(self);
5001
        let slf: *mut Self = self;
5002
5003
        // SAFETY:
5004
        // - `slf.cast::<u8>()` is valid for reads and writes for `len *
5005
        //   size_of::<u8>()` many bytes because...
5006
        //   - `slf` is the same pointer as `self`, and `self` is a reference
5007
        //     which points to an object whose size is `len`. Thus...
5008
        //     - The entire region of `len` bytes starting at `slf` is contained
5009
        //       within a single allocation.
5010
        //     - `slf` is non-null.
5011
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5012
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5013
        //   initialized.
5014
        // - `Self: FromBytes` ensures that no write to this memory region
5015
        //   could result in it containing an invalid `Self`.
5016
        // - Since `slf` is derived from `self`, and `self` is a mutable
5017
        //   reference, no other references to this memory region can exist.
5018
        // - The total size of the resulting slice is no larger than
5019
        //   `isize::MAX` because no allocation produced by safe code can be
5020
        //   larger than `isize::MAX`.
5021
        //
5022
        // TODO(#429): Add references to docs and quotes.
5023
        unsafe { slice::from_raw_parts_mut(slf.cast::<u8>(), len) }
5024
    }
5025
5026
    /// Writes a copy of `self` to `dst`.
5027
    ///
5028
    /// If `dst.len() != size_of_val(self)`, `write_to` returns `Err`.
5029
    ///
5030
    /// # Examples
5031
    ///
5032
    /// ```
5033
    /// use zerocopy::IntoBytes;
5034
    /// # use zerocopy_derive::*;
5035
    ///
5036
    /// #[derive(IntoBytes, Immutable)]
5037
    /// #[repr(C)]
5038
    /// struct PacketHeader {
5039
    ///     src_port: [u8; 2],
5040
    ///     dst_port: [u8; 2],
5041
    ///     length: [u8; 2],
5042
    ///     checksum: [u8; 2],
5043
    /// }
5044
    ///
5045
    /// let header = PacketHeader {
5046
    ///     src_port: [0, 1],
5047
    ///     dst_port: [2, 3],
5048
    ///     length: [4, 5],
5049
    ///     checksum: [6, 7],
5050
    /// };
5051
    ///
5052
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0];
5053
    ///
5054
    /// header.write_to(&mut bytes[..]);
5055
    ///
5056
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5057
    /// ```
5058
    ///
5059
    /// If too many or too few target bytes are provided, `write_to` returns
5060
    /// `Err` and leaves the target bytes unmodified:
5061
    ///
5062
    /// ```
5063
    /// # use zerocopy::IntoBytes;
5064
    /// # let header = u128::MAX;
5065
    /// let mut excessive_bytes = &mut [0u8; 128][..];
5066
    ///
5067
    /// let write_result = header.write_to(excessive_bytes);
5068
    ///
5069
    /// assert!(write_result.is_err());
5070
    /// assert_eq!(excessive_bytes, [0u8; 128]);
5071
    /// ```
5072
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5073
    #[inline]
5074
    fn write_to(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5075
    where
5076
        Self: Immutable,
5077
    {
5078
        let src = self.as_bytes();
5079
        if dst.len() == src.len() {
5080
            // SAFETY: Within this branch of the conditional, we have ensured
5081
            // that `dst.len()` is equal to `src.len()`. Neither the size of the
5082
            // source nor the size of the destination change between the above
5083
            // size check and the invocation of `copy_unchecked`.
5084
            unsafe { util::copy_unchecked(src, dst) }
5085
            Ok(())
5086
        } else {
5087
            Err(SizeError::new(self))
5088
        }
5089
    }
5090
5091
    /// Writes a copy of `self` to the prefix of `dst`.
5092
    ///
5093
    /// `write_to_prefix` writes `self` to the first `size_of_val(self)` bytes
5094
    /// of `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5095
    ///
5096
    /// # Examples
5097
    ///
5098
    /// ```
5099
    /// use zerocopy::IntoBytes;
5100
    /// # use zerocopy_derive::*;
5101
    ///
5102
    /// #[derive(IntoBytes, Immutable)]
5103
    /// #[repr(C)]
5104
    /// struct PacketHeader {
5105
    ///     src_port: [u8; 2],
5106
    ///     dst_port: [u8; 2],
5107
    ///     length: [u8; 2],
5108
    ///     checksum: [u8; 2],
5109
    /// }
5110
    ///
5111
    /// let header = PacketHeader {
5112
    ///     src_port: [0, 1],
5113
    ///     dst_port: [2, 3],
5114
    ///     length: [4, 5],
5115
    ///     checksum: [6, 7],
5116
    /// };
5117
    ///
5118
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5119
    ///
5120
    /// header.write_to_prefix(&mut bytes[..]);
5121
    ///
5122
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7, 0, 0]);
5123
    /// ```
5124
    ///
5125
    /// If insufficient target bytes are provided, `write_to_prefix` returns
5126
    /// `Err` and leaves the target bytes unmodified:
5127
    ///
5128
    /// ```
5129
    /// # use zerocopy::IntoBytes;
5130
    /// # let header = u128::MAX;
5131
    /// let mut insufficient_bytes = &mut [0, 0][..];
5132
    ///
5133
    /// let write_result = header.write_to_prefix(insufficient_bytes);
5134
    ///
5135
    /// assert!(write_result.is_err());
5136
    /// assert_eq!(insufficient_bytes, [0, 0]);
5137
    /// ```
5138
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5139
    #[inline]
5140
    fn write_to_prefix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5141
    where
5142
        Self: Immutable,
5143
    {
5144
        let src = self.as_bytes();
5145
        match dst.get_mut(..src.len()) {
5146
            Some(dst) => {
5147
                // SAFETY: Within this branch of the `match`, we have ensured
5148
                // through fallible subslicing that `dst.len()` is equal to
5149
                // `src.len()`. Neither the size of the source nor the size of
5150
                // the destination change between the above subslicing operation
5151
                // and the invocation of `copy_unchecked`.
5152
                unsafe { util::copy_unchecked(src, dst) }
5153
                Ok(())
5154
            }
5155
            None => Err(SizeError::new(self)),
5156
        }
5157
    }
5158
5159
    /// Writes a copy of `self` to the suffix of `dst`.
5160
    ///
5161
    /// `write_to_suffix` writes `self` to the last `size_of_val(self)` bytes of
5162
    /// `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5163
    ///
5164
    /// # Examples
5165
    ///
5166
    /// ```
5167
    /// use zerocopy::IntoBytes;
5168
    /// # use zerocopy_derive::*;
5169
    ///
5170
    /// #[derive(IntoBytes, Immutable)]
5171
    /// #[repr(C)]
5172
    /// struct PacketHeader {
5173
    ///     src_port: [u8; 2],
5174
    ///     dst_port: [u8; 2],
5175
    ///     length: [u8; 2],
5176
    ///     checksum: [u8; 2],
5177
    /// }
5178
    ///
5179
    /// let header = PacketHeader {
5180
    ///     src_port: [0, 1],
5181
    ///     dst_port: [2, 3],
5182
    ///     length: [4, 5],
5183
    ///     checksum: [6, 7],
5184
    /// };
5185
    ///
5186
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5187
    ///
5188
    /// header.write_to_suffix(&mut bytes[..]);
5189
    ///
5190
    /// assert_eq!(bytes, [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]);
5191
    ///
5192
    /// let mut insufficient_bytes = &mut [0, 0][..];
5193
    ///
5194
    /// let write_result = header.write_to_suffix(insufficient_bytes);
5195
    ///
5196
    /// assert!(write_result.is_err());
5197
    /// assert_eq!(insufficient_bytes, [0, 0]);
5198
    /// ```
5199
    ///
5200
    /// If insufficient target bytes are provided, `write_to_suffix` returns
5201
    /// `Err` and leaves the target bytes unmodified:
5202
    ///
5203
    /// ```
5204
    /// # use zerocopy::IntoBytes;
5205
    /// # let header = u128::MAX;
5206
    /// let mut insufficient_bytes = &mut [0, 0][..];
5207
    ///
5208
    /// let write_result = header.write_to_suffix(insufficient_bytes);
5209
    ///
5210
    /// assert!(write_result.is_err());
5211
    /// assert_eq!(insufficient_bytes, [0, 0]);
5212
    /// ```
5213
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5214
    #[inline]
5215
    fn write_to_suffix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5216
    where
5217
        Self: Immutable,
5218
    {
5219
        let src = self.as_bytes();
5220
        let start = if let Some(start) = dst.len().checked_sub(src.len()) {
5221
            start
5222
        } else {
5223
            return Err(SizeError::new(self));
5224
        };
5225
        let dst = if let Some(dst) = dst.get_mut(start..) {
5226
            dst
5227
        } else {
5228
            // get_mut() should never return None here. We return a `SizeError`
5229
            // rather than .unwrap() because in the event the branch is not
5230
            // optimized away, returning a value is generally lighter-weight
5231
            // than panicking.
5232
            return Err(SizeError::new(self));
5233
        };
5234
        // SAFETY: Through fallible subslicing of `dst`, we have ensured that
5235
        // `dst.len()` is equal to `src.len()`. Neither the size of the source
5236
        // nor the size of the destination change between the above subslicing
5237
        // operation and the invocation of `copy_unchecked`.
5238
        unsafe {
5239
            util::copy_unchecked(src, dst);
5240
        }
5241
        Ok(())
5242
    }
5243
5244
    /// Writes a copy of `self` to an `io::Write`.
5245
    ///
5246
    /// This is a shorthand for `dst.write_all(self.as_bytes())`, and is useful
5247
    /// for interfacing with operating system byte sinks (files, sockets, etc.).
5248
    ///
5249
    /// # Examples
5250
    ///
5251
    /// ```no_run
5252
    /// use zerocopy::{byteorder::big_endian::U16, FromBytes, IntoBytes};
5253
    /// use std::fs::File;
5254
    /// # use zerocopy_derive::*;
5255
    ///
5256
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
5257
    /// #[repr(C, packed)]
5258
    /// struct GrayscaleImage {
5259
    ///     height: U16,
5260
    ///     width: U16,
5261
    ///     pixels: [U16],
5262
    /// }
5263
    ///
5264
    /// let image = GrayscaleImage::ref_from_bytes(&[0, 0, 0, 0][..]).unwrap();
5265
    /// let mut file = File::create("image.bin").unwrap();
5266
    /// image.write_to_io(&mut file).unwrap();
5267
    /// ```
5268
    ///
5269
    /// If the write fails, `write_to_io` returns `Err` and a partial write may
5270
    /// have occurred; e.g.:
5271
    ///
5272
    /// ```
5273
    /// # use zerocopy::IntoBytes;
5274
    ///
5275
    /// let src = u128::MAX;
5276
    /// let mut dst = [0u8; 2];
5277
    ///
5278
    /// let write_result = src.write_to_io(&mut dst[..]);
5279
    ///
5280
    /// assert!(write_result.is_err());
5281
    /// assert_eq!(dst, [255, 255]);
5282
    /// ```
5283
    #[cfg(feature = "std")]
5284
    #[inline(always)]
5285
    fn write_to_io<W>(&self, mut dst: W) -> io::Result<()>
5286
    where
5287
        Self: Immutable,
5288
        W: io::Write,
5289
    {
5290
        dst.write_all(self.as_bytes())
5291
    }
5292
5293
    #[deprecated(since = "0.8.0", note = "`IntoBytes::as_bytes_mut` was renamed to `as_mut_bytes`")]
5294
    #[doc(hidden)]
5295
    #[inline]
5296
    fn as_bytes_mut(&mut self) -> &mut [u8]
5297
    where
5298
        Self: FromBytes,
5299
    {
5300
        self.as_mut_bytes()
5301
    }
5302
}
5303
5304
/// Analyzes whether a type is [`Unaligned`].
5305
///
5306
/// This derive analyzes, at compile time, whether the annotated type satisfies
5307
/// the [safety conditions] of `Unaligned` and implements `Unaligned` if it is
5308
/// sound to do so. This derive can be applied to structs, enums, and unions;
5309
/// e.g.:
5310
///
5311
/// ```
5312
/// # use zerocopy_derive::Unaligned;
5313
/// #[derive(Unaligned)]
5314
/// #[repr(C)]
5315
/// struct MyStruct {
5316
/// # /*
5317
///     ...
5318
/// # */
5319
/// }
5320
///
5321
/// #[derive(Unaligned)]
5322
/// #[repr(u8)]
5323
/// enum MyEnum {
5324
/// #   Variant0,
5325
/// # /*
5326
///     ...
5327
/// # */
5328
/// }
5329
///
5330
/// #[derive(Unaligned)]
5331
/// #[repr(packed)]
5332
/// union MyUnion {
5333
/// #   variant: u8,
5334
/// # /*
5335
///     ...
5336
/// # */
5337
/// }
5338
/// ```
5339
///
5340
/// # Analysis
5341
///
5342
/// *This section describes, roughly, the analysis performed by this derive to
5343
/// determine whether it is sound to implement `Unaligned` for a given type.
5344
/// Unless you are modifying the implementation of this derive, or attempting to
5345
/// manually implement `Unaligned` for a type yourself, you don't need to read
5346
/// this section.*
5347
///
5348
/// If a type has the following properties, then this derive can implement
5349
/// `Unaligned` for that type:
5350
///
5351
/// - If the type is a struct or union:
5352
///   - If `repr(align(N))` is provided, `N` must equal 1.
5353
///   - If the type is `repr(C)` or `repr(transparent)`, all fields must be
5354
///     [`Unaligned`].
5355
///   - If the type is not `repr(C)` or `repr(transparent)`, it must be
5356
///     `repr(packed)` or `repr(packed(1))`.
5357
/// - If the type is an enum:
5358
///   - If `repr(align(N))` is provided, `N` must equal 1.
5359
///   - It must be a field-less enum (meaning that all variants have no fields).
5360
///   - It must be `repr(i8)` or `repr(u8)`.
5361
///
5362
/// [safety conditions]: trait@Unaligned#safety
5363
#[cfg(any(feature = "derive", test))]
5364
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5365
pub use zerocopy_derive::Unaligned;
5366
5367
/// Types with no alignment requirement.
5368
///
5369
/// If `T: Unaligned`, then `align_of::<T>() == 1`.
5370
///
5371
/// # Implementation
5372
///
5373
/// **Do not implement this trait yourself!** Instead, use
5374
/// [`#[derive(Unaligned)]`][derive]; e.g.:
5375
///
5376
/// ```
5377
/// # use zerocopy_derive::Unaligned;
5378
/// #[derive(Unaligned)]
5379
/// #[repr(C)]
5380
/// struct MyStruct {
5381
/// # /*
5382
///     ...
5383
/// # */
5384
/// }
5385
///
5386
/// #[derive(Unaligned)]
5387
/// #[repr(u8)]
5388
/// enum MyEnum {
5389
/// #   Variant0,
5390
/// # /*
5391
///     ...
5392
/// # */
5393
/// }
5394
///
5395
/// #[derive(Unaligned)]
5396
/// #[repr(packed)]
5397
/// union MyUnion {
5398
/// #   variant: u8,
5399
/// # /*
5400
///     ...
5401
/// # */
5402
/// }
5403
/// ```
5404
///
5405
/// This derive performs a sophisticated, compile-time safety analysis to
5406
/// determine whether a type is `Unaligned`.
5407
///
5408
/// # Safety
5409
///
5410
/// *This section describes what is required in order for `T: Unaligned`, and
5411
/// what unsafe code may assume of such types. If you don't plan on implementing
5412
/// `Unaligned` manually, and you don't plan on writing unsafe code that
5413
/// operates on `Unaligned` types, then you don't need to read this section.*
5414
///
5415
/// If `T: Unaligned`, then unsafe code may assume that it is sound to produce a
5416
/// reference to `T` at any memory location regardless of alignment. If a type
5417
/// is marked as `Unaligned` which violates this contract, it may cause
5418
/// undefined behavior.
5419
///
5420
/// `#[derive(Unaligned)]` only permits [types which satisfy these
5421
/// requirements][derive-analysis].
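///
/// For illustration, a minimal sketch using one of zerocopy's own `byteorder`
/// types (which have alignment 1): because `U16` is `Unaligned`, borrowing it
/// from an odd offset of a byte buffer cannot fail due to misalignment.
///
/// ```
/// use zerocopy::{byteorder::big_endian::U16, FromBytes};
///
/// let bytes = &[0xff, 0x12, 0x34][..];
/// // `U16` has alignment 1, so starting at an odd offset is fine.
/// let value = U16::ref_from_bytes(&bytes[1..]).unwrap();
/// assert_eq!(value.get(), 0x1234);
/// ```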
5422
///
5423
#[cfg_attr(
5424
    feature = "derive",
5425
    doc = "[derive]: zerocopy_derive::Unaligned",
5426
    doc = "[derive-analysis]: zerocopy_derive::Unaligned#analysis"
5427
)]
5428
#[cfg_attr(
5429
    not(feature = "derive"),
5430
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html"),
5431
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html#analysis"),
5432
)]
5433
#[cfg_attr(
5434
    zerocopy_diagnostic_on_unimplemented_1_78_0,
5435
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Unaligned)]` to `{Self}`")
5436
)]
5437
pub unsafe trait Unaligned {
5438
    // The `Self: Sized` bound makes it so that `Unaligned` is still object
5439
    // safe.
5440
    #[doc(hidden)]
5441
    fn only_derive_is_allowed_to_implement_this_trait()
5442
    where
5443
        Self: Sized;
5444
}
5445
5446
/// Derives an optimized implementation of [`Hash`] for types that implement
5447
/// [`IntoBytes`] and [`Immutable`].
5448
///
5449
/// The standard library's derive for `Hash` generates a recursive descent
5450
/// into the fields of the type it is applied to. Instead, the implementation
5451
/// derived by this macro makes a single call to [`Hasher::write()`] for both
5452
/// [`Hash::hash()`] and [`Hash::hash_slice()`], feeding the hasher the bytes
5453
/// of the type or slice all at once.
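///
/// For illustration, a minimal sketch (the `Point` type here is hypothetical,
/// not part of zerocopy); `ByteHash` is paired with the `IntoBytes` and
/// `Immutable` derives it relies on:
///
/// ```
/// # use zerocopy_derive::*;
/// use core::hash::{Hash, Hasher};
/// use std::collections::hash_map::DefaultHasher;
///
/// #[derive(ByteHash, IntoBytes, Immutable)]
/// #[repr(C)]
/// struct Point {
///     x: u32,
///     y: u32,
/// }
///
/// let mut hasher = DefaultHasher::new();
/// // The derived impl feeds the raw bytes of `Point` to the hasher at once.
/// Point { x: 1, y: 2 }.hash(&mut hasher);
/// let _digest = hasher.finish();
/// ```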
5454
///
5455
/// [`Hash`]: core::hash::Hash
5456
/// [`Hash::hash()`]: core::hash::Hash::hash()
5457
/// [`Hash::hash_slice()`]: core::hash::Hash::hash_slice()
5458
#[cfg(any(feature = "derive", test))]
5459
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5460
pub use zerocopy_derive::ByteHash;
5461
5462
/// Derives an optimized implementation of [`PartialEq`] and [`Eq`] for types
5463
/// that implement [`IntoBytes`] and [`Immutable`].
5464
///
5465
/// The standard library's derive for [`PartialEq`] generates a recursive
5466
/// descent into the fields of the type it is applied to. Instead, the
5467
/// implementation derived by this macro performs a single slice comparison of
5468
/// the bytes of the two values being compared.
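///
/// For illustration, a minimal sketch (the `Id` type here is hypothetical, not
/// part of zerocopy); `ByteEq` is paired with the `IntoBytes` and `Immutable`
/// derives it relies on:
///
/// ```
/// # use zerocopy_derive::*;
/// #[derive(ByteEq, IntoBytes, Immutable)]
/// #[repr(C)]
/// struct Id([u8; 16]);
///
/// // Equality is a single comparison of the two values' bytes.
/// assert!(Id([7; 16]) == Id([7; 16]));
/// assert!(Id([7; 16]) != Id([0; 16]));
/// ```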
5469
#[cfg(any(feature = "derive", test))]
5470
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5471
pub use zerocopy_derive::ByteEq;
5472
5473
#[cfg(feature = "alloc")]
5474
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
5475
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5476
mod alloc_support {
5477
    use super::*;
5478
5479
    /// Extends a `Vec<T>` by pushing `additional` new items onto the end of the
5480
    /// vector. The new items are initialized with zeros.
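    ///
    /// A brief usage sketch; the non-deprecated path goes through
    /// `FromZeros::extend_vec_zeroed`, which this shim defers to:
    ///
    /// ```
    /// let mut v = vec![1u16, 2, 3];
    /// zerocopy::FromZeros::extend_vec_zeroed(&mut v, 2).unwrap();
    /// assert_eq!(&*v, &[1, 2, 3, 0, 0]);
    /// ```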
5481
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5482
    #[doc(hidden)]
5483
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5484
    #[inline(always)]
5485
    pub fn extend_vec_zeroed<T: FromZeros>(
5486
        v: &mut Vec<T>,
5487
        additional: usize,
5488
    ) -> Result<(), AllocError> {
5489
        <T as FromZeros>::extend_vec_zeroed(v, additional)
5490
    }
5491
5492
    /// Inserts `additional` new items into `Vec<T>` at `position`. The new
5493
    /// items are initialized with zeros.
5494
    ///
5495
    /// # Panics
5496
    ///
5497
    /// Panics if `position > v.len()`.
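    ///
    /// A brief usage sketch; the non-deprecated path goes through
    /// `FromZeros::insert_vec_zeroed`, which this shim defers to:
    ///
    /// ```
    /// let mut v = vec![1u16, 2, 3];
    /// zerocopy::FromZeros::insert_vec_zeroed(&mut v, 1, 2).unwrap();
    /// assert_eq!(&*v, &[1, 0, 0, 2, 3]);
    /// ```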
5498
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5499
    #[doc(hidden)]
5500
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5501
    #[inline(always)]
5502
    pub fn insert_vec_zeroed<T: FromZeros>(
5503
        v: &mut Vec<T>,
5504
        position: usize,
5505
        additional: usize,
5506
    ) -> Result<(), AllocError> {
5507
        <T as FromZeros>::insert_vec_zeroed(v, position, additional)
5508
    }
5509
}
5510
5511
#[cfg(feature = "alloc")]
5512
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5513
#[doc(hidden)]
5514
pub use alloc_support::*;
5515
5516
#[cfg(test)]
5517
#[allow(clippy::assertions_on_result_states, clippy::unreadable_literal)]
5518
mod tests {
5519
    use static_assertions::assert_impl_all;
5520
5521
    use super::*;
5522
    use crate::util::testutil::*;
5523
5524
    // An unsized type.
5525
    //
5526
    // This is used to test the custom derives of our traits. The `[u8]` type
5527
    // gets a hand-rolled impl, so it doesn't exercise our custom derives.
5528
    #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Unaligned, Immutable)]
5529
    #[repr(transparent)]
5530
    struct Unsized([u8]);
5531
5532
    impl Unsized {
5533
        fn from_mut_slice(slc: &mut [u8]) -> &mut Unsized {
5534
            // SAFETY: This is *probably* sound - since the layouts of `[u8]` and
5535
            // `Unsized` are the same, so are the layouts of `&mut [u8]` and
5536
            // `&mut Unsized`. [1] Even if it turns out that this isn't actually
5537
            // guaranteed by the language spec, we can just change this since
5538
            // it's in test code.
5539
            //
5540
            // [1] https://github.com/rust-lang/unsafe-code-guidelines/issues/375
5541
            unsafe { mem::transmute(slc) }
5542
        }
5543
    }
5544
5545
    #[test]
5546
    fn test_known_layout() {
5547
        // Test that `$ty` and `ManuallyDrop<$ty>` have the expected layout.
5548
        // Test that `PhantomData<$ty>` has the same layout as `()` regardless
5549
        // of `$ty`.
5550
        macro_rules! test {
5551
            ($ty:ty, $expect:expr) => {
5552
                let expect = $expect;
5553
                assert_eq!(<$ty as KnownLayout>::LAYOUT, expect);
5554
                assert_eq!(<ManuallyDrop<$ty> as KnownLayout>::LAYOUT, expect);
5555
                assert_eq!(<PhantomData<$ty> as KnownLayout>::LAYOUT, <() as KnownLayout>::LAYOUT);
5556
            };
5557
        }
5558
5559
        let layout = |offset, align, _trailing_slice_elem_size| DstLayout {
5560
            align: NonZeroUsize::new(align).unwrap(),
5561
            size_info: match _trailing_slice_elem_size {
5562
                None => SizeInfo::Sized { size: offset },
5563
                Some(elem_size) => SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5564
            },
5565
        };
5566
5567
        test!((), layout(0, 1, None));
5568
        test!(u8, layout(1, 1, None));
5569
        // Use `align_of` because `u64` alignment may be smaller than 8 on some
5570
        // platforms.
5571
        test!(u64, layout(8, mem::align_of::<u64>(), None));
5572
        test!(AU64, layout(8, 8, None));
5573
5574
        test!(Option<&'static ()>, usize::LAYOUT);
5575
5576
        test!([()], layout(0, 1, Some(0)));
5577
        test!([u8], layout(0, 1, Some(1)));
5578
        test!(str, layout(0, 1, Some(1)));
5579
    }
5580
5581
    #[cfg(feature = "derive")]
5582
    #[test]
5583
    fn test_known_layout_derive() {
5584
        // In this and other files (`late_compile_pass.rs`,
5585
        // `mid_compile_pass.rs`, and `struct.rs`), we test success and failure
5586
        // modes of `derive(KnownLayout)` for the following combination of
5587
        // properties:
5588
        //
5589
        // +------------+--------------------------------------+-----------+
5590
        // |            |      trailing field properties       |           |
5591
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5592
        // |------------+----------+----------------+----------+-----------|
5593
        // |          N |        N |              N |        N |      KL00 |
5594
        // |          N |        N |              N |        Y |      KL01 |
5595
        // |          N |        N |              Y |        N |      KL02 |
5596
        // |          N |        N |              Y |        Y |      KL03 |
5597
        // |          N |        Y |              N |        N |      KL04 |
5598
        // |          N |        Y |              N |        Y |      KL05 |
5599
        // |          N |        Y |              Y |        N |      KL06 |
5600
        // |          N |        Y |              Y |        Y |      KL07 |
5601
        // |          Y |        N |              N |        N |      KL08 |
5602
        // |          Y |        N |              N |        Y |      KL09 |
5603
        // |          Y |        N |              Y |        N |      KL10 |
5604
        // |          Y |        N |              Y |        Y |      KL11 |
5605
        // |          Y |        Y |              N |        N |      KL12 |
5606
        // |          Y |        Y |              N |        Y |      KL13 |
5607
        // |          Y |        Y |              Y |        N |      KL14 |
5608
        // |          Y |        Y |              Y |        Y |      KL15 |
5609
        // +------------+----------+----------------+----------+-----------+
5610
5611
        struct NotKnownLayout<T = ()> {
5612
            _t: T,
5613
        }
5614
5615
        #[derive(KnownLayout)]
5616
        #[repr(C)]
5617
        struct AlignSize<const ALIGN: usize, const SIZE: usize>
5618
        where
5619
            elain::Align<ALIGN>: elain::Alignment,
5620
        {
5621
            _align: elain::Align<ALIGN>,
5622
            size: [u8; SIZE],
5623
        }
5624
5625
        type AU16 = AlignSize<2, 2>;
5626
        type AU32 = AlignSize<4, 4>;
5627
5628
        fn _assert_kl<T: ?Sized + KnownLayout>(_: &T) {}
5629
5630
        let sized_layout = |align, size| DstLayout {
5631
            align: NonZeroUsize::new(align).unwrap(),
5632
            size_info: SizeInfo::Sized { size },
5633
        };
5634
5635
        let unsized_layout = |align, elem_size, offset| DstLayout {
5636
            align: NonZeroUsize::new(align).unwrap(),
5637
            size_info: SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5638
        };
5639
5640
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5641
        // |          N |        N |              N |        Y |      KL01 |
5642
        #[allow(dead_code)]
5643
        #[derive(KnownLayout)]
5644
        struct KL01(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5645
5646
        let expected = DstLayout::for_type::<KL01>();
5647
5648
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, expected);
5649
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, sized_layout(4, 8));
5650
5651
        // ...with `align(N)`:
5652
        #[allow(dead_code)]
5653
        #[derive(KnownLayout)]
5654
        #[repr(align(64))]
5655
        struct KL01Align(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5656
5657
        let expected = DstLayout::for_type::<KL01Align>();
5658
5659
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, expected);
5660
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5661
5662
        // ...with `packed`:
5663
        #[allow(dead_code)]
5664
        #[derive(KnownLayout)]
5665
        #[repr(packed)]
5666
        struct KL01Packed(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5667
5668
        let expected = DstLayout::for_type::<KL01Packed>();
5669
5670
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, expected);
5671
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, sized_layout(1, 6));
5672
5673
        // ...with `packed(N)`:
5674
        #[allow(dead_code)]
5675
        #[derive(KnownLayout)]
5676
        #[repr(packed(2))]
5677
        struct KL01PackedN(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5678
5679
        assert_impl_all!(KL01PackedN: KnownLayout);
5680
5681
        let expected = DstLayout::for_type::<KL01PackedN>();
5682
5683
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, expected);
5684
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5685
5686
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5687
        // |          N |        N |              Y |        Y |      KL03 |
5688
        #[allow(dead_code)]
5689
        #[derive(KnownLayout)]
5690
        struct KL03(NotKnownLayout, u8);
5691
5692
        let expected = DstLayout::for_type::<KL03>();
5693
5694
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, expected);
5695
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, sized_layout(1, 1));
5696
5697
        // ... with `align(N)`
5698
        #[allow(dead_code)]
5699
        #[derive(KnownLayout)]
5700
        #[repr(align(64))]
5701
        struct KL03Align(NotKnownLayout<AU32>, u8);
5702
5703
        let expected = DstLayout::for_type::<KL03Align>();
5704
5705
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, expected);
5706
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5707
5708
        // ... with `packed`:
5709
        #[allow(dead_code)]
5710
        #[derive(KnownLayout)]
5711
        #[repr(packed)]
5712
        struct KL03Packed(NotKnownLayout<AU32>, u8);
5713
5714
        let expected = DstLayout::for_type::<KL03Packed>();
5715
5716
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, expected);
5717
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, sized_layout(1, 5));
5718
5719
        // ... with `packed(N)`
5720
        #[allow(dead_code)]
5721
        #[derive(KnownLayout)]
5722
        #[repr(packed(2))]
5723
        struct KL03PackedN(NotKnownLayout<AU32>, u8);
5724
5725
        assert_impl_all!(KL03PackedN: KnownLayout);
5726
5727
        let expected = DstLayout::for_type::<KL03PackedN>();
5728
5729
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, expected);
5730
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5731
5732
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5733
        // |          N |        Y |              N |        Y |      KL05 |
5734
        #[allow(dead_code)]
5735
        #[derive(KnownLayout)]
5736
        struct KL05<T>(u8, T);
5737
5738
        fn _test_kl05<T>(t: T) -> impl KnownLayout {
5739
            KL05(0u8, t)
5740
        }
5741
5742
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5743
        // |          N |        Y |              Y |        Y |      KL07 |
5744
        #[allow(dead_code)]
5745
        #[derive(KnownLayout)]
5746
        struct KL07<T: KnownLayout>(u8, T);
5747
5748
        fn _test_kl07<T: KnownLayout>(t: T) -> impl KnownLayout {
5749
            let _ = KL07(0u8, t);
5750
        }
5751
5752
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5753
        // |          Y |        N |              Y |        N |      KL10 |
5754
        #[allow(dead_code)]
5755
        #[derive(KnownLayout)]
5756
        #[repr(C)]
5757
        struct KL10(NotKnownLayout<AU32>, [u8]);
5758
5759
        let expected = DstLayout::new_zst(None)
5760
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5761
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5762
            .pad_to_align();
5763
5764
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, expected);
5765
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 4));
5766
5767
        // ...with `align(N)`:
5768
        #[allow(dead_code)]
5769
        #[derive(KnownLayout)]
5770
        #[repr(C, align(64))]
5771
        struct KL10Align(NotKnownLayout<AU32>, [u8]);
5772
5773
        let repr_align = NonZeroUsize::new(64);
5774
5775
        let expected = DstLayout::new_zst(repr_align)
5776
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5777
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5778
            .pad_to_align();
5779
5780
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, expected);
5781
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, unsized_layout(64, 1, 4));
5782
5783
        // ...with `packed`:
5784
        #[allow(dead_code)]
5785
        #[derive(KnownLayout)]
5786
        #[repr(C, packed)]
5787
        struct KL10Packed(NotKnownLayout<AU32>, [u8]);
5788
5789
        let repr_packed = NonZeroUsize::new(1);
5790
5791
        let expected = DstLayout::new_zst(None)
5792
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5793
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5794
            .pad_to_align();
5795
5796
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, expected);
5797
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, unsized_layout(1, 1, 4));
5798
5799
        // ...with `packed(N)`:
5800
        #[allow(dead_code)]
5801
        #[derive(KnownLayout)]
5802
        #[repr(C, packed(2))]
5803
        struct KL10PackedN(NotKnownLayout<AU32>, [u8]);
5804
5805
        let repr_packed = NonZeroUsize::new(2);
5806
5807
        let expected = DstLayout::new_zst(None)
5808
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5809
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5810
            .pad_to_align();
5811
5812
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, expected);
5813
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
5814
5815
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5816
        // |          Y |        N |              Y |        Y |      KL11 |
5817
        #[allow(dead_code)]
5818
        #[derive(KnownLayout)]
5819
        #[repr(C)]
5820
        struct KL11(NotKnownLayout<AU64>, u8);
5821
5822
        let expected = DstLayout::new_zst(None)
5823
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
5824
            .extend(<u8 as KnownLayout>::LAYOUT, None)
5825
            .pad_to_align();
5826
5827
        assert_eq!(<KL11 as KnownLayout>::LAYOUT, expected);
5828
        assert_eq!(<KL11 as KnownLayout>::LAYOUT, sized_layout(8, 16));
5829
5830
        // ...with `align(N)`:
5831
        #[allow(dead_code)]
5832
        #[derive(KnownLayout)]
5833
        #[repr(C, align(64))]
5834
        struct KL11Align(NotKnownLayout<AU64>, u8);
5835
5836
        let repr_align = NonZeroUsize::new(64);
5837
5838
        let expected = DstLayout::new_zst(repr_align)
5839
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
5840
            .extend(<u8 as KnownLayout>::LAYOUT, None)
5841
            .pad_to_align();
5842
5843
        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, expected);
5844
        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5845
5846
        // ...with `packed`:
5847
        #[allow(dead_code)]
5848
        #[derive(KnownLayout)]
5849
        #[repr(C, packed)]
5850
        struct KL11Packed(NotKnownLayout<AU64>, u8);
5851
5852
        let repr_packed = NonZeroUsize::new(1);
5853
5854
        let expected = DstLayout::new_zst(None)
5855
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
5856
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
5857
            .pad_to_align();
5858
5859
        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, expected);
5860
        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, sized_layout(1, 9));
5861
5862
        // ...with `packed(N)`:
5863
        #[allow(dead_code)]
5864
        #[derive(KnownLayout)]
5865
        #[repr(C, packed(2))]
5866
        struct KL11PackedN(NotKnownLayout<AU64>, u8);
5867
5868
        let repr_packed = NonZeroUsize::new(2);
5869
5870
        let expected = DstLayout::new_zst(None)
5871
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
5872
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
5873
            .pad_to_align();
5874
5875
        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, expected);
5876
        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, sized_layout(2, 10));
5877
5878
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5879
        // |          Y |        Y |              Y |        N |      KL14 |
5880
        #[allow(dead_code)]
5881
        #[derive(KnownLayout)]
5882
        #[repr(C)]
5883
        struct KL14<T: ?Sized + KnownLayout>(u8, T);
5884
5885
        fn _test_kl14<T: ?Sized + KnownLayout>(kl: &KL14<T>) {
5886
            _assert_kl(kl)
5887
        }
5888
5889
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5890
        // |          Y |        Y |              Y |        Y |      KL15 |
5891
        #[allow(dead_code)]
5892
        #[derive(KnownLayout)]
5893
        #[repr(C)]
5894
        struct KL15<T: KnownLayout>(u8, T);
5895
5896
        fn _test_kl15<T: KnownLayout>(t: T) -> impl KnownLayout {
5897
            let _ = KL15(0u8, t);
5898
        }
5899
5900
        // Test a variety of combinations of field types:
5901
        //  - ()
5902
        //  - u8
5903
        //  - AU16
5904
        //  - [()]
5905
        //  - [u8]
5906
        //  - [AU16]
5907
5908
        #[allow(clippy::upper_case_acronyms, dead_code)]
5909
        #[derive(KnownLayout)]
5910
        #[repr(C)]
5911
        struct KLTU<T, U: ?Sized>(T, U);
5912
5913
        assert_eq!(<KLTU<(), ()> as KnownLayout>::LAYOUT, sized_layout(1, 0));
5914
5915
        assert_eq!(<KLTU<(), u8> as KnownLayout>::LAYOUT, sized_layout(1, 1));
5916
5917
        assert_eq!(<KLTU<(), AU16> as KnownLayout>::LAYOUT, sized_layout(2, 2));
5918
5919
        assert_eq!(<KLTU<(), [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 0));
5920
5921
        assert_eq!(<KLTU<(), [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
5922
5923
        assert_eq!(<KLTU<(), [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 0));
5924
5925
        assert_eq!(<KLTU<u8, ()> as KnownLayout>::LAYOUT, sized_layout(1, 1));
5926
5927
        assert_eq!(<KLTU<u8, u8> as KnownLayout>::LAYOUT, sized_layout(1, 2));
5928
5929
        assert_eq!(<KLTU<u8, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
5930
5931
        assert_eq!(<KLTU<u8, [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 1));
5932
5933
        assert_eq!(<KLTU<u8, [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
5934
5935
        assert_eq!(<KLTU<u8, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
5936
5937
        assert_eq!(<KLTU<AU16, ()> as KnownLayout>::LAYOUT, sized_layout(2, 2));
5938
5939
        assert_eq!(<KLTU<AU16, u8> as KnownLayout>::LAYOUT, sized_layout(2, 4));
5940
5941
        assert_eq!(<KLTU<AU16, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
5942
5943
        assert_eq!(<KLTU<AU16, [()]> as KnownLayout>::LAYOUT, unsized_layout(2, 0, 2));
5944
5945
        assert_eq!(<KLTU<AU16, [u8]> as KnownLayout>::LAYOUT, unsized_layout(2, 1, 2));
5946
5947
        assert_eq!(<KLTU<AU16, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
5948
5949
        // Test a variety of field counts.
5950
5951
        #[derive(KnownLayout)]
5952
        #[repr(C)]
5953
        struct KLF0;
5954
5955
        assert_eq!(<KLF0 as KnownLayout>::LAYOUT, sized_layout(1, 0));
5956
5957
        #[derive(KnownLayout)]
5958
        #[repr(C)]
5959
        struct KLF1([u8]);
5960
5961
        assert_eq!(<KLF1 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
5962
5963
        #[derive(KnownLayout)]
5964
        #[repr(C)]
5965
        struct KLF2(NotKnownLayout<u8>, [u8]);
5966
5967
        assert_eq!(<KLF2 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
5968
5969
        #[derive(KnownLayout)]
5970
        #[repr(C)]
5971
        struct KLF3(NotKnownLayout<u8>, NotKnownLayout<AU16>, [u8]);
5972
5973
        assert_eq!(<KLF3 as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
5974
5975
        #[derive(KnownLayout)]
5976
        #[repr(C)]
5977
        struct KLF4(NotKnownLayout<u8>, NotKnownLayout<AU16>, NotKnownLayout<AU32>, [u8]);
5978
5979
        assert_eq!(<KLF4 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 8));
5980
    }
5981
5982
    #[test]
5983
    fn test_object_safety() {
5984
        fn _takes_no_cell(_: &dyn Immutable) {}
5985
        fn _takes_unaligned(_: &dyn Unaligned) {}
5986
    }
5987
5988
    #[test]
5989
    fn test_from_zeros_only() {
5990
        // Test types that implement `FromZeros` but not `FromBytes`.
5991
5992
        assert!(!bool::new_zeroed());
5993
        assert_eq!(char::new_zeroed(), '\0');
5994
5995
        #[cfg(feature = "alloc")]
5996
        {
5997
            assert_eq!(bool::new_box_zeroed(), Ok(Box::new(false)));
5998
            assert_eq!(char::new_box_zeroed(), Ok(Box::new('\0')));
5999
6000
            assert_eq!(
6001
                <[bool]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6002
                [false, false, false]
6003
            );
6004
            assert_eq!(
6005
                <[char]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6006
                ['\0', '\0', '\0']
6007
            );
6008
6009
            assert_eq!(bool::new_vec_zeroed(3).unwrap().as_ref(), [false, false, false]);
6010
            assert_eq!(char::new_vec_zeroed(3).unwrap().as_ref(), ['\0', '\0', '\0']);
6011
        }
6012
6013
        let mut string = "hello".to_string();
6014
        let s: &mut str = string.as_mut();
6015
        assert_eq!(s, "hello");
6016
        s.zero();
6017
        assert_eq!(s, "\0\0\0\0\0");
6018
    }
6019
6020
    #[test]
6021
    fn test_zst_count_preserved() {
6022
        // Test that, when an explicit count is provided for a type with a
6023
        // ZST trailing slice element, that count is preserved. This is
6024
        // important since, for such types, all element counts result in objects
6025
        // of the same size, and so the correct behavior is ambiguous. However,
6026
        // preserving the count as requested by the user is the behavior that we
6027
        // document publicly.
6028
6029
        // FromZeros methods
6030
        #[cfg(feature = "alloc")]
6031
        assert_eq!(<[()]>::new_box_zeroed_with_elems(3).unwrap().len(), 3);
6032
        #[cfg(feature = "alloc")]
6033
        assert_eq!(<()>::new_vec_zeroed(3).unwrap().len(), 3);
6034
6035
        // FromBytes methods
6036
        assert_eq!(<[()]>::ref_from_bytes_with_elems(&[][..], 3).unwrap().len(), 3);
6037
        assert_eq!(<[()]>::ref_from_prefix_with_elems(&[][..], 3).unwrap().0.len(), 3);
6038
        assert_eq!(<[()]>::ref_from_suffix_with_elems(&[][..], 3).unwrap().1.len(), 3);
6039
        assert_eq!(<[()]>::mut_from_bytes_with_elems(&mut [][..], 3).unwrap().len(), 3);
6040
        assert_eq!(<[()]>::mut_from_prefix_with_elems(&mut [][..], 3).unwrap().0.len(), 3);
6041
        assert_eq!(<[()]>::mut_from_suffix_with_elems(&mut [][..], 3).unwrap().1.len(), 3);
6042
    }
6043
6044
    #[test]
6045
    fn test_read_write() {
6046
        const VAL: u64 = 0x12345678;
6047
        #[cfg(target_endian = "big")]
6048
        const VAL_BYTES: [u8; 8] = VAL.to_be_bytes();
6049
        #[cfg(target_endian = "little")]
6050
        const VAL_BYTES: [u8; 8] = VAL.to_le_bytes();
6051
        const ZEROS: [u8; 8] = [0u8; 8];
6052
6053
        // Test `FromBytes::{read_from_bytes, read_from_prefix, read_from_suffix}`.
6054
6055
        assert_eq!(u64::read_from_bytes(&VAL_BYTES[..]), Ok(VAL));
6056
        // The first 8 bytes are from `VAL_BYTES` and the second 8 bytes are all
6057
        // zeros.
6058
        let bytes_with_prefix: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6059
        assert_eq!(u64::read_from_prefix(&bytes_with_prefix[..]), Ok((VAL, &ZEROS[..])));
6060
        assert_eq!(u64::read_from_suffix(&bytes_with_prefix[..]), Ok((&VAL_BYTES[..], 0)));
6061
        // The first 8 bytes are all zeros and the second 8 bytes are from
6062
        // `VAL_BYTES`.
6063
        let bytes_with_suffix: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6064
        assert_eq!(u64::read_from_prefix(&bytes_with_suffix[..]), Ok((0, &VAL_BYTES[..])));
6065
        assert_eq!(u64::read_from_suffix(&bytes_with_suffix[..]), Ok((&ZEROS[..], VAL)));
6066
6067
        // Test `IntoBytes::{write_to, write_to_prefix, write_to_suffix}`.
6068
6069
        let mut bytes = [0u8; 8];
6070
        assert_eq!(VAL.write_to(&mut bytes[..]), Ok(()));
6071
        assert_eq!(bytes, VAL_BYTES);
6072
        let mut bytes = [0u8; 16];
6073
        assert_eq!(VAL.write_to_prefix(&mut bytes[..]), Ok(()));
6074
        let want: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6075
        assert_eq!(bytes, want);
6076
        let mut bytes = [0u8; 16];
6077
        assert_eq!(VAL.write_to_suffix(&mut bytes[..]), Ok(()));
6078
        let want: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6079
        assert_eq!(bytes, want);
6080
    }
6081
6082
    #[test]
6083
    #[cfg(feature = "std")]
6084
    fn test_read_write_io() {
6085
        let mut long_buffer = [0, 0, 0, 0];
6086
        assert!(matches!(u16::MAX.write_to_io(&mut long_buffer[..]), Ok(())));
6087
        assert_eq!(long_buffer, [255, 255, 0, 0]);
6088
        assert!(matches!(u16::read_from_io(&long_buffer[..]), Ok(u16::MAX)));
6089
6090
        let mut short_buffer = [0, 0];
6091
        assert!(u32::MAX.write_to_io(&mut short_buffer[..]).is_err());
6092
        assert_eq!(short_buffer, [255, 255]);
6093
        assert!(u32::read_from_io(&short_buffer[..]).is_err());
6094
    }
6095
6096
    #[test]
6097
    fn test_try_from_bytes_try_read_from() {
6098
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[0]), Ok(false));
6099
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[1]), Ok(true));
6100
6101
        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[0, 2]), Ok((false, &[2][..])));
6102
        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[1, 2]), Ok((true, &[2][..])));
6103
6104
        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 0]), Ok((&[2][..], false)));
6105
        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 1]), Ok((&[2][..], true)));
6106
6107
        // If we don't pass enough bytes, it fails.
6108
        assert!(matches!(
6109
            <u8 as TryFromBytes>::try_read_from_bytes(&[]),
6110
            Err(TryReadError::Size(_))
6111
        ));
6112
        assert!(matches!(
6113
            <u8 as TryFromBytes>::try_read_from_prefix(&[]),
6114
            Err(TryReadError::Size(_))
6115
        ));
6116
        assert!(matches!(
6117
            <u8 as TryFromBytes>::try_read_from_suffix(&[]),
6118
            Err(TryReadError::Size(_))
6119
        ));
6120
6121
        // If we pass too many bytes, it fails.
6122
        assert!(matches!(
6123
            <u8 as TryFromBytes>::try_read_from_bytes(&[0, 0]),
6124
            Err(TryReadError::Size(_))
6125
        ));
6126
6127
        // If we pass an invalid value, it fails.
6128
        assert!(matches!(
6129
            <bool as TryFromBytes>::try_read_from_bytes(&[2]),
6130
            Err(TryReadError::Validity(_))
6131
        ));
6132
        assert!(matches!(
6133
            <bool as TryFromBytes>::try_read_from_prefix(&[2, 0]),
6134
            Err(TryReadError::Validity(_))
6135
        ));
6136
        assert!(matches!(
6137
            <bool as TryFromBytes>::try_read_from_suffix(&[0, 2]),
6138
            Err(TryReadError::Validity(_))
6139
        ));
6140
6141
        // Reading from a misaligned buffer should still succeed. Since `AU64`'s
6142
        // alignment is 8, and since we read from two adjacent addresses one
6143
        // byte apart, it is guaranteed that at least one of them (though
6144
        // possibly both) will be misaligned.
6145
        let bytes: [u8; 9] = [0, 0, 0, 0, 0, 0, 0, 0, 0];
6146
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[..8]), Ok(AU64(0)));
6147
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[1..9]), Ok(AU64(0)));
6148
6149
        assert_eq!(
6150
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[..8]),
6151
            Ok((AU64(0), &[][..]))
6152
        );
6153
        assert_eq!(
6154
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[1..9]),
6155
            Ok((AU64(0), &[][..]))
6156
        );
6157
6158
        assert_eq!(
6159
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[..8]),
6160
            Ok((&[][..], AU64(0)))
6161
        );
6162
        assert_eq!(
6163
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[1..9]),
6164
            Ok((&[][..], AU64(0)))
6165
        );
6166
    }
6167
6168
    #[test]
6169
    fn test_ref_from_mut_from() {
6170
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` success cases.
6171
        // Exhaustive coverage for these methods is provided by the `Ref` tests above,
6172
        // which these helper methods defer to.
6173
6174
        let mut buf =
6175
            Align::<[u8; 16], AU64>::new([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);
6176
6177
        assert_eq!(
6178
            AU64::ref_from_bytes(&buf.t[8..]).unwrap().0.to_ne_bytes(),
6179
            [8, 9, 10, 11, 12, 13, 14, 15]
6180
        );
6181
        let suffix = AU64::mut_from_bytes(&mut buf.t[8..]).unwrap();
6182
        suffix.0 = 0x0101010101010101;
6183
        // The `[u8; 9]` is a non-half size of the full buffer, which would catch
6184
        // `from_prefix` having the same implementation as `from_suffix` (issues #506, #511).
6185
        assert_eq!(
6186
            <[u8; 9]>::ref_from_suffix(&buf.t[..]).unwrap(),
6187
            (&[0, 1, 2, 3, 4, 5, 6][..], &[7u8, 1, 1, 1, 1, 1, 1, 1, 1])
6188
        );
6189
        let (prefix, suffix) = AU64::mut_from_suffix(&mut buf.t[1..]).unwrap();
6190
        assert_eq!(prefix, &mut [1u8, 2, 3, 4, 5, 6, 7][..]);
6191
        suffix.0 = 0x0202020202020202;
6192
        let (prefix, suffix) = <[u8; 10]>::mut_from_suffix(&mut buf.t[..]).unwrap();
6193
        assert_eq!(prefix, &mut [0u8, 1, 2, 3, 4, 5][..]);
6194
        suffix[0] = 42;
6195
        assert_eq!(
6196
            <[u8; 9]>::ref_from_prefix(&buf.t[..]).unwrap(),
6197
            (&[0u8, 1, 2, 3, 4, 5, 42, 7, 2], &[2u8, 2, 2, 2, 2, 2, 2][..])
6198
        );
6199
        <[u8; 2]>::mut_from_prefix(&mut buf.t[..]).unwrap().0[1] = 30;
6200
        assert_eq!(buf.t, [0, 30, 2, 3, 4, 5, 42, 7, 2, 2, 2, 2, 2, 2, 2, 2]);
6201
    }
6202
6203
    #[test]
6204
    fn test_ref_from_mut_from_error() {
6205
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` error cases.
6206
6207
        // Fail because the buffer is too large.
6208
        let mut buf = Align::<[u8; 16], AU64>::default();
6209
        // `buf.t` should be aligned to 8, so only the length check should fail.
6210
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6211
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6212
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6213
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6214
6215
        // Fail because the buffer is too small.
6216
        let mut buf = Align::<[u8; 4], AU64>::default();
6217
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6218
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6219
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6220
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6221
        assert!(AU64::ref_from_prefix(&buf.t[..]).is_err());
6222
        assert!(AU64::mut_from_prefix(&mut buf.t[..]).is_err());
6223
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6224
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6225
        assert!(<[u8; 8]>::ref_from_prefix(&buf.t[..]).is_err());
6226
        assert!(<[u8; 8]>::mut_from_prefix(&mut buf.t[..]).is_err());
6227
        assert!(<[u8; 8]>::ref_from_suffix(&buf.t[..]).is_err());
6228
        assert!(<[u8; 8]>::mut_from_suffix(&mut buf.t[..]).is_err());
6229
6230
        // Fail because the alignment is insufficient.
6231
        let mut buf = Align::<[u8; 13], AU64>::default();
6232
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
6233
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
6234
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
6235
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
6236
        assert!(AU64::ref_from_prefix(&buf.t[1..]).is_err());
6237
        assert!(AU64::mut_from_prefix(&mut buf.t[1..]).is_err());
6238
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6239
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6240
    }
6241
6242
    #[test]
6243
    fn test_to_methods() {
6244
        /// Run a series of tests by calling `IntoBytes` methods on `t`.
6245
        ///
6246
        /// `bytes` is the expected byte sequence returned from `t.as_bytes()`
6247
        /// before `t` has been modified. `post_mutation` is the expected
6248
        /// sequence returned from `t.as_bytes()` after `t.as_mut_bytes()[0]`
6249
        /// has had its bits flipped (by applying `^= 0xFF`).
6250
        ///
6251
        /// `N` is the size of `t` in bytes.
6252
        fn test<T: FromBytes + IntoBytes + Immutable + Debug + Eq + ?Sized, const N: usize>(
6253
            t: &mut T,
6254
            bytes: &[u8],
6255
            post_mutation: &T,
6256
        ) {
6257
            // Test that we can access the underlying bytes, and that we get the
6258
            // right bytes and the right number of bytes.
6259
            assert_eq!(t.as_bytes(), bytes);
6260
6261
            // Test that changes to the underlying byte slices are reflected in
6262
            // the original object.
6263
            t.as_mut_bytes()[0] ^= 0xFF;
6264
            assert_eq!(t, post_mutation);
6265
            t.as_mut_bytes()[0] ^= 0xFF;
6266
6267
            // `write_to` rejects slices that are too small or too large.
6268
            assert!(t.write_to(&mut vec![0; N - 1][..]).is_err());
6269
            assert!(t.write_to(&mut vec![0; N + 1][..]).is_err());
6270
6271
            // `write_to` works as expected.
6272
            let mut bytes = [0; N];
6273
            assert_eq!(t.write_to(&mut bytes[..]), Ok(()));
6274
            assert_eq!(bytes, t.as_bytes());
6275
6276
            // `write_to_prefix` rejects slices that are too small.
6277
            assert!(t.write_to_prefix(&mut vec![0; N - 1][..]).is_err());
6278
6279
            // `write_to_prefix` works with exact-sized slices.
6280
            let mut bytes = [0; N];
6281
            assert_eq!(t.write_to_prefix(&mut bytes[..]), Ok(()));
6282
            assert_eq!(bytes, t.as_bytes());
6283
6284
            // `write_to_prefix` works with too-large slices, and any bytes past
6285
            // the prefix aren't modified.
6286
            let mut too_many_bytes = vec![0; N + 1];
6287
            too_many_bytes[N] = 123;
6288
            assert_eq!(t.write_to_prefix(&mut too_many_bytes[..]), Ok(()));
6289
            assert_eq!(&too_many_bytes[..N], t.as_bytes());
6290
            assert_eq!(too_many_bytes[N], 123);
6291
6292
            // `write_to_suffix` rejects slices that are too small.
6293
            assert!(t.write_to_suffix(&mut vec![0; N - 1][..]).is_err());
6294
6295
            // `write_to_suffix` works with exact-sized slices.
6296
            let mut bytes = [0; N];
6297
            assert_eq!(t.write_to_suffix(&mut bytes[..]), Ok(()));
6298
            assert_eq!(bytes, t.as_bytes());
6299
6300
            // `write_to_suffix` works with too-large slices, and any bytes
6301
            // before the suffix aren't modified.
6302
            let mut too_many_bytes = vec![0; N + 1];
6303
            too_many_bytes[0] = 123;
6304
            assert_eq!(t.write_to_suffix(&mut too_many_bytes[..]), Ok(()));
6305
            assert_eq!(&too_many_bytes[1..], t.as_bytes());
6306
            assert_eq!(too_many_bytes[0], 123);
6307
        }
6308
6309
        #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Immutable)]
6310
        #[repr(C)]
6311
        struct Foo {
6312
            a: u32,
6313
            b: Wrapping<u32>,
6314
            c: Option<NonZeroU32>,
6315
        }
6316
6317
        let expected_bytes: Vec<u8> = if cfg!(target_endian = "little") {
6318
            vec![1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
6319
        } else {
6320
            vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0]
6321
        };
6322
        let post_mutation_expected_a =
6323
            if cfg!(target_endian = "little") { 0x00_00_00_FE } else { 0xFF_00_00_01 };
6324
        test::<_, 12>(
6325
            &mut Foo { a: 1, b: Wrapping(2), c: None },
6326
            expected_bytes.as_bytes(),
6327
            &Foo { a: post_mutation_expected_a, b: Wrapping(2), c: None },
6328
        );
6329
        test::<_, 3>(
6330
            Unsized::from_mut_slice(&mut [1, 2, 3]),
6331
            &[1, 2, 3],
6332
            Unsized::from_mut_slice(&mut [0xFE, 2, 3]),
6333
        );
6334
    }
6335
6336
    #[test]
6337
    fn test_array() {
6338
        #[derive(FromBytes, IntoBytes, Immutable)]
6339
        #[repr(C)]
6340
        struct Foo {
6341
            a: [u16; 33],
6342
        }
6343
6344
        let foo = Foo { a: [0xFFFF; 33] };
6345
        let expected = [0xFFu8; 66];
6346
        assert_eq!(foo.as_bytes(), &expected[..]);
6347
    }
6348
6349
    #[test]
6350
    fn test_new_zeroed() {
6351
        assert!(!bool::new_zeroed());
6352
        assert_eq!(u64::new_zeroed(), 0);
6353
        // This test exists in order to exercise unsafe code, especially when
6354
        // running under Miri.
6355
        #[allow(clippy::unit_cmp)]
6356
        {
6357
            assert_eq!(<()>::new_zeroed(), ());
6358
        }
6359
    }
6360
6361
    #[test]
6362
    fn test_transparent_packed_generic_struct() {
6363
        #[derive(IntoBytes, FromBytes, Unaligned)]
6364
        #[repr(transparent)]
6365
        #[allow(dead_code)] // We never construct this type
6366
        struct Foo<T> {
6367
            _t: T,
6368
            _phantom: PhantomData<()>,
6369
        }
6370
6371
        assert_impl_all!(Foo<u32>: FromZeros, FromBytes, IntoBytes);
6372
        assert_impl_all!(Foo<u8>: Unaligned);
6373
6374
        #[derive(IntoBytes, FromBytes, Unaligned)]
6375
        #[repr(C, packed)]
6376
        #[allow(dead_code)] // We never construct this type
6377
        struct Bar<T, U> {
6378
            _t: T,
6379
            _u: U,
6380
        }
6381
6382
        assert_impl_all!(Bar<u8, AU64>: FromZeros, FromBytes, IntoBytes, Unaligned);
6383
    }
6384
6385
    #[cfg(feature = "alloc")]
6386
    mod alloc {
6387
        use super::*;
6388
6389
        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6390
        #[test]
6391
        fn test_extend_vec_zeroed() {
6392
            // Test extending when there is an existing allocation.
6393
            let mut v = vec![100u16, 200, 300];
6394
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6395
            assert_eq!(v.len(), 6);
6396
            assert_eq!(&*v, &[100, 200, 300, 0, 0, 0]);
6397
            drop(v);
6398
6399
            // Test extending when there is no existing allocation.
6400
            let mut v: Vec<u64> = Vec::new();
6401
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6402
            assert_eq!(v.len(), 3);
6403
            assert_eq!(&*v, &[0, 0, 0]);
6404
            drop(v);
6405
        }
6406
6407
        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6408
        #[test]
6409
        fn test_extend_vec_zeroed_zst() {
6410
            // Test extending when there is an existing (fake) allocation.
6411
            let mut v = vec![(), (), ()];
6412
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6413
            assert_eq!(v.len(), 6);
6414
            assert_eq!(&*v, &[(), (), (), (), (), ()]);
6415
            drop(v);
6416
6417
            // Test extending when there is no existing (fake) allocation.
6418
            let mut v: Vec<()> = Vec::new();
6419
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6420
            assert_eq!(&*v, &[(), (), ()]);
6421
            drop(v);
6422
        }
6423
6424
        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6425
        #[test]
6426
        fn test_insert_vec_zeroed() {
6427
            // Insert at start (no existing allocation).
6428
            let mut v: Vec<u64> = Vec::new();
6429
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6430
            assert_eq!(v.len(), 2);
6431
            assert_eq!(&*v, &[0, 0]);
6432
            drop(v);
6433
6434
            // Insert at start.
6435
            let mut v = vec![100u64, 200, 300];
6436
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6437
            assert_eq!(v.len(), 5);
6438
            assert_eq!(&*v, &[0, 0, 100, 200, 300]);
6439
            drop(v);
6440
6441
            // Insert at middle.
6442
            let mut v = vec![100u64, 200, 300];
6443
            u64::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6444
            assert_eq!(v.len(), 4);
6445
            assert_eq!(&*v, &[100, 0, 200, 300]);
6446
            drop(v);
6447
6448
            // Insert at end.
6449
            let mut v = vec![100u64, 200, 300];
6450
            u64::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6451
            assert_eq!(v.len(), 4);
6452
            assert_eq!(&*v, &[100, 200, 300, 0]);
6453
            drop(v);
6454
        }
6455
6456
        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
6457
        #[test]
6458
        fn test_insert_vec_zeroed_zst() {
6459
            // Insert at start (no existing fake allocation).
6460
            let mut v: Vec<()> = Vec::new();
6461
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6462
            assert_eq!(v.len(), 2);
6463
            assert_eq!(&*v, &[(), ()]);
6464
            drop(v);
6465
6466
            // Insert at start.
6467
            let mut v = vec![(), (), ()];
6468
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6469
            assert_eq!(v.len(), 5);
6470
            assert_eq!(&*v, &[(), (), (), (), ()]);
6471
            drop(v);
6472
6473
            // Insert at middle.
6474
            let mut v = vec![(), (), ()];
6475
            <()>::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6476
            assert_eq!(v.len(), 4);
6477
            assert_eq!(&*v, &[(), (), (), ()]);
6478
            drop(v);
6479
6480
            // Insert at end.
6481
            let mut v = vec![(), (), ()];
6482
            <()>::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6483
            assert_eq!(v.len(), 4);
6484
            assert_eq!(&*v, &[(), (), (), ()]);
6485
            drop(v);
6486
        }
6487
6488
        #[test]
6489
        fn test_new_box_zeroed() {
6490
            assert_eq!(u64::new_box_zeroed(), Ok(Box::new(0)));
6491
        }
6492
6493
        #[test]
6494
        fn test_new_box_zeroed_array() {
6495
            drop(<[u32; 0x1000]>::new_box_zeroed());
6496
        }
6497
6498
        #[test]
6499
        fn test_new_box_zeroed_zst() {
6500
            // This test exists in order to exercise unsafe code, especially
6501
            // when running under Miri.
6502
            #[allow(clippy::unit_cmp)]
6503
            {
6504
                assert_eq!(<()>::new_box_zeroed(), Ok(Box::new(())));
6505
            }
6506
        }
6507
6508
        #[test]
6509
        fn test_new_box_zeroed_with_elems() {
6510
            let mut s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(3).unwrap();
6511
            assert_eq!(s.len(), 3);
6512
            assert_eq!(&*s, &[0, 0, 0]);
6513
            s[1] = 3;
6514
            assert_eq!(&*s, &[0, 3, 0]);
6515
        }
6516
6517
        #[test]
6518
        fn test_new_box_zeroed_with_elems_empty() {
6519
            let s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(0).unwrap();
6520
            assert_eq!(s.len(), 0);
6521
        }
6522
6523
        #[test]
6524
        fn test_new_box_zeroed_with_elems_zst() {
6525
            let mut s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(3).unwrap();
6526
            assert_eq!(s.len(), 3);
6527
            assert!(s.get(10).is_none());
6528
            // This test exists in order to exercise unsafe code, especially
6529
            // when running under Miri.
6530
            #[allow(clippy::unit_cmp)]
6531
            {
6532
                assert_eq!(s[1], ());
6533
            }
6534
            s[2] = ();
6535
        }
6536
6537
        #[test]
6538
        fn test_new_box_zeroed_with_elems_zst_empty() {
6539
            let s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(0).unwrap();
6540
            assert_eq!(s.len(), 0);
6541
        }
6542
6543
        #[test]
6544
        fn new_box_zeroed_with_elems_errors() {
6545
            assert_eq!(<[u16]>::new_box_zeroed_with_elems(usize::MAX), Err(AllocError));
6546
6547
            let max = <usize as core::convert::TryFrom<_>>::try_from(isize::MAX).unwrap();
6548
            assert_eq!(
6549
                <[u16]>::new_box_zeroed_with_elems((max / mem::size_of::<u16>()) + 1),
6550
                Err(AllocError)
6551
            );
6552
        }
6553
    }
6554
}
6555
6556
#[cfg(kani)]
6557
mod proofs {
6558
    use super::*;
6559
6560
    impl kani::Arbitrary for DstLayout {
6561
        fn any() -> Self {
6562
            let align: NonZeroUsize = kani::any();
6563
            let size_info: SizeInfo = kani::any();
6564
6565
            kani::assume(align.is_power_of_two());
6566
            kani::assume(align < DstLayout::THEORETICAL_MAX_ALIGN);
6567
6568
            // For testing purposes, we care most about instantiations of
6569
            // `DstLayout` that can correspond to actual Rust types. We use
6570
            // `Layout` to verify that our `DstLayout` satisfies the validity
6571
            // conditions of Rust layouts.
6572
            kani::assume(
6573
                match size_info {
6574
                    SizeInfo::Sized { size } => Layout::from_size_align(size, align.get()),
6575
                    SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size: _ }) => {
6576
                        // `SliceDst` cannot encode an exact size, but we know
6577
                        // it is at least `offset` bytes.
6578
                        Layout::from_size_align(offset, align.get())
6579
                    }
6580
                }
6581
                .is_ok(),
6582
            );
6583
6584
            Self { align, size_info }
6585
        }
6586
    }
6587
6588
    impl kani::Arbitrary for SizeInfo {
6589
        fn any() -> Self {
6590
            let is_sized: bool = kani::any();
6591
6592
            match is_sized {
6593
                true => {
6594
                    let size: usize = kani::any();
6595
6596
                    kani::assume(size <= isize::MAX as _);
6597
6598
                    SizeInfo::Sized { size }
6599
                }
6600
                false => SizeInfo::SliceDst(kani::any()),
6601
            }
6602
        }
6603
    }
6604
6605
    impl kani::Arbitrary for TrailingSliceLayout {
6606
        fn any() -> Self {
6607
            let elem_size: usize = kani::any();
6608
            let offset: usize = kani::any();
6609
6610
            kani::assume(elem_size < isize::MAX as _);
6611
            kani::assume(offset < isize::MAX as _);
6612
6613
            TrailingSliceLayout { elem_size, offset }
6614
        }
6615
    }
6616
6617
    #[kani::proof]
6618
    fn prove_dst_layout_extend() {
6619
        use crate::util::{max, min, padding_needed_for};
6620
6621
        let base: DstLayout = kani::any();
6622
        let field: DstLayout = kani::any();
6623
        let packed: Option<NonZeroUsize> = kani::any();
6624
6625
        if let Some(max_align) = packed {
6626
            kani::assume(max_align.is_power_of_two());
6627
            kani::assume(base.align <= max_align);
6628
        }
6629
6630
        // The base can only be extended if it's sized.
6631
        kani::assume(matches!(base.size_info, SizeInfo::Sized { .. }));
6632
        let base_size = if let SizeInfo::Sized { size } = base.size_info {
6633
            size
6634
        } else {
6635
            unreachable!();
6636
        };
6637
6638
        // Under the above conditions, `DstLayout::extend` will not panic.
6639
        let composite = base.extend(field, packed);
6640
6641
        // The field's alignment is clamped by `max_align` (i.e., the
6642
        // `packed` attribute, if any) [1].
6643
        //
6644
        // [1] Per https://doc.rust-lang.org/reference/type-layout.html#the-alignment-modifiers:
6645
        //
6646
        //   The alignments of each field, for the purpose of positioning
6647
        //   fields, is the smaller of the specified alignment and the
6648
        //   alignment of the field's type.
6649
        let field_align = min(field.align, packed.unwrap_or(DstLayout::THEORETICAL_MAX_ALIGN));
6650
6651
        // The struct's alignment is the maximum of its previous alignment and
6652
        // `field_align`.
6653
        assert_eq!(composite.align, max(base.align, field_align));
6654
6655
        // Compute the minimum amount of inter-field padding needed to
6656
        // satisfy the field's alignment, and the offset of the trailing field.
6657
        // [1]
6658
        //
6659
        // [1] Per https://doc.rust-lang.org/reference/type-layout.html#the-alignment-modifiers:
6660
        //
6661
        //   Inter-field padding is guaranteed to be the minimum required in
6662
        //   order to satisfy each field's (possibly altered) alignment.
6663
        let padding = padding_needed_for(base_size, field_align);
6664
        let offset = base_size + padding;
6665
6666
        // For testing purposes, we'll also construct `alloc::Layout`
6667
        // stand-ins for `DstLayout`, and show that `extend` behaves
6668
        // comparably on both types.
6669
        let base_analog = Layout::from_size_align(base_size, base.align.get()).unwrap();
6670
6671
        match field.size_info {
6672
            SizeInfo::Sized { size: field_size } => {
6673
                if let SizeInfo::Sized { size: composite_size } = composite.size_info {
6674
                    // If the trailing field is sized, the resulting layout will
6675
                    // be sized. Its size will be the sum of the size of the preceding
6676
                    // layout, the size of the new field, and the size of
6677
                    // inter-field padding between the two.
6678
                    assert_eq!(composite_size, offset + field_size);
6679
6680
                    let field_analog =
6681
                        Layout::from_size_align(field_size, field_align.get()).unwrap();
6682
6683
                    if let Ok((actual_composite, actual_offset)) = base_analog.extend(field_analog)
6684
                    {
6685
                        assert_eq!(actual_offset, offset);
6686
                        assert_eq!(actual_composite.size(), composite_size);
6687
                        assert_eq!(actual_composite.align(), composite.align.get());
6688
                    } else {
6689
                        // An error here reflects that the composite of `base`
6690
                        // and `field` cannot correspond to a real Rust type
6691
                        // fragment, because such a fragment would violate
6692
                        // the basic invariants of a valid Rust layout. At
6693
                        // the time of writing, `DstLayout` is a little more
6694
                        // permissive than `Layout`, so we don't assert
6695
                        // anything in this branch (e.g., unreachability).
6696
                    }
6697
                } else {
6698
                    panic!("The composite of two sized layouts must be sized.")
6699
                }
6700
            }
6701
            SizeInfo::SliceDst(TrailingSliceLayout {
6702
                offset: field_offset,
6703
                elem_size: field_elem_size,
6704
            }) => {
6705
                if let SizeInfo::SliceDst(TrailingSliceLayout {
6706
                    offset: composite_offset,
6707
                    elem_size: composite_elem_size,
6708
                }) = composite.size_info
6709
                {
6710
                    // The offset of the trailing slice component is the sum
6711
                    // of the offset of the trailing field and the trailing
6712
                    // slice offset within that field.
6713
                    assert_eq!(composite_offset, offset + field_offset);
6714
                    // The elem size is unchanged.
6715
                    assert_eq!(composite_elem_size, field_elem_size);
6716
6717
                    let field_analog =
6718
                        Layout::from_size_align(field_offset, field_align.get()).unwrap();
6719
6720
                    if let Ok((actual_composite, actual_offset)) = base_analog.extend(field_analog)
6721
                    {
6722
                        assert_eq!(actual_offset, offset);
6723
                        assert_eq!(actual_composite.size(), composite_offset);
6724
                        assert_eq!(actual_composite.align(), composite.align.get());
6725
                    } else {
6726
                        // An error here reflects that the composite of `base`
6727
                        // and `field` cannot correspond to a real Rust type
6728
                        // fragment, because such a fragment would violate
6729
                        // the basic invariants of a valid Rust layout. At
6730
                        // the time of writing, `DstLayout` is a little more
6731
                        // permissive than `Layout`, so we don't assert
6732
                        // anything in this branch (e.g., unreachability).
6733
                    }
6734
                } else {
6735
                    panic!("The extension of a layout with a DST must result in a DST.")
6736
                }
6737
            }
6738
        }
6739
    }
6740
6741
    #[kani::proof]
6742
    #[kani::should_panic]
6743
    fn prove_dst_layout_extend_dst_panics() {
6744
        let base: DstLayout = kani::any();
6745
        let field: DstLayout = kani::any();
6746
        let packed: Option<NonZeroUsize> = kani::any();
6747
6748
        if let Some(max_align) = packed {
6749
            kani::assume(max_align.is_power_of_two());
6750
            kani::assume(base.align <= max_align);
6751
        }
6752
6753
        kani::assume(matches!(base.size_info, SizeInfo::SliceDst(..)));
6754
6755
        let _ = base.extend(field, packed);
6756
    }
6757
6758
    #[kani::proof]
6759
    fn prove_dst_layout_pad_to_align() {
6760
        use crate::util::padding_needed_for;
6761
6762
        let layout: DstLayout = kani::any();
6763
6764
        let padded: DstLayout = layout.pad_to_align();
6765
6766
        // Calling `pad_to_align` does not alter the `DstLayout`'s alignment.
6767
        assert_eq!(padded.align, layout.align);
6768
6769
        if let SizeInfo::Sized { size: unpadded_size } = layout.size_info {
6770
            if let SizeInfo::Sized { size: padded_size } = padded.size_info {
6771
                // If the layout is sized, it will remain sized after padding is
6772
                // added. Its size will be the sum of its unpadded size and the
6773
                // trailing padding needed to satisfy its alignment
6774
                // requirements.
6775
                let padding = padding_needed_for(unpadded_size, layout.align);
6776
                assert_eq!(padded_size, unpadded_size + padding);
6777
6778
                // Prove that calling `DstLayout::pad_to_align` behaves
6779
                // identically to `Layout::pad_to_align`.
6780
                let layout_analog =
6781
                    Layout::from_size_align(unpadded_size, layout.align.get()).unwrap();
6782
                let padded_analog = layout_analog.pad_to_align();
6783
                assert_eq!(padded_analog.align(), layout.align.get());
6784
                assert_eq!(padded_analog.size(), padded_size);
6785
            } else {
6786
                panic!("The padding of a sized layout must result in a sized layout.")
6787
            }
6788
        } else {
6789
            // If the layout is a DST, padding cannot be statically added.
6790
            assert_eq!(padded.size_info, layout.size_info);
6791
        }
6792
    }
6793
}