Coverage Report

Created: 2025-11-16 06:37

/rust/registry/src/index.crates.io-1949cf8c6b5b557f/zerocopy-0.8.20/src/lib.rs
// Copyright 2018 The Fuchsia Authors
//
// Licensed under the 2-Clause BSD License <LICENSE-BSD or
// https://opensource.org/license/bsd-2-clause>, Apache License, Version 2.0
// <LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0>, or the MIT
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.
// This file may not be copied, modified, or distributed except according to
// those terms.

// After updating the following doc comment, make sure to run the following
// command to update `README.md` based on its contents:
//
//   cargo -q run --manifest-path tools/Cargo.toml -p generate-readme > README.md

//! *<span style="font-size: 100%; color:grey;">Need more out of zerocopy?
//! Submit a [customer request issue][customer-request-issue]!</span>*
//!
//! ***<span style="font-size: 140%">Fast, safe, <span
//! style="color:red;">compile error</span>. Pick two.</span>***
//!
//! Zerocopy makes zero-cost memory manipulation effortless. We write `unsafe`
//! so you don't have to.
//!
//! *Thanks for using zerocopy 0.8! For an overview of what changed from 0.7,
//! check out our [release notes][release-notes], which include a step-by-step
//! guide for upgrading from 0.7.*
//!
//! *Have questions? Need help? Ask the maintainers on [GitHub][github-q-a] or
//! on [Discord][discord]!*
//!
//! [customer-request-issue]: https://github.com/google/zerocopy/issues/new/choose
//! [release-notes]: https://github.com/google/zerocopy/discussions/1680
//! [github-q-a]: https://github.com/google/zerocopy/discussions/categories/q-a
//! [discord]: https://discord.gg/MAvWH2R6zk
//!
//! # Overview
//!
//! ##### Conversion Traits
//!
//! Zerocopy provides four derivable traits for zero-cost conversions:
//! - [`TryFromBytes`] indicates that a type may safely be converted from
//!   certain byte sequences (conditional on runtime checks)
//! - [`FromZeros`] indicates that a sequence of zero bytes represents a valid
//!   instance of a type
//! - [`FromBytes`] indicates that a type may safely be converted from an
//!   arbitrary byte sequence
//! - [`IntoBytes`] indicates that a type may safely be converted *to* a byte
//!   sequence
//!
//! These traits support sized types, slices, and [slice DSTs][slice-dsts].
//!
//! [slice-dsts]: KnownLayout#dynamically-sized-types
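For intuition, here is a stdlib-only sketch of the kind of conversion these derives automate for one hand-picked, fixed-layout type (this does not use zerocopy itself, and `PacketHeader`, `read_from`, and `to_bytes` are hypothetical names; zerocopy's derives generalize this to arbitrary eligible types and avoid the per-field copies for reference conversions):

```rust
// What `FromBytes`/`IntoBytes` automate, written out by hand for one
// fixed-layout type using only the standard library.
#[derive(Debug, PartialEq)]
#[repr(C)]
struct PacketHeader {
    src_port: [u8; 2],
    dst_port: [u8; 2],
}

impl PacketHeader {
    // Analogous to a fallible from-bytes conversion: fails if the
    // slice has the wrong length.
    fn read_from(bytes: &[u8]) -> Option<Self> {
        let bytes: &[u8; 4] = bytes.try_into().ok()?;
        Some(PacketHeader {
            src_port: [bytes[0], bytes[1]],
            dst_port: [bytes[2], bytes[3]],
        })
    }

    // Analogous to a to-bytes conversion (here by copy, rather than by
    // reinterpreting the referent in place as zerocopy can).
    fn to_bytes(&self) -> [u8; 4] {
        [self.src_port[0], self.src_port[1], self.dst_port[0], self.dst_port[1]]
    }
}

fn main() {
    let header = PacketHeader::read_from(&[0, 80, 1, 187]).unwrap();
    assert_eq!(header.src_port, [0, 80]);
    assert_eq!(header.to_bytes(), [0, 80, 1, 187]);
    // A slice of the wrong length is rejected at runtime.
    assert_eq!(PacketHeader::read_from(&[0, 80]), None);
}
```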
//!
//! ##### Marker Traits
//!
//! Zerocopy provides three derivable marker traits that do not provide any
//! functionality themselves, but are required to call certain methods provided
//! by the conversion traits:
//! - [`KnownLayout`] indicates that zerocopy can reason about certain layout
//!   qualities of a type
//! - [`Immutable`] indicates that a type is free from interior mutability,
//!   except by ownership or an exclusive (`&mut`) borrow
//! - [`Unaligned`] indicates that a type's alignment requirement is 1
//!
//! You should generally derive these marker traits whenever possible.
//!
//! ##### Conversion Macros
//!
//! Zerocopy provides six macros for safe casting between types:
//!
//! - ([`try_`][try_transmute])[`transmute`] (conditionally) converts a value of
//!   one type to a value of another type of the same size
//! - ([`try_`][try_transmute_mut])[`transmute_mut`] (conditionally) converts a
//!   mutable reference of one type to a mutable reference of another type of
//!   the same size
//! - ([`try_`][try_transmute_ref])[`transmute_ref`] (conditionally) converts a
//!   mutable or immutable reference of one type to an immutable reference of
//!   another type of the same size
//!
//! These macros perform *compile-time* size and alignment checks, meaning that
//! unconditional casts have zero cost at runtime. Conditional casts do not need
//! to validate size or alignment at runtime, but do need to validate contents.
//!
//! These macros cannot be used in generic contexts. For generic conversions,
//! use the methods defined by the [conversion traits](#conversion-traits).
//!
//! ##### Byteorder-Aware Numerics
//!
//! Zerocopy provides byte-order aware integer types that support these
//! conversions; see the [`byteorder`] module. These types are especially useful
//! for network parsing.
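As a stdlib-only sketch of the problem these types solve: with plain integers, byte-order conversion is a call-site obligation you can forget, whereas zerocopy's byteorder-aware types bake the wire representation into the type itself.

```rust
// Parsing a big-endian u32 from the wire with only the standard
// library. zerocopy's `byteorder` module instead provides types that
// store the big-endian representation in place, so no conversion call
// can be forgotten at a use site.
fn main() {
    let wire: [u8; 4] = [0x00, 0x00, 0x01, 0x00];
    let value = u32::from_be_bytes(wire);
    assert_eq!(value, 256);
    // Round-trip back to network byte order.
    assert_eq!(value.to_be_bytes(), wire);
}
```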
//!
//! # Cargo Features
//!
//! - **`alloc`**
//!   By default, `zerocopy` is `no_std`. When the `alloc` feature is enabled,
//!   the `alloc` crate is added as a dependency, and some allocation-related
//!   functionality is added.
//!
//! - **`std`**
//!   By default, `zerocopy` is `no_std`. When the `std` feature is enabled, the
//!   `std` crate is added as a dependency (i.e., `no_std` is disabled), and
//!   support for some `std` types is added. `std` implies `alloc`.
//!
//! - **`derive`**
//!   Provides derives for the core marker traits via the `zerocopy-derive`
//!   crate. These derives are re-exported from `zerocopy`, so it is not
//!   necessary to depend on `zerocopy-derive` directly.
//!
//!   However, you may experience better compile times if you instead directly
//!   depend on both `zerocopy` and `zerocopy-derive` in your `Cargo.toml`,
//!   since doing so will allow Rust to compile these crates in parallel. To do
//!   so, do *not* enable the `derive` feature, and list both dependencies in
//!   your `Cargo.toml` with the same leading non-zero version number; e.g.:
//!
//!   ```toml
//!   [dependencies]
//!   zerocopy = "0.X"
//!   zerocopy-derive = "0.X"
//!   ```
//!
//!   To avoid the risk of [duplicate import errors][duplicate-import-errors] if
//!   one of your dependencies enables zerocopy's `derive` feature, import
//!   derives as `use zerocopy_derive::*` rather than by name (e.g., `use
//!   zerocopy_derive::FromBytes`).
//!
//! - **`simd`**
//!   When the `simd` feature is enabled, `FromZeros`, `FromBytes`, and
//!   `IntoBytes` impls are emitted for all stable SIMD types which exist on the
//!   target platform. Note that the layout of SIMD types is not yet stabilized,
//!   so these impls may be removed in the future if layout changes make them
//!   invalid. For more information, see the Unsafe Code Guidelines Reference
//!   page on the [layout of packed SIMD vectors][simd-layout].
//!
//! - **`simd-nightly`**
//!   Enables the `simd` feature and adds support for SIMD types which are only
//!   available on nightly. Since these types are unstable, support for any type
//!   may be removed at any point in the future.
//!
//! - **`float-nightly`**
//!   Adds support for the unstable `f16` and `f128` types. These types are
//!   not yet fully implemented and may not be supported on all platforms.
//!
//! [duplicate-import-errors]: https://github.com/google/zerocopy/issues/1587
//! [simd-layout]: https://rust-lang.github.io/unsafe-code-guidelines/layout/packed-simd-vectors.html
//!
//! # Security Ethos
//!
//! Zerocopy is expressly designed for use in security-critical contexts. We
//! strive to ensure that zerocopy code is sound under Rust's current
//! memory model, and *any future memory model*. We ensure this by:
//! - **...not 'guessing' about Rust's semantics.**
//!   We annotate `unsafe` code with a precise rationale for its soundness that
//!   cites a relevant section of Rust's official documentation. When Rust's
//!   documented semantics are unclear, we work with the Rust Operational
//!   Semantics Team to clarify Rust's documentation.
//! - **...rigorously testing our implementation.**
//!   We run tests using [Miri], ensuring that zerocopy is sound across a wide
//!   array of supported target platforms of varying endianness and pointer
//!   width, and across both current and experimental memory models of Rust.
//! - **...formally proving the correctness of our implementation.**
//!   We apply formal verification tools like [Kani][kani] to prove zerocopy's
//!   correctness.
//!
//! For more information, see our full [soundness policy].
//!
//! [Miri]: https://github.com/rust-lang/miri
//! [kani]: https://github.com/model-checking/kani
//! [soundness policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#soundness
//!
//! # Relationship to Project Safe Transmute
//!
//! [Project Safe Transmute] is an official initiative of the Rust Project to
//! develop language-level support for safer transmutation. The Project consults
//! with crates like zerocopy to identify aspects of safer transmutation that
//! would benefit from compiler support, and has developed an [experimental,
//! compiler-supported analysis][mcp-transmutability] which determines whether,
//! for a given type, any value of that type may be soundly transmuted into
//! another type. Once this functionality is sufficiently mature, zerocopy
//! intends to replace its internal transmutability analysis (implemented by our
//! custom derives) with the compiler-supported one. This change will likely be
//! an implementation detail that is invisible to zerocopy's users.
//!
//! Project Safe Transmute will not replace the need for most of zerocopy's
//! higher-level abstractions. The experimental compiler analysis is a tool for
//! checking the soundness of `unsafe` code, not a tool to avoid writing
//! `unsafe` code altogether. For the foreseeable future, crates like zerocopy
//! will still be required in order to provide higher-level abstractions on top
//! of the building blocks provided by Project Safe Transmute.
//!
//! [Project Safe Transmute]: https://rust-lang.github.io/rfcs/2835-project-safe-transmute.html
//! [mcp-transmutability]: https://github.com/rust-lang/compiler-team/issues/411
//!
//! # MSRV
//!
//! See our [MSRV policy].
//!
//! [MSRV policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#msrv
//!
//! # Changelog
//!
//! Zerocopy uses [GitHub Releases].
//!
//! [GitHub Releases]: https://github.com/google/zerocopy/releases
//!
//! # Thanks
//!
//! Zerocopy is maintained by engineers at Google and Amazon with help from
//! [many wonderful contributors][contributors]. Thank you to everyone who has
//! lent a hand in making Rust a little more secure!
//!
//! [contributors]: https://github.com/google/zerocopy/graphs/contributors

// Sometimes we want to use lints which were added after our MSRV.
// `unknown_lints` is `warn` by default and we deny warnings in CI, so without
// this attribute, any unknown lint would cause a CI failure when testing with
// our MSRV.
#![allow(unknown_lints, non_local_definitions, unreachable_patterns)]
#![deny(renamed_and_removed_lints)]
#![deny(
    anonymous_parameters,
    deprecated_in_future,
    late_bound_lifetime_arguments,
    missing_copy_implementations,
    missing_debug_implementations,
    missing_docs,
    path_statements,
    patterns_in_fns_without_body,
    rust_2018_idioms,
    trivial_numeric_casts,
    unreachable_pub,
    unsafe_op_in_unsafe_fn,
    unused_extern_crates,
    // We intentionally choose not to deny `unused_qualifications`. When items
    // are added to the prelude (e.g., `core::mem::size_of`), this has the
    // consequence of making some uses trigger this lint on the latest toolchain
    // (e.g., `mem::size_of`), but fixing it (e.g. by replacing with `size_of`)
    // does not work on older toolchains.
    //
    // We tested a more complicated fix in #1413, but ultimately decided that,
    // since this lint is just a minor style lint, the complexity isn't worth it
    // - it's fine to occasionally have unused qualifications slip through,
    // especially since these do not affect our user-facing API in any way.
    variant_size_differences
)]
#![cfg_attr(
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
    deny(fuzzy_provenance_casts, lossy_provenance_casts)
)]
#![deny(
    clippy::all,
    clippy::alloc_instead_of_core,
    clippy::arithmetic_side_effects,
    clippy::as_underscore,
    clippy::assertions_on_result_states,
    clippy::as_conversions,
    clippy::correctness,
    clippy::dbg_macro,
    clippy::decimal_literal_representation,
    clippy::double_must_use,
    clippy::get_unwrap,
    clippy::indexing_slicing,
    clippy::missing_inline_in_public_items,
    clippy::missing_safety_doc,
    clippy::must_use_candidate,
    clippy::must_use_unit,
    clippy::obfuscated_if_else,
    clippy::perf,
    clippy::print_stdout,
    clippy::return_self_not_must_use,
    clippy::std_instead_of_core,
    clippy::style,
    clippy::suspicious,
    clippy::todo,
    clippy::undocumented_unsafe_blocks,
    clippy::unimplemented,
    clippy::unnested_or_patterns,
    clippy::unwrap_used,
    clippy::use_debug
)]
#![allow(clippy::type_complexity)]
#![deny(
    rustdoc::bare_urls,
    rustdoc::broken_intra_doc_links,
    rustdoc::invalid_codeblock_attributes,
    rustdoc::invalid_html_tags,
    rustdoc::invalid_rust_codeblocks,
    rustdoc::missing_crate_level_docs,
    rustdoc::private_intra_doc_links
)]
// In test code, it makes sense to weight more heavily towards concise, readable
// code over correct or debuggable code.
#![cfg_attr(any(test, kani), allow(
    // In tests, you get line numbers and have access to source code, so panic
    // messages are less important. You also often unwrap a lot, which would
    // make expect'ing instead very verbose.
    clippy::unwrap_used,
    // In tests, there's no harm to "panic risks" - the worst that can happen is
    // that your test will fail, and you'll fix it. By contrast, panic risks in
    // production code introduce the possibility of code panicking unexpectedly
    // "in the field".
    clippy::arithmetic_side_effects,
    clippy::indexing_slicing,
))]
#![cfg_attr(not(any(test, feature = "std")), no_std)]
#![cfg_attr(
    all(feature = "simd-nightly", any(target_arch = "x86", target_arch = "x86_64")),
    feature(stdarch_x86_avx512)
)]
#![cfg_attr(
    all(feature = "simd-nightly", target_arch = "arm"),
    feature(stdarch_arm_dsp, stdarch_arm_neon_intrinsics)
)]
#![cfg_attr(
    all(feature = "simd-nightly", any(target_arch = "powerpc", target_arch = "powerpc64")),
    feature(stdarch_powerpc)
)]
#![cfg_attr(feature = "float-nightly", feature(f16, f128))]
#![cfg_attr(doc_cfg, feature(doc_cfg))]
#![cfg_attr(
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
    feature(layout_for_ptr, coverage_attribute)
)]

// This is a hack to allow zerocopy-derive derives to work in this crate. They
// assume that zerocopy is linked as an extern crate, so they access items from
// it as `zerocopy::Xxx`. This makes that still work.
#[cfg(any(feature = "derive", test))]
extern crate self as zerocopy;

#[doc(hidden)]
#[macro_use]
pub mod util;

pub mod byte_slice;
pub mod byteorder;
mod deprecated;
// This module is `pub` so that zerocopy's error types and error handling
// documentation is grouped together in a cohesive module. In practice, we
// expect most users to use the re-export of `error`'s items to avoid identifier
// stuttering.
pub mod error;
mod impls;
#[doc(hidden)]
pub mod layout;
mod macros;
#[doc(hidden)]
pub mod pointer;
mod r#ref;
// TODO(#252): If we make this pub, come up with a better name.
mod wrappers;

pub use crate::byte_slice::*;
pub use crate::byteorder::*;
pub use crate::error::*;
pub use crate::r#ref::*;
pub use crate::wrappers::*;

use core::{
    cell::UnsafeCell,
    cmp::Ordering,
    fmt::{self, Debug, Display, Formatter},
    hash::Hasher,
    marker::PhantomData,
    mem::{self, ManuallyDrop, MaybeUninit as CoreMaybeUninit},
    num::{
        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize, Wrapping,
    },
    ops::{Deref, DerefMut},
    ptr::{self, NonNull},
    slice,
};

#[cfg(feature = "std")]
use std::io;

use crate::pointer::{invariant, BecauseExclusive};

#[cfg(any(feature = "alloc", test))]
extern crate alloc;
#[cfg(any(feature = "alloc", test))]
use alloc::{boxed::Box, vec::Vec};

#[cfg(any(feature = "alloc", test, kani))]
use core::alloc::Layout;

// Used by `TryFromBytes::is_bit_valid`.
#[doc(hidden)]
pub use crate::pointer::{BecauseImmutable, Maybe, MaybeAligned, Ptr};
// Used by `KnownLayout`.
#[doc(hidden)]
pub use crate::layout::*;

// For each trait polyfill, as soon as the corresponding feature is stable, the
// polyfill import will be unused because method/function resolution will prefer
// the inherent method/function over a trait method/function. Thus, we suppress
// the `unused_imports` warning.
//
// See the documentation on `util::polyfills` for more information.
#[allow(unused_imports)]
use crate::util::polyfills::{self, NonNullExt as _, NumExt as _};

#[rustversion::nightly]
#[cfg(all(test, not(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS)))]
const _: () = {
    #[deprecated = "some tests may be skipped due to missing RUSTFLAGS=\"--cfg __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS\""]
    const _WARNING: () = ();
    #[warn(deprecated)]
    _WARNING
};

// These exist so that code which was written against the old names will get
// less confusing error messages when they upgrade to a more recent version of
// zerocopy. On our MSRV toolchain, the error messages read, for example:
//
//   error[E0603]: trait `FromZeroes` is private
//       --> examples/deprecated.rs:1:15
//        |
//   1    | use zerocopy::FromZeroes;
//        |               ^^^^^^^^^^ private trait
//        |
//   note: the trait `FromZeroes` is defined here
//       --> /Users/josh/workspace/zerocopy/src/lib.rs:1845:5
//        |
//   1845 | use FromZeros as FromZeroes;
//        |     ^^^^^^^^^^^^^^^^^^^^^^^
//
// The "note" provides enough context to make it easy to figure out how to fix
// the error.
#[allow(unused)]
use {FromZeros as FromZeroes, IntoBytes as AsBytes, Ref as LayoutVerified};

/// Implements [`KnownLayout`].
///
/// This derive analyzes various aspects of a type's layout that are needed for
/// some of zerocopy's APIs. It can be applied to structs, enums, and unions;
/// e.g.:
///
/// ```
/// # use zerocopy_derive::KnownLayout;
/// #[derive(KnownLayout)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Limitations
///
/// This derive cannot currently be applied to unsized structs without an
/// explicit `repr` attribute.
///
/// Some invocations of this derive run afoul of a [known bug] in Rust's type
/// privacy checker. For example, this code:
///
/// ```compile_fail,E0446
/// use zerocopy::*;
/// # use zerocopy_derive::*;
///
/// #[derive(KnownLayout)]
/// #[repr(C)]
/// pub struct PublicType {
///     leading: Foo,
///     trailing: Bar,
/// }
///
/// #[derive(KnownLayout)]
/// struct Foo;
///
/// #[derive(KnownLayout)]
/// struct Bar;
/// ```
///
/// ...results in a compilation error:
///
/// ```text
/// error[E0446]: private type `Bar` in public interface
///  --> examples/bug.rs:3:10
///    |
/// 3  | #[derive(KnownLayout)]
///    |          ^^^^^^^^^^^ can't leak private type
/// ...
/// 14 | struct Bar;
///    | ---------- `Bar` declared as private
///    |
///    = note: this error originates in the derive macro `KnownLayout` (in Nightly builds, run with -Z macro-backtrace for more info)
/// ```
///
/// This issue arises when `#[derive(KnownLayout)]` is applied to `repr(C)`
/// structs whose trailing field type is less public than the enclosing struct.
///
/// To work around this, mark the trailing field type `pub` and annotate it with
/// `#[doc(hidden)]`; e.g.:
///
/// ```no_run
/// use zerocopy::*;
/// # use zerocopy_derive::*;
///
/// #[derive(KnownLayout)]
/// #[repr(C)]
/// pub struct PublicType {
///     leading: Foo,
///     trailing: Bar,
/// }
///
/// #[derive(KnownLayout)]
/// struct Foo;
///
/// #[doc(hidden)]
/// #[derive(KnownLayout)]
/// pub struct Bar; // <- `Bar` is now also `pub`
/// ```
///
/// [known bug]: https://github.com/rust-lang/rust/issues/45713
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::KnownLayout;

/// Indicates that zerocopy can reason about certain aspects of a type's layout.
///
/// This trait is required by many of zerocopy's APIs. It supports sized types,
/// slices, and [slice DSTs](#dynamically-sized-types).
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(KnownLayout)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::KnownLayout;
/// #[derive(KnownLayout)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// enum MyEnum {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(KnownLayout)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated analysis to deduce the layout
/// characteristics of types. You **must** implement this trait via the derive.
///
/// # Dynamically-sized types
///
/// `KnownLayout` supports slice-based dynamically sized types ("slice DSTs").
///
/// A slice DST is a type whose trailing field is either a slice or another
/// slice DST, rather than a type with fixed size. For example:
///
/// ```
/// #[repr(C)]
/// struct PacketHeader {
/// # /*
///     ...
/// # */
/// }
///
/// #[repr(C)]
/// struct Packet {
///     header: PacketHeader,
///     body: [u8],
/// }
/// ```
///
/// It can be useful to think of slice DSTs as a generalization of slices - in
/// other words, a normal slice is just the special case of a slice DST with
/// zero leading fields. In particular:
/// - Like slices, slice DSTs can have different lengths at runtime
/// - Like slices, slice DSTs cannot be passed by-value, but only by reference
///   or via other indirection such as `Box`
/// - Like slices, a reference (or `Box`, or other pointer type) to a slice DST
///   encodes the number of elements in the trailing slice field
///
/// ## Slice DST layout
///
/// Just like other composite Rust types, the layout of a slice DST is not
/// well-defined unless it is specified using an explicit `#[repr(...)]`
/// attribute such as `#[repr(C)]`. [Other representations are
/// supported][reprs], but in this section, we'll use `#[repr(C)]` as our
/// example.
///
/// A `#[repr(C)]` slice DST is laid out [just like sized `#[repr(C)]`
/// types][repr-c-structs], but the presence of a variable-length field
/// introduces the possibility of *dynamic padding*. In particular, it may be
/// necessary to add trailing padding *after* the trailing slice field in order
/// to satisfy the outer type's alignment, and the amount of padding required
/// may be a function of the length of the trailing slice field. This is just a
/// natural consequence of the normal `#[repr(C)]` rules applied to slice DSTs,
/// but it can result in surprising behavior. For example, consider the
/// following type:
///
/// ```
/// #[repr(C)]
/// struct Foo {
///     a: u32,
///     b: u8,
///     z: [u16],
/// }
/// ```
///
/// Assuming that `u32` has alignment 4 (this is not true on all platforms),
/// then `Foo` has alignment 4 as well. Here is the smallest possible value for
/// `Foo`:
///
/// ```text
/// byte offset | 01234567
///       field | aaaab---
///                    ><
/// ```
///
/// In this value, `z` has length 0. Abiding by `#[repr(C)]`, the lowest offset
/// that we can place `z` at is 5, but since `z` has alignment 2, we need to
/// round up to offset 6. This means that there is one byte of padding between
/// `b` and `z`, then 0 bytes of `z` itself (denoted `><` in this diagram), and
/// then two bytes of padding after `z` in order to satisfy the overall
/// alignment of `Foo`. The size of this instance is 8 bytes.
///
/// What about if `z` has length 1?
///
/// ```text
/// byte offset | 01234567
///       field | aaaab-zz
/// ```
///
/// In this instance, `z` has length 1, and thus takes up 2 bytes. That means
/// that we no longer need padding after `z` in order to satisfy `Foo`'s
/// alignment. We've now seen two different values of `Foo` with two different
/// lengths of `z`, but they both have the same size - 8 bytes.
///
/// What about if `z` has length 2?
///
/// ```text
/// byte offset | 012345678901
///       field | aaaab-zzzz--
/// ```
///
/// Now `z` has length 2, and thus takes up 4 bytes. This brings our un-padded
/// size to 10, and so we now need another 2 bytes of padding after `z` to
/// satisfy `Foo`'s alignment.
///
/// Again, all of this is just a logical consequence of the `#[repr(C)]` rules
/// applied to slice DSTs, but it can be surprising that the amount of trailing
/// padding becomes a function of the trailing slice field's length, and thus
/// can only be computed at runtime.
///
/// [reprs]: https://doc.rust-lang.org/reference/type-layout.html#representations
/// [repr-c-structs]: https://doc.rust-lang.org/reference/type-layout.html#reprc-structs
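The arithmetic in the walkthrough above can be checked directly. This is a stdlib-only sketch: `foo_size` is a hypothetical helper, and the constants (slice offset 6, element size 2, alignment 4) are the values derived for the `Foo` example in this section.

```rust
// Size of the `Foo` slice DST from the example above, as a function of
// its trailing slice length: the offset of the slice field, plus the
// slice bytes, rounded up to the type's alignment.
fn foo_size(len: usize) -> usize {
    const SLICE_OFFSET: usize = 6; // `z` starts at offset 6
    const ELEM_SIZE: usize = 2; // size of `u16`
    const ALIGN: usize = 4; // alignment of `Foo` (assuming `u32` has alignment 4)
    let unpadded = SLICE_OFFSET + ELEM_SIZE * len;
    // Round up to the nearest multiple of `ALIGN`.
    (unpadded + ALIGN - 1) / ALIGN * ALIGN
}

fn main() {
    assert_eq!(foo_size(0), 8); // 6 bytes + 2 bytes of trailing padding
    assert_eq!(foo_size(1), 8); // 8 bytes, no trailing padding needed
    assert_eq!(foo_size(2), 12); // 10 bytes + 2 bytes of trailing padding
}
```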
///
/// ## What is a valid size?
///
/// There are two places in zerocopy's API that we refer to "a valid size" of a
/// type. In normal casts or conversions, where the source is a byte slice, we
/// need to know whether the source byte slice is a valid size of the
/// destination type. In prefix or suffix casts, we need to know whether *there
/// exists* a valid size of the destination type which fits in the source byte
/// slice and, if so, what the largest such size is.
///
/// As outlined above, a slice DST's size is defined by the number of elements
/// in its trailing slice field. However, there is not necessarily a 1-to-1
/// mapping between trailing slice field length and overall size. As we saw in
/// the previous section with the type `Foo`, instances with both 0 and 1
/// elements in the trailing `z` field result in a `Foo` whose size is 8 bytes.
///
/// When we say "x is a valid size of `T`", we mean one of two things:
/// - If `T: Sized`, then we mean that `x == size_of::<T>()`
/// - If `T` is a slice DST, then we mean that there exists a `len` such that
///   the instance of `T` with `len` trailing slice elements has size `x`
///
/// When we say "largest possible size of `T` that fits in a byte slice", we
/// mean one of two things:
/// - If `T: Sized`, then we mean `size_of::<T>()` if the byte slice is at least
///   `size_of::<T>()` bytes long
/// - If `T` is a slice DST, then we mean to consider all values, `len`, such
///   that the instance of `T` with `len` trailing slice elements fits in the
///   byte slice, and to choose the largest such `len`, if any
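That largest fitting `len` can also be computed directly for the `Foo` example. This is a stdlib-only sketch of the definition above, not zerocopy's implementation; `foo_max_len` and its constants are the hypothetical `Foo` values (slice offset 6, element size 2, alignment 4) from the previous section.

```rust
// Largest number of trailing `u16` elements of the `Foo` example whose
// instance fits in a byte slice of length `bytes`, if any.
fn foo_max_len(bytes: usize) -> Option<usize> {
    const SLICE_OFFSET: usize = 6;
    const ELEM_SIZE: usize = 2;
    const ALIGN: usize = 4;
    // An instance's size is its unpadded size rounded up to `ALIGN`,
    // so an instance fits iff its unpadded size is no larger than the
    // largest multiple of `ALIGN` that is <= `bytes`.
    let usable = bytes / ALIGN * ALIGN;
    let avail = usable.checked_sub(SLICE_OFFSET)?;
    Some(avail / ELEM_SIZE)
}

fn main() {
    assert_eq!(foo_max_len(5), None); // too small even for `len == 0` (size 8)
    assert_eq!(foo_max_len(10), Some(1)); // `len == 2` has size 12, too big
    assert_eq!(foo_max_len(13), Some(3)); // `len == 3` has size 12, which fits
}
```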
708
///
709
///
710
/// # Safety
711
///
712
/// This trait does not convey any safety guarantees to code outside this crate.
713
///
714
/// You must not rely on the `#[doc(hidden)]` internals of `KnownLayout`. Future
715
/// releases of zerocopy may make backwards-breaking changes to these items,
716
/// including changes that only affect soundness, which may cause code which
717
/// uses those items to silently become unsound.
718
///
719
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::KnownLayout")]
720
#[cfg_attr(
721
    not(feature = "derive"),
722
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.KnownLayout.html"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(KnownLayout)]` to `{Self}`")
)]
pub unsafe trait KnownLayout {
    // The `Self: Sized` bound makes it so that `KnownLayout` can still be
    // object safe. It's not currently object safe thanks to `const LAYOUT`, and
    // it likely won't be in the future, but there's no reason not to be
    // forwards-compatible with object safety.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// The type of metadata stored in a pointer to `Self`.
    ///
    /// This is `()` for sized types and `usize` for slice DSTs.
    type PointerMetadata: PointerMetadata;

    /// A maybe-uninitialized analog of `Self`.
    ///
    /// # Safety
    ///
    /// `Self::LAYOUT` and `Self::MaybeUninit::LAYOUT` are identical.
    /// `Self::MaybeUninit` admits uninitialized bytes in all positions.
    #[doc(hidden)]
    type MaybeUninit: ?Sized + KnownLayout<PointerMetadata = Self::PointerMetadata>;

    /// The layout of `Self`.
    ///
    /// # Safety
    ///
    /// Callers may assume that `LAYOUT` accurately reflects the layout of
    /// `Self`. In particular:
    /// - `LAYOUT.align` is equal to `Self`'s alignment
    /// - If `Self: Sized`, then `LAYOUT.size_info == SizeInfo::Sized { size }`
    ///   where `size == size_of::<Self>()`
    /// - If `Self` is a slice DST, then `LAYOUT.size_info ==
    ///   SizeInfo::SliceDst(slice_layout)` where:
    ///   - The size, `size`, of an instance of `Self` with `elems` trailing
    ///     slice elements is equal to `slice_layout.offset +
    ///     slice_layout.elem_size * elems` rounded up to the nearest multiple
    ///     of `LAYOUT.align`
    ///   - For such an instance, any bytes in the range `[slice_layout.offset +
    ///     slice_layout.elem_size * elems, size)` are padding and must not be
    ///     assumed to be initialized
    #[doc(hidden)]
    const LAYOUT: DstLayout;

    /// SAFETY: The returned pointer has the same address and provenance as
    /// `bytes`. If `Self` is a DST, the returned pointer's referent has `elems`
    /// elements in its trailing slice.
    #[doc(hidden)]
    fn raw_from_ptr_len(bytes: NonNull<u8>, meta: Self::PointerMetadata) -> NonNull<Self>;

    /// Extracts the metadata from a pointer to `Self`.
    ///
    /// # Safety
    ///
    /// `pointer_to_metadata` always returns the correct metadata stored in
    /// `ptr`.
    #[doc(hidden)]
    fn pointer_to_metadata(ptr: *mut Self) -> Self::PointerMetadata;

    /// Computes the length of the byte range addressed by `ptr`.
    ///
    /// Returns `None` if the resulting length would not fit in a `usize`.
    ///
    /// # Safety
    ///
    /// Callers may assume that `size_of_val_raw` always returns the correct
    /// size.
    ///
    /// Callers may assume that, if `ptr` addresses a byte range whose length
    /// fits in a `usize`, this will return `Some`.
    #[doc(hidden)]
    #[must_use]
    #[inline(always)]
    fn size_of_val_raw(ptr: NonNull<Self>) -> Option<usize> {
        let meta = Self::pointer_to_metadata(ptr.as_ptr());
        // SAFETY: `size_for_metadata` promises to only return `None` if the
        // resulting size would not fit in a `usize`.
        meta.size_for_metadata(Self::LAYOUT)
    }
}

/// The metadata associated with a [`KnownLayout`] type.
#[doc(hidden)]
pub trait PointerMetadata: Copy + Eq + Debug {
    /// Constructs a `Self` from an element count.
    ///
    /// If `Self = ()`, this returns `()`. If `Self = usize`, this returns
    /// `elems`. No other types are currently supported.
    fn from_elem_count(elems: usize) -> Self;

    /// Computes the size of the object with the given layout and pointer
    /// metadata.
    ///
    /// # Panics
    ///
    /// If `Self = ()`, `layout` must describe a sized type. If `Self = usize`,
    /// `layout` must describe a slice DST. Otherwise, `size_for_metadata` may
    /// panic.
    ///
    /// # Safety
    ///
    /// `size_for_metadata` promises to only return `None` if the resulting size
    /// would not fit in a `usize`.
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize>;
}

impl PointerMetadata for () {
    #[inline]
    #[allow(clippy::unused_unit)]
    fn from_elem_count(_elems: usize) -> () {}

    #[inline]
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
        match layout.size_info {
            SizeInfo::Sized { size } => Some(size),
            // NOTE: This branch is unreachable, but we return `None` rather
            // than `unreachable!()` to avoid generating panic paths.
            SizeInfo::SliceDst(_) => None,
        }
    }
}

impl PointerMetadata for usize {
    #[inline]
    fn from_elem_count(elems: usize) -> usize {
        elems
    }

    #[inline]
    fn size_for_metadata(&self, layout: DstLayout) -> Option<usize> {
        match layout.size_info {
            SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }) => {
                let slice_len = elem_size.checked_mul(*self)?;
                let without_padding = offset.checked_add(slice_len)?;
                without_padding.checked_add(util::padding_needed_for(without_padding, layout.align))
            }
            // NOTE: This branch is unreachable, but we return `None` rather
            // than `unreachable!()` to avoid generating panic paths.
            SizeInfo::Sized { .. } => None,
        }
    }
}
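As a standalone sketch (not part of this crate), the checked arithmetic that `<usize as PointerMetadata>::size_for_metadata` performs for a slice DST can be exercised in isolation. The `offset`, `elem_size`, and `align` values below are hypothetical layout parameters, and the rounding step stands in for `util::padding_needed_for`:

```rust
/// Sketch of the slice-DST size computation: `offset + elem_size * elems`,
/// rounded up to the nearest multiple of `align` (a power of two), with
/// every step checked for overflow.
fn dst_size(offset: usize, elem_size: usize, align: usize, elems: usize) -> Option<usize> {
    let slice_len = elem_size.checked_mul(elems)?;
    let without_padding = offset.checked_add(slice_len)?;
    // Round up to a multiple of `align`; this mirrors `padding_needed_for`.
    let padding = without_padding.wrapping_neg() & (align - 1);
    without_padding.checked_add(padding)
}

fn main() {
    // e.g. `struct S { header: u32, data: [u16] }` with 7 trailing elements:
    // 4 + 2 * 7 = 18 bytes, rounded up to alignment 4 => 20.
    assert_eq!(dst_size(4, 2, 4, 7), Some(20));
    // Overflow in the element-count multiplication propagates as `None`.
    assert_eq!(dst_size(8, 8, 8, usize::MAX), None);
}
```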

// SAFETY: Delegates safety to `DstLayout::for_slice`.
unsafe impl<T> KnownLayout for [T] {
    #[allow(clippy::missing_inline_in_public_items)]
    #[cfg_attr(
        all(coverage_nightly, __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS),
        coverage(off)
    )]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized,
    {
    }

    type PointerMetadata = usize;

    // SAFETY: `CoreMaybeUninit<T>::LAYOUT` and `T::LAYOUT` are identical
    // because `CoreMaybeUninit<T>` has the same size and alignment as `T` [1].
    // Consequently, `[CoreMaybeUninit<T>]::LAYOUT` and `[T]::LAYOUT` are
    // identical, because they both lack a fixed-sized prefix and because they
    // inherit the alignments of their inner element type (which are identical)
    // [2][3].
    //
    // `[CoreMaybeUninit<T>]` admits uninitialized bytes at all positions
    // because `CoreMaybeUninit<T>` admits uninitialized bytes at all positions
    // and because the inner elements of `[CoreMaybeUninit<T>]` are laid out
    // back-to-back [2][3].
    //
    // [1] Per https://doc.rust-lang.org/1.81.0/std/mem/union.MaybeUninit.html#layout-1:
    //
    //   `MaybeUninit<T>` is guaranteed to have the same size, alignment, and ABI as
    //   `T`
    //
    // [2] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#slice-layout:
    //
    //   Slices have the same layout as the section of the array they slice.
    //
    // [3] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#array-layout:
    //
    //   An array of `[T; N]` has a size of `size_of::<T>() * N` and the same
    //   alignment of `T`. Arrays are laid out so that the zero-based `nth`
    //   element of the array is offset from the start of the array by `n *
    //   size_of::<T>()` bytes.
    type MaybeUninit = [CoreMaybeUninit<T>];

    const LAYOUT: DstLayout = DstLayout::for_slice::<T>();

    // SAFETY: `.cast` preserves address and provenance. The returned pointer
    // refers to an object with `elems` elements by construction.
    #[inline(always)]
    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
        // TODO(#67): Remove this allow. See NonNullExt for more details.
        #[allow(unstable_name_collisions)]
        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
    }

    #[inline(always)]
    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
        #[allow(clippy::as_conversions)]
        let slc = ptr as *const [()];

        // SAFETY:
        // - `()` has alignment 1, so `slc` is trivially aligned.
        // - `slc` was derived from a non-null pointer.
        // - The size is 0 regardless of the length, so it is sound to
        //   materialize a reference regardless of location.
        // - By invariant, `self.ptr` has valid provenance.
        let slc = unsafe { &*slc };

        // This is correct because the preceding `as` cast preserves the number
        // of slice elements. [1]
        //
        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
        //
        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
        //   elements in this slice. Casts between these raw pointer types
        //   preserve the number of elements. ... The same holds for `str` and
        //   any compound type whose unsized tail is a slice type, such as
        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
        slc.len()
    }
}

#[rustfmt::skip]
impl_known_layout!(
    (),
    u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize, f32, f64,
    bool, char,
    NonZeroU8, NonZeroI8, NonZeroU16, NonZeroI16, NonZeroU32, NonZeroI32,
    NonZeroU64, NonZeroI64, NonZeroU128, NonZeroI128, NonZeroUsize, NonZeroIsize
);
#[rustfmt::skip]
#[cfg(feature = "float-nightly")]
impl_known_layout!(
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
    f16,
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
    f128
);
#[rustfmt::skip]
impl_known_layout!(
    T         => Option<T>,
    T: ?Sized => PhantomData<T>,
    T         => Wrapping<T>,
    T         => CoreMaybeUninit<T>,
    T: ?Sized => *const T,
    T: ?Sized => *mut T,
    T: ?Sized => &'_ T,
    T: ?Sized => &'_ mut T,
);
impl_known_layout!(const N: usize, T => [T; N]);

safety_comment! {
    /// SAFETY:
    /// `str`, `ManuallyDrop<[T]>` [1], and `UnsafeCell<T>` [2] have the same
    /// representations as `[u8]`, `[T]`, and `T` respectively. `str` has
    /// different bit validity than `[u8]`, but that doesn't affect the
    /// soundness of this impl.
    ///
    /// [1] Per https://doc.rust-lang.org/nightly/core/mem/struct.ManuallyDrop.html:
    ///
    ///   `ManuallyDrop<T>` is guaranteed to have the same layout and bit
    ///   validity as `T`
    ///
    /// [2] Per https://doc.rust-lang.org/core/cell/struct.UnsafeCell.html#memory-layout:
    ///
    ///   `UnsafeCell<T>` has the same in-memory representation as its inner
    ///   type `T`.
    ///
    /// TODO(#429):
    /// -  Add quotes from docs.
    /// -  Once [1] (added in
    /// https://github.com/rust-lang/rust/pull/115522) is available on stable,
    /// quote the stable docs instead of the nightly docs.
    unsafe_impl_known_layout!(#[repr([u8])] str);
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] ManuallyDrop<T>);
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] UnsafeCell<T>);
}

safety_comment! {
    /// SAFETY:
    /// - By consequence of the invariant on `T::MaybeUninit` that `T::LAYOUT`
    ///   and `T::MaybeUninit::LAYOUT` are equal, `T` and `T::MaybeUninit`
    ///   have the same:
    ///   - Fixed prefix size
    ///   - Alignment
    ///   - (For DSTs) trailing slice element size
    /// - By consequence of the above, referents `T::MaybeUninit` and `T`
    ///   require the same kind of pointer metadata, and thus it is valid to
    ///   perform an `as` cast from `*mut T` to `*mut T::MaybeUninit`, and this
    ///   operation preserves referent size (ie, `size_of_val_raw`).
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T::MaybeUninit)] MaybeUninit<T>);
}
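The layout equality these impls rely on is a documented std guarantee and can be spot-checked directly (a standalone sketch using `core::mem::MaybeUninit`, which `CoreMaybeUninit` aliases in this crate):

```rust
use core::mem::{align_of, size_of, MaybeUninit};

fn main() {
    // Documented guarantee: `MaybeUninit<T>` has the same size and
    // alignment as `T`.
    assert_eq!(size_of::<MaybeUninit<u64>>(), size_of::<u64>());
    assert_eq!(align_of::<MaybeUninit<u64>>(), align_of::<u64>());

    // The same then holds element-wise for arrays (and hence slices) of
    // `MaybeUninit<T>`, since array layout is fully determined by the
    // element's size and alignment.
    assert_eq!(size_of::<[MaybeUninit<u16>; 4]>(), size_of::<[u16; 4]>());
}
```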

/// Analyzes whether a type is [`FromZeros`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `FromZeros` and implements `FromZeros` and its
/// supertraits if it is sound to do so. This derive can be applied to structs,
/// enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromZeros, Immutable};
/// #[derive(FromZeros)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@FromZeros#safety
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `FromZeros` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `FromZeros` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `FromZeros` for that type:
///
/// - If the type is a struct, all of its fields must be `FromZeros`.
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - It must have a variant with a discriminant/tag of `0`. See [the
///     reference] for a description of how discriminant values are specified.
///   - The fields of that variant must be `FromZeros`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `FromZeros`, and must *not* rely on the
/// implementation details of this derive.
///
/// [the reference]: https://doc.rust-lang.org/reference/items/enumerations.html#custom-discriminant-values-for-fieldless-enumerations
///
/// ## Why isn't an explicit representation required for structs?
///
/// Neither this derive, nor the [safety conditions] of `FromZeros`, requires
/// that structs are marked with `#[repr(C)]`.
///
/// Per the [Rust reference][reference],
///
/// > The representation of a type can change the padding between fields, but
/// > does not change the layout of the fields themselves.
///
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
///
/// Since the layout of structs only consists of padding bytes and field bytes,
/// a struct is soundly `FromZeros` if:
/// 1. its padding is soundly `FromZeros`, and
/// 2. its fields are soundly `FromZeros`.
///
/// The answer to the first question is always yes: padding bytes do not have
/// any validity constraints. A [discussion] of this question in the Unsafe Code
/// Guidelines Working Group concluded that it would be virtually unimaginable
/// for future versions of rustc to add validity constraints to padding bytes.
///
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
///
/// Whether a struct is soundly `FromZeros` therefore solely depends on whether
/// its fields are `FromZeros`.
// TODO(#146): Document why we don't require an enum to have an explicit `repr`
// attribute.
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::FromZeros;

/// Analyzes whether a type is [`Immutable`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `Immutable` and implements `Immutable` if it is
/// sound to do so. This derive can be applied to structs, enums, and unions;
/// e.g.:
///
/// ```
/// # use zerocopy_derive::Immutable;
/// #[derive(Immutable)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `Immutable` for a given type.
/// Unless you are modifying the implementation of this derive, you don't need
/// to read this section.*
///
/// If a type has the following properties, then this derive can implement
/// `Immutable` for that type:
///
/// - All fields must be `Immutable`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `Immutable`, and must *not* rely on the
/// implementation details of this derive.
///
/// [safety conditions]: trait@Immutable#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::Immutable;

/// Types which are free from interior mutability.
///
/// `T: Immutable` indicates that `T` does not permit interior mutation, except
/// by ownership or an exclusive (`&mut`) borrow.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(Immutable)]`][derive] (requires the `derive` Cargo feature);
/// e.g.:
///
/// ```
/// # use zerocopy_derive::Immutable;
/// #[derive(Immutable)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// enum MyEnum {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `Immutable`.
///
/// # Safety
///
/// Unsafe code outside of this crate must not make any assumptions about `T`
/// based on `T: Immutable`. We reserve the right to relax the requirements for
/// `Immutable` in the future, and if unsafe code outside of this crate makes
/// assumptions based on `T: Immutable`, future relaxations may cause that code
/// to become unsound.
///
// # Safety (Internal)
//
// If `T: Immutable`, unsafe code *inside of this crate* may assume that, given
// `t: &T`, `t` does not contain any [`UnsafeCell`]s at any byte location
// within the byte range addressed by `t`. This includes ranges of length 0
// (e.g., `UnsafeCell<()>` and `[UnsafeCell<u8>; 0]`). If a type implements
// `Immutable` in violation of these assumptions, it may cause this crate to
// exhibit [undefined behavior].
//
// [`UnsafeCell`]: core::cell::UnsafeCell
// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::Immutable",
    doc = "[derive-analysis]: zerocopy_derive::Immutable#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Immutable)]` to `{Self}`")
)]
pub unsafe trait Immutable {
    // The `Self: Sized` bound makes it so that `Immutable` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;
}

/// Implements [`TryFromBytes`].
///
/// This derive synthesizes the runtime checks required to check whether a
/// sequence of initialized bytes corresponds to a valid instance of a type.
/// This derive can be applied to structs, enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{TryFromBytes, Immutable};
/// #[derive(TryFromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@TryFromBytes#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::TryFromBytes;

/// Types for which some bit patterns are valid.
///
/// A memory region of the appropriate length which contains initialized bytes
/// can be viewed as a `TryFromBytes` type so long as the runtime value of those
/// bytes corresponds to a [*valid instance*] of that type. For example,
/// [`bool`] is `TryFromBytes`, so zerocopy can transmute a [`u8`] into a
/// [`bool`] so long as it first checks that the value of the [`u8`] is `0` or
/// `1`.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(TryFromBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{TryFromBytes, Immutable};
/// #[derive(TryFromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive ensures that the runtime check of whether bytes correspond to a
/// valid instance is sound. You **must** implement this trait via the derive.
///
/// # What is a "valid instance"?
///
/// In Rust, each type has *bit validity*, which refers to the set of bit
/// patterns which may appear in an instance of that type. It is impossible for
/// safe Rust code to produce values which violate bit validity (ie, values
/// outside of the "valid" set of bit patterns). If `unsafe` code produces an
/// invalid value, this is considered [undefined behavior].
///
/// Rust's bit validity rules are currently being decided, which means that some
/// types have three classes of bit patterns: those which are definitely valid,
/// and whose validity is documented in the language; those which may or may not
/// be considered valid at some point in the future; and those which are
/// definitely invalid.
///
/// Zerocopy takes a conservative approach, and only considers a bit pattern to
/// be valid if its validity is a documented guarantee provided by the
/// language.
///
/// For most use cases, Rust's current guarantees align with programmers'
/// intuitions about what ought to be valid. As a result, zerocopy's
/// conservatism should not affect most users.
///
/// If you are negatively affected by lack of support for a particular type,
/// we encourage you to let us know by [filing an issue][github-repo].
///
/// # `TryFromBytes` is not symmetrical with [`IntoBytes`]
///
/// There are some types which implement both `TryFromBytes` and [`IntoBytes`],
/// but for which `TryFromBytes` is not guaranteed to accept all byte sequences
/// produced by `IntoBytes`. In other words, for some `T: TryFromBytes +
/// IntoBytes`, there exist values of `t: T` such that
/// `TryFromBytes::try_ref_from_bytes(t.as_bytes()) == None`. Code should not
/// generally assume that values produced by `IntoBytes` will necessarily be
/// accepted as valid by `TryFromBytes`.
///
/// # Safety
///
/// On its own, `T: TryFromBytes` does not make any guarantees about the layout
/// or representation of `T`. It merely provides the ability to perform a
/// validity check at runtime via methods like [`try_ref_from_bytes`].
///
/// You must not rely on the `#[doc(hidden)]` internals of `TryFromBytes`.
/// Future releases of zerocopy may make backwards-breaking changes to these
/// items, including changes that only affect soundness, which may cause code
/// which uses those items to silently become unsound.
///
/// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
/// [github-repo]: https://github.com/google/zerocopy
/// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
/// [*valid instance*]: #what-is-a-valid-instance
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::TryFromBytes")]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.TryFromBytes.html"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(TryFromBytes)]` to `{Self}`")
)]
pub unsafe trait TryFromBytes {
    // The `Self: Sized` bound makes it so that `TryFromBytes` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Does a given memory range contain a valid instance of `Self`?
    ///
    /// # Safety
    ///
    /// Unsafe code may assume that, if `is_bit_valid(candidate)` returns true,
    /// `*candidate` contains a valid `Self`.
    ///
    /// # Panics
    ///
    /// `is_bit_valid` may panic. Callers are responsible for ensuring that any
    /// `unsafe` code remains sound even in the face of `is_bit_valid`
    /// panicking. (We support user-defined validation routines; so long as
    /// these routines are not required to be `unsafe`, there is no way to
    /// ensure that these do not generate panics.)
    ///
    /// Besides user-defined validation routines panicking, `is_bit_valid` will
    /// either panic or fail to compile if called on a pointer with [`Shared`]
    /// aliasing when `Self: !Immutable`.
    ///
    /// [`UnsafeCell`]: core::cell::UnsafeCell
    /// [`Shared`]: invariant::Shared
    #[doc(hidden)]
    fn is_bit_valid<A: invariant::Aliasing + invariant::AtLeast<invariant::Shared>>(
        candidate: Maybe<'_, Self, A>,
    ) -> bool;

    /// Attempts to interpret the given `source` as a `&Self`.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a reference to those bytes interpreted as a `Self`. If the
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
    /// `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_ref_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    /// assert!(Packet::try_ref_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_bytes(source: &[u8]) -> Result<&Self, TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(None) {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
                // condition will not happen.
                match source.try_into_valid() {
                    Ok(valid) => Ok(valid.as_ref()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
    /// if those bytes are not a valid instance of `Self`, this returns `Err`.
    /// If [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
1592
    ///
1593
    /// // These are more bytes than are needed to encode a `Packet`.
1594
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1595
    ///
1596
    /// let (packet, suffix) = Packet::try_ref_from_prefix(bytes).unwrap();
1597
    ///
1598
    /// assert_eq!(packet.mug_size, 240);
1599
    /// assert_eq!(packet.temperature, 77);
1600
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1601
    /// assert_eq!(suffix, &[6u8][..]);
1602
    ///
1603
    /// // These bytes are not valid instance of `Packet`.
1604
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1605
    /// assert!(Packet::try_ref_from_prefix(bytes).is_err());
1606
    /// ```
1607
    #[must_use = "has no side effects"]
1608
    #[inline]
1609
    fn try_ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
1610
    where
1611
        Self: KnownLayout + Immutable,
1612
    {
1613
        static_assert_dst_is_not_zst!(Self);
1614
        try_ref_from_prefix_suffix(source, CastType::Prefix, None)
1615
    }
1616
1617
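The prefix-cast contract above can be illustrated with a stdlib-only sketch (a hypothetical illustration, not zerocopy's implementation): the leading bytes are checked for length and validity, then consumed as the target value, and the untouched remainder is returned. `NonZeroU16` stands in for a `TryFromBytes` type with invalid bit patterns; for simplicity the sketch copies the value where zerocopy returns a reference.

```rust
use std::num::NonZeroU16;

// Hypothetical sketch of the `try_ref_from_prefix` contract: consume the
// leading bytes as a value, return the remaining bytes unchanged.
fn nonzero_u16_from_prefix(source: &[u8]) -> Result<(NonZeroU16, &[u8]), ()> {
    if source.len() < 2 {
        return Err(()); // insufficient bytes
    }
    let (head, tail) = source.split_at(2);
    let raw = u16::from_le_bytes([head[0], head[1]]);
    // Validity check: zero is not a valid `NonZeroU16`, mirroring how
    // `is_bit_valid` rejects invalid bit patterns.
    match NonZeroU16::new(raw) {
        Some(value) => Ok((value, tail)),
        None => Err(()),
    }
}
```

As in `try_ref_from_prefix`, the remainder is returned even when it is empty, and a failed validity check rejects the whole cast rather than consuming any bytes.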
    /// Attempts to interpret the suffix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`. If that suffix is a
    /// valid instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
    /// are insufficient bytes, or if the suffix of `source` would not be
    /// appropriately aligned, or if the suffix is not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[0u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
    /// assert!(Packet::try_ref_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        try_ref_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
    }

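The suffix variant mirrors the prefix one, but splits from the end and returns the untouched preceding bytes first, matching the `(&[u8], &Self)` ordering of the return type. A minimal stdlib-only sketch of that splitting logic (hypothetical, using `u16` in place of a `TryFromBytes` type):

```rust
// Hypothetical sketch of the suffix-cast contract: take the value from the
// trailing bytes and return the preceding bytes first.
fn u16_from_suffix(source: &[u8]) -> Result<(&[u8], u16), ()> {
    // `checked_sub` rejects sources shorter than the value itself.
    let split = source.len().checked_sub(2).ok_or(())?;
    let (prefix, tail) = source.split_at(split);
    Ok((prefix, u16::from_le_bytes([tail[0], tail[1]])))
}
```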
    /// Attempts to interpret the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a mutable reference to those bytes interpreted as a `Self`. If
    /// the length of `source` is not a [valid size of `Self`][valid-size], or
    /// if `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// packet.temperature = 111;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_bytes(bytes: &mut [u8]) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(bytes).try_cast_into_no_leftover::<Self, BecauseExclusive>(None) {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `source` is an exclusive
                // pointer, this panic condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_mut()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&mut
    /// Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a mutable reference to those
    /// bytes interpreted as `Self`, and a mutable reference to the remaining
    /// bytes. If there are insufficient bytes, or if `source` is not
    /// appropriately aligned, or if the bytes are not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    /// assert_eq!(suffix, &[6u8][..]);
    ///
    /// packet.temperature = 111;
    /// suffix[0] = 222;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5, 222]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        try_mut_from_prefix_suffix(source, CastType::Prefix, None)
    }

    /// Attempts to interpret the suffix of the given `source` as a `&mut
    /// Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`. If that suffix is a
    /// valid instance of `Self`, this method returns a mutable reference to
    /// those bytes interpreted as `Self`, and a mutable reference to the
    /// preceding bytes. If there are insufficient bytes, or if the suffix of
    /// `source` would not be appropriately aligned, or if the suffix is not a
    /// valid instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &mut [0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_mut_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[0u8][..]);
    ///
    /// prefix[0] = 111;
    /// packet.temperature = 222;
    ///
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
    /// assert!(Packet::try_mut_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_suffix(
        source: &mut [u8],
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        try_mut_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&Self` with a DST length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, if `source` is not
    /// appropriately aligned, or if `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let packet = Packet::try_ref_from_bytes_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_bytes_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let zsty = ZSTy::try_ref_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_bytes_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<&Self, TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(Some(count))
        {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
                // condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_ref()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
        }
    }

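The size check underlying the `_with_elems` methods can be sketched with plain arithmetic (a hypothetical stdlib-only illustration, not zerocopy's layout machinery): a slice DST occupies a fixed header plus `count` trailing elements, and for the `_bytes_with_elems` variant the source length must match that size exactly, with no leftover bytes.

```rust
// Hypothetical sketch of the exact-size check behind `_with_elems` casts.
// Checked arithmetic mirrors the refusal to overflow on huge counts.
fn dst_size(header_size: usize, elem_size: usize, count: usize) -> Option<usize> {
    elem_size.checked_mul(count)?.checked_add(header_size)
}

fn len_matches(source_len: usize, header_size: usize, elem_size: usize, count: usize) -> bool {
    dst_size(header_size, elem_size, count) == Some(source_len)
}
```

With `Packet`'s 4-byte header and 2-byte `[u8; 2]` elements, `count = 3` demands exactly 10 bytes; with zero-sized elements (as in `ZSTy`), any `count` fits the same 2-byte source, which is why an explicit count is required there.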
    /// Attempts to interpret the prefix of the given `source` as a `&Self` with
    /// a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if `source` is not appropriately
    /// aligned, or if the prefix of `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
    ///
    /// let (packet, suffix) = Packet::try_ref_from_prefix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(suffix, &[8u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_prefix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let (zsty, _) = ZSTy::try_ref_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_prefix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        try_ref_from_prefix_suffix(source, CastType::Prefix, Some(count))
    }

    /// Attempts to interpret the suffix of the given `source` as a `&Self` with
    /// a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if the suffix of `source` is not
    /// appropriately aligned, or if the suffix of `source` does not contain a
    /// valid instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_ref_from_suffix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[123u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_suffix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let (_, zsty) = ZSTy::try_ref_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_suffix`]: TryFromBytes::try_ref_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_suffix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        try_ref_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&mut Self` with a DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, if `source` is not
    /// appropriately aligned, or if `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let packet = Packet::try_mut_from_bytes_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    ///
    /// packet.temperature = 111;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_bytes_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let zsty = ZSTy::try_mut_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_bytes`]: TryFromBytes::try_mut_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_bytes_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(Some(count))
        {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `source` is an exclusive
                // pointer, this panic condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_mut()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&mut Self`
    /// with a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if `source` is not appropriately
    /// aligned, or if the prefix of `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
    ///
    /// let (packet, suffix) = Packet::try_mut_from_prefix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(suffix, &[8u8][..]);
    ///
    /// packet.temperature = 111;
    /// suffix[0] = 222;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7, 222]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_prefix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let (zsty, _) = ZSTy::try_mut_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_prefix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        try_mut_from_prefix_suffix(source, CastType::Prefix, Some(count))
    }

    /// Attempts to interpret the suffix of the given `source` as a `&mut Self`
    /// with a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if the suffix of `source` is not
    /// appropriately aligned, or if the suffix of `source` does not contain a
    /// valid instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_mut_from_suffix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[123u8][..]);
    ///
    /// prefix[0] = 111;
    /// packet.temperature = 222;
    ///
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_suffix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let (_, zsty) = ZSTy::try_mut_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_suffix`]: TryFromBytes::try_mut_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_suffix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        try_mut_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
    }

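The `*_with_elems` methods above all reduce to the same size arithmetic: a DST with a sized header and `count` trailing elements occupies `size_of(header) + count * size_of(elem)` bytes, and the suffix variants take those bytes from the end of `source`. A minimal std-only sketch of that split (the `split_suffix` helper and its parameters are hypothetical, not part of zerocopy's API):

```rust
// Sketch of the size check behind `try_ref_from_suffix_with_elems`: a DST
// with a sized header plus `count` trailing elements needs
// `header + count * elem` bytes, taken from the end of `source`.
fn split_suffix(source: &[u8], header: usize, elem: usize, count: usize) -> Option<(&[u8], &[u8])> {
    // Use checked arithmetic so a huge `count` cannot overflow the size.
    let needed = header.checked_add(elem.checked_mul(count)?)?;
    if source.len() < needed {
        return None;
    }
    Some(source.split_at(source.len() - needed))
}

fn main() {
    let bytes = [1u8, 2, 3, 4, 5, 6, 7];
    // Header of 1 byte plus 2 two-byte elements = 5 bytes from the end.
    let (prefix, suffix) = split_suffix(&bytes, 1, 2, 2).unwrap();
    assert_eq!(prefix, &[1, 2]);
    assert_eq!(suffix, &[3, 4, 5, 6, 7]);
    // Zero-sized elements: any `count` needs only the header bytes, which is
    // why an explicit `count` makes ZST trailing elements representable.
    assert!(split_suffix(&bytes, 1, 0, 1_000_000).is_some());
}
```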
    /// Attempts to read the given `source` as a `Self`.
    ///
    /// If `source.len() != size_of::<Self>()` or the bytes are not a valid
    /// instance of `Self`, this returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77][..];
    ///
    /// let packet = Packet::try_read_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77][..];
    /// assert!(Packet::try_read_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_bytes(source: &[u8]) -> Result<Self, TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let candidate = match CoreMaybeUninit::<Self>::read_from_bytes(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
        // its bytes are initialized.
        unsafe { try_read_from(source, candidate) }
    }

    /// Attempts to read a `Self` from the prefix of the given `source`.
    ///
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any remaining bytes. If
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
    /// of `Self`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_read_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(suffix, &[0u8, 1, 2, 3, 4, 5, 6][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_read_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let (candidate, suffix) = match CoreMaybeUninit::<Self>::read_from_prefix(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
        // its bytes are initialized.
        unsafe { try_read_from(source, candidate).map(|slf| (slf, suffix)) }
    }

    /// Attempts to read a `Self` from the suffix of the given `source`.
    ///
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any preceding bytes. If
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
    /// of `Self`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0xC0, 0xC0, 240, 77][..];
    ///
    /// let (prefix, packet) = Packet::try_read_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0x10, 0xC0, 240, 77][..];
    /// assert!(Packet::try_read_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let (prefix, candidate) = match CoreMaybeUninit::<Self>::read_from_suffix(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
        // its bytes are initialized.
        unsafe { try_read_from(source, candidate).map(|slf| (prefix, slf)) }
    }
}

#[inline(always)]
fn try_ref_from_prefix_suffix<T: TryFromBytes + KnownLayout + Immutable + ?Sized>(
    source: &[u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&T, &[u8]), TryCastError<&[u8], T>> {
    match Ptr::from_ref(source).try_cast_into::<T, BecauseImmutable>(cast_type, meta) {
        Ok((source, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any soundness
            // issues, as we have not generated any invalid state which we need to
            // fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `T: Immutable`, this panic
            // condition will not happen.
            match source.try_into_valid() {
                Ok(valid) => Ok((valid.as_ref(), prefix_suffix.as_ref())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_ref).into()),
    }
}

#[inline(always)]
fn try_mut_from_prefix_suffix<T: IntoBytes + TryFromBytes + KnownLayout + ?Sized>(
    candidate: &mut [u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&mut T, &mut [u8]), TryCastError<&mut [u8], T>> {
    match Ptr::from_mut(candidate).try_cast_into::<T, BecauseExclusive>(cast_type, meta) {
        Ok((candidate, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any soundness
            // issues, as we have not generated any invalid state which we need to
            // fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `candidate` is an exclusive
            // pointer, this panic condition will not happen.
            match candidate.try_into_valid() {
                Ok(valid) => Ok((valid.as_mut(), prefix_suffix.as_mut())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_mut).into()),
    }
}

#[inline(always)]
fn swap<T, U>((t, u): (T, U)) -> (U, T) {
    (u, t)
}

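The suffix-oriented methods above reuse the prefix/suffix core and then `swap` the resulting pair so the remainder comes first. A standalone restatement of that tuple swap, runnable on its own:

```rust
// Re-statement of the `swap` helper: the core cast helpers always return
// `(parsed, rest)`, and the suffix variants flip the pair to `(rest, parsed)`.
fn swap<T, U>((t, u): (T, U)) -> (U, T) {
    (u, t)
}

fn main() {
    let parsed: (&str, usize) = ("payload", 3);
    assert_eq!(swap(parsed), (3, "payload"));
}
```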
/// # Safety
///
/// All bytes of `candidate` must be initialized.
#[inline(always)]
unsafe fn try_read_from<S, T: TryFromBytes>(
    source: S,
    mut candidate: CoreMaybeUninit<T>,
) -> Result<T, TryReadError<S, T>> {
    // We use `from_mut` despite not mutating via `c_ptr` so that we don't need
    // to add a `T: Immutable` bound.
    let c_ptr = Ptr::from_mut(&mut candidate);
    let c_ptr = c_ptr.transparent_wrapper_into_inner();
    // SAFETY: `c_ptr` has no uninitialized sub-ranges because it is derived
    // from `candidate`, which the caller promises is entirely initialized.
    let c_ptr = unsafe { c_ptr.assume_validity::<invariant::Initialized>() };

    // This call may panic. If that happens, it doesn't cause any soundness
    // issues, as we have not generated any invalid state which we need to
    // fix before returning.
    //
    // Note that one panic or post-monomorphization error condition is
    // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
    // pointer when `T: !Immutable`. Since `c_ptr` is an exclusive pointer,
    // this panic condition will not happen.
    if !T::is_bit_valid(c_ptr.forget_aligned()) {
        return Err(ValidityError::new(source).into());
    }

    // SAFETY: We just validated that `candidate` contains a valid `T`.
    Ok(unsafe { candidate.assume_init() })
}

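The `try_read_from_*` methods all funnel into the copy-then-validate pattern above: copy the source bytes into a `MaybeUninit<T>`, check bit validity, then `assume_init`. A std-only sketch of the same shape for a concrete type, with a hypothetical even-value check standing in for `is_bit_valid` (the `read_u32_if_even` helper is illustrative, not zerocopy's API):

```rust
use std::mem::MaybeUninit;

// Sketch of the `try_read_from` pattern, specialized to `u32`: copy the
// source bytes into a `MaybeUninit`, validate, then `assume_init`.
fn read_u32_if_even(source: &[u8]) -> Option<u32> {
    if source.len() != std::mem::size_of::<u32>() {
        return None; // size error
    }
    let mut candidate = MaybeUninit::<u32>::uninit();
    // SAFETY: `u32` is 4 bytes, and we copy exactly `size_of::<u32>()` bytes
    // from `source` into the start of the allocation.
    unsafe {
        std::ptr::copy_nonoverlapping(
            source.as_ptr(),
            candidate.as_mut_ptr().cast::<u8>(),
            std::mem::size_of::<u32>(),
        );
    }
    // SAFETY: every byte of `candidate` was just initialized, and every
    // initialized bit pattern is a valid `u32`.
    let value = unsafe { candidate.assume_init() };
    // Stand-in for `is_bit_valid`: accept only even values.
    (value % 2 == 0).then_some(value)
}

fn main() {
    assert_eq!(read_u32_if_even(&2u32.to_le_bytes()), Some(2));
    assert_eq!(read_u32_if_even(&3u32.to_le_bytes()), None);
}
```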
/// Types for which a sequence of bytes all set to zero represents a valid
/// instance of the type.
///
/// Any memory region of the appropriate length which is guaranteed to contain
/// only zero bytes can be viewed as any `FromZeros` type with no runtime
/// overhead. This is useful whenever memory is known to be in a zeroed state,
/// such as memory returned from some allocation routines.
///
/// # Warning: Padding bytes
///
/// Note that, when a value is moved or copied, only the non-padding bytes of
/// that value are guaranteed to be preserved. It is unsound to assume that
/// values written to padding bytes are preserved after a move or copy. For more
/// details, see the [`FromBytes` docs][frombytes-warning-padding-bytes].
///
/// [frombytes-warning-padding-bytes]: FromBytes#warning-padding-bytes
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(FromZeros)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromZeros, Immutable};
/// #[derive(FromZeros)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `FromZeros`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: FromZeros`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `FromZeros` manually, and you don't plan on writing unsafe code that
/// operates on `FromZeros` types, then you don't need to read this section.*
///
/// If `T: FromZeros`, then unsafe code may assume that it is sound to produce a
/// `T` whose bytes are all initialized to zero. If a type is marked as
/// `FromZeros` which violates this contract, it may cause undefined behavior.
///
/// `#[derive(FromZeros)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::FromZeros",
    doc = "[derive-analysis]: zerocopy_derive::FromZeros#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromZeros)]` to `{Self}`")
)]
pub unsafe trait FromZeros: TryFromBytes {
    // The `Self: Sized` bound makes it so that `FromZeros` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Overwrites `self` with zeros.
    ///
    /// Sets every byte in `self` to 0. While this is similar to doing `*self =
    /// Self::new_zeroed()`, it differs in that `zero` does not semantically
    /// drop the current value and replace it with a new one — it simply
    /// modifies the bytes of the existing value.
    ///
    /// # Examples
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let mut header = PacketHeader {
    ///     src_port: 100u16.to_be_bytes(),
    ///     dst_port: 200u16.to_be_bytes(),
    ///     length: 300u16.to_be_bytes(),
    ///     checksum: 400u16.to_be_bytes(),
    /// };
    ///
    /// header.zero();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.dst_port, [0, 0]);
    /// assert_eq!(header.length, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```
    #[inline(always)]
    fn zero(&mut self) {
        let slf: *mut Self = self;
        let len = mem::size_of_val(self);
        // SAFETY:
        // - `self` is guaranteed by the type system to be valid for writes of
        //   size `size_of_val(self)`.
        // - `u8`'s alignment is 1, and thus `self` is guaranteed to be aligned
        //   as required by `u8`.
        // - Since `Self: FromZeros`, the all-zeros instance is a valid instance
        //   of `Self`.
        //
        // TODO(#429): Add references to docs and quotes.
        unsafe { ptr::write_bytes(slf.cast::<u8>(), 0, len) };
    }

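The body of `zero` can be exercised in isolation with std alone: overwrite every byte of a value in place via `ptr::write_bytes`. This sketch specializes the technique to a fixed array type (the `zero_in_place` helper is hypothetical, not zerocopy's API):

```rust
use std::{mem, ptr};

// Sketch of the `zero` technique: overwrite every byte of a value in place
// with zero. Sound here because all-zeros is a valid `[u16; 4]`.
fn zero_in_place(value: &mut [u16; 4]) {
    let len = mem::size_of_val(value);
    let p: *mut [u16; 4] = value;
    // SAFETY: `value` is valid for writes of `len` bytes, `u8` has alignment
    // 1, and the all-zeros bit pattern is a valid `[u16; 4]`.
    unsafe { ptr::write_bytes(p.cast::<u8>(), 0, len) };
}

fn main() {
    let mut value = [1u16, 2, 3, 4];
    zero_in_place(&mut value);
    assert_eq!(value, [0, 0, 0, 0]);
}
```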
    /// Creates an instance of `Self` from zeroed bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header: PacketHeader = FromZeros::new_zeroed();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.dst_port, [0, 0]);
    /// assert_eq!(header.length, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn new_zeroed() -> Self
    where
        Self: Sized,
    {
        // SAFETY: `FromZeros` says that the all-zeros bit pattern is legal.
        unsafe { mem::zeroed() }
    }

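`new_zeroed` is a thin wrapper around `mem::zeroed`, which is sound exactly when the all-zeros bit pattern is a valid instance of the type. A std-only sketch showing why (the `Header` struct and `new_zeroed_header` function are illustrative, not part of zerocopy):

```rust
// Sketch of `new_zeroed`: for a type where the all-zeros bit pattern is
// valid, `mem::zeroed` constructs an instance directly.
#[derive(Debug, PartialEq)]
#[repr(C)]
struct Header {
    src_port: [u8; 2],
    dst_port: [u8; 2],
}

fn new_zeroed_header() -> Header {
    // SAFETY: every field is a byte array, for which all-zeros is valid.
    unsafe { std::mem::zeroed() }
}

fn main() {
    let h = new_zeroed_header();
    assert_eq!(h, Header { src_port: [0, 0], dst_port: [0, 0] });
}
```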
    /// Creates a `Box<Self>` from zeroed bytes.
    ///
    /// This function is useful for allocating large values on the heap and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `Self` on the stack. For example, `<[u8; 1048576]>::new_box_zeroed()`
    /// will allocate `[u8; 1048576]` directly on the heap; it does not require
    /// storing `[u8; 1048576]` in a temporary variable on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_box_zeroed` (or related functions) may
    /// have performance benefits.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is guaranteed
    /// never to cause a panic or an abort.
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(any(feature = "alloc", test))]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed() -> Result<Box<Self>, AllocError>
    where
        Self: Sized,
    {
        // If `T` is a ZST, then return a proper boxed instance of it. There is
        // no allocation, but `Box` does require a correct dangling pointer.
        let layout = Layout::new::<Self>();
        if layout.size() == 0 {
            // Construct the `Box` from a dangling pointer to avoid calling
            // `Self::new_zeroed`. This ensures that stack space is never
            // allocated for `Self` even on lower opt-levels where this branch
            // might not get optimized out.

            // SAFETY: Per [1], when `T` is a ZST, `Box<T>`'s only validity
            // requirements are that the pointer is non-null and sufficiently
            // aligned. Per [2], `NonNull::dangling` produces a pointer which
            // is sufficiently aligned. Since the produced pointer is a
            // `NonNull`, it is non-null.
            //
            // [1] Per https://doc.rust-lang.org/nightly/std/boxed/index.html#memory-layout:
            //
            //   For zero-sized values, the `Box` pointer has to be non-null and sufficiently aligned.
            //
            // [2] Per https://doc.rust-lang.org/std/ptr/struct.NonNull.html#method.dangling:
            //
            //   Creates a new `NonNull` that is dangling, but well-aligned.
            return Ok(unsafe { Box::from_raw(NonNull::dangling().as_ptr()) });
        }

        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        let ptr = unsafe { alloc::alloc::alloc_zeroed(layout).cast::<Self>() };
        if ptr.is_null() {
            return Err(AllocError);
        }
        // TODO(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        Ok(unsafe { Box::from_raw(ptr) })
    }
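The non-ZST path above can be sketched with plain `std::alloc`. This is an illustration under the assumption `Self = [u8; 1024]`; zerocopy's real implementation additionally handles the ZST branch and reports failure as `AllocError` rather than panicking.

```rust
use std::alloc::{alloc_zeroed, Layout};

// Heap-allocate a zero-initialized `[u8; 1024]` without a stack temporary,
// mirroring the `alloc_zeroed` + `Box::from_raw` sequence above.
fn zeroed_box() -> Box<[u8; 1024]> {
    let layout = Layout::new::<[u8; 1024]>();
    // SAFETY: `layout` has non-zero size, and the returned memory is
    // zero-initialized, which is a valid `[u8; 1024]`. Ownership of the
    // allocation is transferred to the `Box`, which will free it with the
    // same layout.
    unsafe {
        let ptr = alloc_zeroed(layout).cast::<[u8; 1024]>();
        assert!(!ptr.is_null(), "allocation failure");
        Box::from_raw(ptr)
    }
}

fn main() {
    let buf = zeroed_box();
    assert!(buf.iter().all(|&b| b == 0));
}
```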

    /// Creates a `Box<[Self]>` (a boxed slice) from zeroed bytes.
    ///
    /// This function is useful for allocating large values of `[Self]` on the
    /// heap and zero-initializing them, without ever creating a temporary
    /// instance of `[Self; _]` on the stack. For example,
    /// `u8::new_box_slice_zeroed(1048576)` will allocate the slice directly on
    /// the heap; it does not require storing the slice on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_box_slice_zeroed` may have performance
    /// benefits.
    ///
    /// If `Self` is a zero-sized type, then this function will return a
    /// `Box<[Self]>` that has the correct `len`. Such a box cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed_with_elems(count: usize) -> Result<Box<Self>, AllocError>
    where
        Self: KnownLayout<PointerMetadata = usize>,
    {
        // SAFETY: `alloc::alloc::alloc_zeroed` is a valid argument of
        // `new_box`. The referent of the pointer returned by `alloc_zeroed`
        // (and, consequently, the `Box` derived from it) is a valid instance of
        // `Self`, because `Self` is `FromZeros`.
        unsafe { crate::util::new_box(count, alloc::alloc::alloc_zeroed) }
    }

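The zero-sized-type guarantee described above (a boxed slice with correct `len` but no backing storage) can be observed with plain `std`, using `()` as the stand-in ZST:

```rust
// A boxed slice of a zero-sized type carries only length metadata in its
// fat pointer; no heap memory is needed to back the elements.
fn main() {
    let zsts: Box<[()]> = vec![(); 1048576].into_boxed_slice();
    assert_eq!(zsts.len(), 1048576);
    // The total payload size is still zero bytes.
    assert_eq!(std::mem::size_of_val(&*zsts), 0);
}
```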
    #[deprecated(since = "0.8.0", note = "renamed to `FromZeros::new_box_zeroed_with_elems`")]
    #[doc(hidden)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[must_use = "has no side effects (other than allocation)"]
    #[inline(always)]
    fn new_box_slice_zeroed(len: usize) -> Result<Box<[Self]>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len)
    }

    /// Creates a `Vec<Self>` from zeroed bytes.
    ///
    /// This function is useful for allocating large `Vec`s and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `[Self; _]` (or many temporary instances of `Self`) on the stack. For
    /// example, `u8::new_vec_zeroed(1048576)` will allocate directly on the
    /// heap; it does not require storing intermediate values on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_vec_zeroed` may have performance benefits.
    ///
    /// If `Self` is a zero-sized type, then this function will return a
    /// `Vec<Self>` that has the correct `len`. Such a `Vec` cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline(always)]
    fn new_vec_zeroed(len: usize) -> Result<Vec<Self>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len).map(Into::into)
    }

    /// Extends a `Vec<Self>` by pushing `additional` new items onto the end of
    /// the vector. The new items are initialized with zeros.
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline(always)]
    fn extend_vec_zeroed(v: &mut Vec<Self>, additional: usize) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        // PANICS: We pass `v.len()` for `position`, so the `position > v.len()`
        // panic condition is not satisfied.
        <Self as FromZeros>::insert_vec_zeroed(v, v.len(), additional)
    }

    /// Inserts `additional` new items into `Vec<Self>` at `position`. The new
    /// items are initialized with zeros.
    ///
    /// # Panics
    ///
    /// Panics if `position > v.len()`.
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline]
    fn insert_vec_zeroed(
        v: &mut Vec<Self>,
        position: usize,
        additional: usize,
    ) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        assert!(position <= v.len());
        // We only conditionally compile on versions on which `try_reserve` is
        // stable; the Clippy lint is a false positive.
        #[allow(clippy::incompatible_msrv)]
        v.try_reserve(additional).map_err(|_| AllocError)?;
        // SAFETY: The `try_reserve` call guarantees that these cannot overflow:
        // * `ptr.add(position)`
        // * `position + additional`
        // * `v.len() + additional`
        //
        // `v.len() - position` cannot overflow because we asserted that
        // `position <= v.len()`.
        unsafe {
            // This is a potentially overlapping copy.
            let ptr = v.as_mut_ptr();
            #[allow(clippy::arithmetic_side_effects)]
            ptr.add(position).copy_to(ptr.add(position + additional), v.len() - position);
            ptr.add(position).write_bytes(0, additional);
            #[allow(clippy::arithmetic_side_effects)]
            v.set_len(v.len() + additional);
        }

        Ok(())
    }
}

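The shift-and-zero technique in `insert_vec_zeroed` can be reproduced in safe code: grow the vector, then rotate the fresh zeros into position so the old tail shifts right. A std-only sketch, fixed to `u8` elements; `insert_zeroed` is a hypothetical helper, not part of zerocopy's API:

```rust
// Safe analogue of the raw-pointer `copy_to` + `write_bytes` sequence above.
fn insert_zeroed(v: &mut Vec<u8>, position: usize, additional: usize) {
    assert!(position <= v.len());
    // Append the zeros at the end first...
    v.extend(std::iter::repeat(0u8).take(additional));
    // ...then rotate them leftward into place: the tail that started at
    // `position` ends up shifted right by `additional`.
    v[position..].rotate_right(additional);
}

fn main() {
    let mut v = vec![1u8, 2, 3];
    insert_zeroed(&mut v, 1, 2);
    assert_eq!(v, [1, 0, 0, 2, 3]);
}
```

Unlike zerocopy's implementation, this sketch panics on allocation failure (via `extend`) instead of returning an `AllocError`.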
/// Analyzes whether a type is [`FromBytes`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `FromBytes` and implements `FromBytes` and its
/// supertraits if it is sound to do so. This derive can be applied to structs,
/// enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromBytes, FromZeros, Immutable};
/// #[derive(FromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
/// #   VFF,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@FromBytes#safety
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `FromBytes` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `FromBytes` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `FromBytes` for that type:
///
/// - If the type is a struct, all of its fields must be `FromBytes`.
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - The maximum number of discriminants must be used (so that every possible
///     bit pattern is a valid one). Be very careful when using the `C`,
///     `usize`, or `isize` representations, as their size is
///     platform-dependent.
///   - Its fields must be `FromBytes`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `FromBytes`, and must *not* rely on the
/// implementation details of this derive.
///
/// ## Why isn't an explicit representation required for structs?
///
/// Neither this derive, nor the [safety conditions] of `FromBytes`, requires
/// that structs are marked with `#[repr(C)]`.
///
/// Per the [Rust reference][reference],
///
/// > The representation of a type can change the padding between fields, but
/// > does not change the layout of the fields themselves.
///
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
///
/// Since the layout of structs only consists of padding bytes and field bytes,
/// a struct is soundly `FromBytes` if:
/// 1. its padding is soundly `FromBytes`, and
/// 2. its fields are soundly `FromBytes`.
///
/// The answer to the first question is always yes: padding bytes do not have
/// any validity constraints. A [discussion] of this question in the Unsafe Code
/// Guidelines Working Group concluded that it would be virtually unimaginable
/// for future versions of rustc to add validity constraints to padding bytes.
///
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
///
/// Whether a struct is soundly `FromBytes` therefore solely depends on whether
/// its fields are `FromBytes`.
// TODO(#146): Document why we don't require an enum to have an explicit `repr`
// attribute.
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::FromBytes;

/// Types for which any bit pattern is valid.
///
/// Any memory region of the appropriate length which contains initialized bytes
/// can be viewed as any `FromBytes` type with no runtime overhead. This is
/// useful for efficiently parsing bytes as structured data.
///
/// # Warning: Padding bytes
///
/// Note that, when a value is moved or copied, only the non-padding bytes of
/// that value are guaranteed to be preserved. It is unsound to assume that
/// values written to padding bytes are preserved after a move or copy. For
/// example, the following is unsound:
///
/// ```rust,no_run
/// use core::mem::{size_of, transmute};
/// use zerocopy::FromZeros;
/// # use zerocopy_derive::*;
///
/// // Assume `Foo` is a type with padding bytes.
/// #[derive(FromZeros, Default)]
/// struct Foo {
/// # /*
///     ...
/// # */
/// }
///
/// let mut foo: Foo = Foo::default();
/// FromZeros::zero(&mut foo);
/// // UNSOUND: Although `FromZeros::zero` writes zeros to all bytes of `foo`,
/// // those writes are not guaranteed to be preserved in padding bytes when
/// // `foo` is moved, so this may expose padding bytes as `u8`s.
/// let foo_bytes: [u8; size_of::<Foo>()] = unsafe { transmute(foo) };
/// ```
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(FromBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromBytes, Immutable};
/// #[derive(FromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
/// #   VFF,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `FromBytes`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: FromBytes`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `FromBytes` manually, and you don't plan on writing unsafe code that
/// operates on `FromBytes` types, then you don't need to read this section.*
///
/// If `T: FromBytes`, then unsafe code may assume that it is sound to produce a
/// `T` whose bytes are initialized to any sequence of valid `u8`s (in other
/// words, any byte value which is not uninitialized). If a type is marked as
/// `FromBytes` which violates this contract, it may cause undefined behavior.
///
/// `#[derive(FromBytes)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::FromBytes",
    doc = "[derive-analysis]: zerocopy_derive::FromBytes#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromBytes)]` to `{Self}`")
)]
pub unsafe trait FromBytes: FromZeros {
    // The `Self: Sized` bound makes it so that `FromBytes` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

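The object-safety trick noted in the comment above is plain Rust and can be demonstrated without zerocopy: a trait method bounded by `where Self: Sized` is excluded from the trait's vtable, so the trait remains usable as a `dyn` object. A std-only sketch with hypothetical names:

```rust
trait Example {
    // Excluded from `dyn Example` because of the `Self: Sized` bound,
    // just like `only_derive_is_allowed_to_implement_this_trait` above.
    fn on_sized_only()
    where
        Self: Sized,
    {
    }

    fn on_objects(&self) -> u32;
}

struct S;
impl Example for S {
    fn on_objects(&self) -> u32 {
        42
    }
}

fn main() {
    // `dyn Example` is allowed despite the associated fn with no receiver.
    let obj: &dyn Example = &S;
    assert_eq!(obj.on_objects(), 42);
}
```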
    /// Interprets the given `source` as a `&Self`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self`. If the length of `source` is not a [valid size of
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
    /// [infallibly discard the alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     header: PacketHeader,
    ///     body: [u8],
    /// }
    ///
    /// // These bytes encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11][..];
    ///
    /// let packet = Packet::ref_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.header.src_port, [0, 1]);
    /// assert_eq!(packet.header.dst_port, [2, 3]);
    /// assert_eq!(packet.header.length, [4, 5]);
    /// assert_eq!(packet.header.checksum, [6, 7]);
    /// assert_eq!(packet.body, [8, 9, 10, 11]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_ref(source).try_cast_into_no_leftover::<_, BecauseImmutable>(None) {
            Ok(ptr) => Ok(ptr.bikeshed_recall_valid().as_ref()),
            Err(err) => Err(err.map_src(|src| src.as_ref())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_prefix_with_elems`]: FromBytes::ref_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     header: PacketHeader,
    ///     body: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14][..];
    ///
    /// let (packet, suffix) = Packet::ref_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.header.src_port, [0, 1]);
    /// assert_eq!(packet.header.dst_port, [2, 3]);
    /// assert_eq!(packet.header.length, [4, 5]);
    /// assert_eq!(packet.header.checksum, [6, 7]);
    /// assert_eq!(packet.body, [[8, 9], [10, 11], [12, 13]]);
    /// assert_eq!(suffix, &[14u8][..]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Prefix)
    }

    /// Interprets the suffix of the given bytes as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_suffix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_suffix_with_elems`]: FromBytes::ref_from_suffix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: Immutable + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&mut Self`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self`. If the length of `source` is not a [valid size of
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
    /// [infallibly discard the alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These bytes encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let header = PacketHeader::mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    ///
    /// header.checksum = [0, 0];
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(source).try_cast_into_no_leftover::<_, BecauseExclusive>(None) {
            Ok(ptr) => Ok(ptr.bikeshed_recall_valid().as_mut()),
            Err(err) => Err(err.map_src(|src| src.as_mut())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&mut Self` without
3774
    /// copying.
3775
    ///
3776
    /// This method computes the [largest possible size of `Self`][valid-size]
3777
    /// that can fit in the leading bytes of `source`, then attempts to return
3778
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3779
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
3780
    /// is not appropriately aligned, this returns `Err`. If [`Self:
3781
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3782
    /// error][size-error-from].
3783
    ///
3784
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3785
    ///
3786
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3787
    /// [self-unaligned]: Unaligned
3788
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3789
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3790
    ///
3791
    /// # Compile-Time Assertions
3792
    ///
3793
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
3798
    /// ```compile_fail,E0080
3799
    /// use zerocopy::*;
3800
    /// # use zerocopy_derive::*;
3801
    ///
3802
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3803
    /// #[repr(C, packed)]
3804
    /// struct ZSTy {
3805
    ///     leading_sized: [u8; 2],
3806
    ///     trailing_dst: [()],
3807
    /// }
3808
    ///
3809
    /// let mut source = [85, 85];
3810
    /// let _ = ZSTy::mut_from_prefix(&mut source[..]); // âš  Compile Error!
3811
    /// ```
3812
    ///
3813
    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (header, body) = PacketHeader::mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    /// assert_eq!(body, &[8, 9][..]);
    ///
    /// header.checksum = [0, 0];
    /// body.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::mut_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    ///
    /// prefix.fill(0);
    /// trailer.frame_check_sequence.fill(1);
    ///
    /// assert_eq!(bytes, [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_suffix(
        source: &mut [u8],
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&Self` with a DST length equal to
    /// `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::ref_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let zsty = ZSTy::ref_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_bytes`]: FromBytes::ref_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_bytes_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<&Self, CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_ref(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf.bikeshed_recall_valid().as_ref()),
            Err(err) => Err(err.map_src(|s| s.as_ref())),
        }
    }

    /// Interprets the prefix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::ref_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (zsty, _) = ZSTy::ref_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_prefix`]: FromBytes::ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, pixels) = <[Pixel]>::ref_from_suffix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1]);
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (_, zsty) = ZSTy::ref_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_suffix`]: FromBytes::ref_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
    }

    /// Interprets the given `source` as a `&mut Self` with a DST length equal
    /// to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::mut_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let zsty = ZSTy::mut_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_bytes`]: FromBytes::mut_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_mut(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf.bikeshed_recall_valid().as_mut()),
            Err(err) => Err(err.map_src(|s| s.as_mut())),
        }
    }

    /// Interprets the prefix of the given `source` as a `&mut Self` with DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::mut_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    /// suffix.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0, 1, 1]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let (zsty, _) = ZSTy::mut_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_prefix`]: FromBytes::mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
    {
        mut_from_prefix_suffix(source, Some(count), CastType::Prefix)
    }

    /// Interprets the suffix of the given `source` as a `&mut Self` with DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, pixels) = <[Pixel]>::mut_from_suffix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1]);
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
    /// ]);
    ///
    /// prefix.fill(9);
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    ///
    /// assert_eq!(bytes, [9, 9, 2, 3, 4, 5, 0, 0, 0, 0]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let (_, zsty) = ZSTy::mut_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_suffix`]: FromBytes::mut_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_suffix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
    {
        mut_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
    }

    /// Reads a copy of `Self` from the given `source`.
    ///
    /// If `source.len() != size_of::<Self>()`, `read_from_bytes` returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These bytes encode a `PacketHeader`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let header = PacketHeader::read_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn read_from_bytes(source: &[u8]) -> Result<Self, SizeError<&[u8], Self>>
    where
        Self: Sized,
    {
        match Ref::<_, Unalign<Self>>::sized_from(source) {
            Ok(r) => Ok(Ref::read(&r).into_inner()),
            Err(CastError::Size(e)) => Err(e.with_dst()),
            Err(CastError::Alignment(_)) => {
                // SAFETY: `Unalign<Self>` is trivially aligned, so
                // `Ref::sized_from` cannot fail due to unmet alignment
                // requirements.
                unsafe { core::hint::unreachable_unchecked() }
            }
            Err(CastError::Validity(i)) => match i {},
        }
    }

    /// Reads a copy of `Self` from the prefix of the given `source`.
    ///
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any remaining bytes. If
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketHeader`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (header, body) = PacketHeader::read_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    /// assert_eq!(body, [8, 9]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>>
    where
        Self: Sized,
    {
        match Ref::<_, Unalign<Self>>::sized_from_prefix(source) {
            Ok((r, suffix)) => Ok((Ref::read(&r).into_inner(), suffix)),
            Err(CastError::Size(e)) => Err(e.with_dst()),
            Err(CastError::Alignment(_)) => {
                // SAFETY: `Unalign<Self>` is trivially aligned, so
                // `Ref::sized_from_prefix` cannot fail due to unmet alignment
                // requirements.
                unsafe { core::hint::unreachable_unchecked() }
            }
            Err(CastError::Validity(i)) => match i {},
        }
    }

    /// Reads a copy of `Self` from the suffix of the given `source`.
    ///
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any preceding bytes. If
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::read_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, [0, 1, 2, 3, 4, 5]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>>
    where
        Self: Sized,
    {
        match Ref::<_, Unalign<Self>>::sized_from_suffix(source) {
            Ok((prefix, r)) => Ok((prefix, Ref::read(&r).into_inner())),
            Err(CastError::Size(e)) => Err(e.with_dst()),
            Err(CastError::Alignment(_)) => {
                // SAFETY: `Unalign<Self>` is trivially aligned, so
                // `Ref::sized_from_suffix` cannot fail due to unmet alignment
                // requirements.
                unsafe { core::hint::unreachable_unchecked() }
            }
            Err(CastError::Validity(i)) => match i {},
        }
    }

    /// Reads a copy of `Self` from an `io::Read`.
    ///
    /// This is useful for interfacing with operating system byte sources
    /// (files, sockets, etc.).
    ///
4552
    /// # Examples
4553
    ///
4554
    /// ```no_run
4555
    /// use zerocopy::{byteorder::big_endian::*, FromBytes};
4556
    /// use std::fs::File;
4557
    /// # use zerocopy_derive::*;
4558
    ///
4559
    /// #[derive(FromBytes)]
4560
    /// #[repr(C)]
4561
    /// struct BitmapFileHeader {
4562
    ///     signature: [u8; 2],
4563
    ///     size: U32,
4564
    ///     reserved: U64,
4565
    ///     offset: U64,
4566
    /// }
4567
    ///
4568
    /// let mut file = File::open("image.bin").unwrap();
4569
    /// let header = BitmapFileHeader::read_from_io(&mut file).unwrap();
4570
    /// ```
4571
    #[cfg(feature = "std")]
4572
    #[inline(always)]
4573
    fn read_from_io<R>(mut src: R) -> io::Result<Self>
4574
    where
4575
        Self: Sized,
4576
        R: io::Read,
4577
    {
4578
        // NOTE(#2319, #2320): We do `buf.zero()` separately rather than
4579
        // constructing `let buf = CoreMaybeUninit::zeroed()` because, if `Self`
4580
        // contains padding bytes, then a typed copy of `CoreMaybeUninit<Self>`
4581
        // will not necessarily preserve zeros written to those padding byte
4582
        // locations, and so `buf` could contain uninitialized bytes.
4583
        let mut buf = CoreMaybeUninit::<Self>::uninit();
4584
        buf.zero();
4585
4586
        let ptr = Ptr::from_mut(&mut buf);
4587
        // SAFETY: After `buf.zero()`, `buf` consists entirely of initialized,
4588
        // zeroed bytes.
4589
        let ptr = unsafe { ptr.assume_validity::<invariant::Initialized>() };
4590
        let ptr = ptr.as_bytes::<BecauseExclusive>();
4591
        src.read_exact(ptr.as_mut())?;
4592
        // SAFETY: `buf` entirely consists of initialized bytes, and `Self` is
4593
        // `FromBytes`.
4594
        Ok(unsafe { buf.assume_init() })
4595
    }
4596
4597
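The pattern above — fill a zero-initialized, fixed-size buffer from an `io::Read`, then reinterpret the bytes — can be sketched safely without zerocopy. `read_u32_le` is a hypothetical helper, not part of the zerocopy API:

```rust
use std::io::Read;

// Safe sketch of the `read_from_io` shape: a zeroed buffer is filled
// via `read_exact`, then reinterpreted as the target type.
fn read_u32_le<R: Read>(mut src: R) -> std::io::Result<u32> {
    let mut buf = [0u8; 4]; // zero-initialized, so no uninitialized bytes
    src.read_exact(&mut buf)?;
    Ok(u32::from_le_bytes(buf))
}

fn main() {
    // A byte slice implements `io::Read`, so it can stand in for a file.
    let data: &[u8] = &[1, 0, 0, 0, 0xff];
    assert_eq!(read_u32_le(data).unwrap(), 1);
    // Too few bytes yields an error, mirroring `read_exact`'s contract.
    assert!(read_u32_le(&[1u8, 2][..]).is_err());
}
```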
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_bytes`")]
4598
    #[doc(hidden)]
4599
    #[must_use = "has no side effects"]
4600
    #[inline(always)]
4601
    fn ref_from(source: &[u8]) -> Option<&Self>
4602
    where
4603
        Self: KnownLayout + Immutable,
4604
    {
4605
        Self::ref_from_bytes(source).ok()
4606
    }
4607
4608
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_bytes`")]
4609
    #[doc(hidden)]
4610
    #[must_use = "has no side effects"]
4611
    #[inline(always)]
4612
    fn mut_from(source: &mut [u8]) -> Option<&mut Self>
4613
    where
4614
        Self: KnownLayout + IntoBytes,
4615
    {
4616
        Self::mut_from_bytes(source).ok()
4617
    }
4618
4619
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_prefix_with_elems`")]
4620
    #[doc(hidden)]
4621
    #[must_use = "has no side effects"]
4622
    #[inline(always)]
4623
    fn slice_from_prefix(source: &[u8], count: usize) -> Option<(&[Self], &[u8])>
4624
    where
4625
        Self: Sized + Immutable,
4626
    {
4627
        <[Self]>::ref_from_prefix_with_elems(source, count).ok()
4628
    }
4629
4630
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_suffix_with_elems`")]
4631
    #[doc(hidden)]
4632
    #[must_use = "has no side effects"]
4633
    #[inline(always)]
4634
    fn slice_from_suffix(source: &[u8], count: usize) -> Option<(&[u8], &[Self])>
4635
    where
4636
        Self: Sized + Immutable,
4637
    {
4638
        <[Self]>::ref_from_suffix_with_elems(source, count).ok()
4639
    }
4640
4641
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_prefix_with_elems`")]
4642
    #[doc(hidden)]
4643
    #[must_use = "has no side effects"]
4644
    #[inline(always)]
4645
    fn mut_slice_from_prefix(source: &mut [u8], count: usize) -> Option<(&mut [Self], &mut [u8])>
4646
    where
4647
        Self: Sized + IntoBytes,
4648
    {
4649
        <[Self]>::mut_from_prefix_with_elems(source, count).ok()
4650
    }
4651
4652
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_suffix_with_elems`")]
4653
    #[doc(hidden)]
4654
    #[must_use = "has no side effects"]
4655
    #[inline(always)]
4656
    fn mut_slice_from_suffix(source: &mut [u8], count: usize) -> Option<(&mut [u8], &mut [Self])>
4657
    where
4658
        Self: Sized + IntoBytes,
4659
    {
4660
        <[Self]>::mut_from_suffix_with_elems(source, count).ok()
4661
    }
4662
4663
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::read_from_bytes`")]
4664
    #[doc(hidden)]
4665
    #[must_use = "has no side effects"]
4666
    #[inline(always)]
4667
    fn read_from(source: &[u8]) -> Option<Self>
4668
    where
4669
        Self: Sized,
4670
    {
4671
        Self::read_from_bytes(source).ok()
4672
    }
4673
}
4674
4675
/// Interprets the given affix of the given bytes as a `&Self`.
4676
///
4677
/// This method computes the largest possible size of `Self` that can fit in the
4678
/// prefix or suffix bytes of `source`, then attempts to return both a reference
4679
/// to those bytes interpreted as a `Self`, and a reference to the excess bytes.
4680
/// If there are insufficient bytes, or if that affix of `source` is not
4681
/// appropriately aligned, this returns `Err`.
4682
#[inline(always)]
4683
fn ref_from_prefix_suffix<T: FromBytes + KnownLayout + Immutable + ?Sized>(
4684
    source: &[u8],
4685
    meta: Option<T::PointerMetadata>,
4686
    cast_type: CastType,
4687
) -> Result<(&T, &[u8]), CastError<&[u8], T>> {
4688
    let (slf, prefix_suffix) = Ptr::from_ref(source)
4689
        .try_cast_into::<_, BecauseImmutable>(cast_type, meta)
4690
        .map_err(|err| err.map_src(|s| s.as_ref()))?;
4691
    Ok((slf.bikeshed_recall_valid().as_ref(), prefix_suffix.as_ref()))
4692
}
4693
4694
/// Interprets the given affix of the given bytes as a `&mut Self` without
4695
/// copying.
4696
///
4697
/// This method computes the largest possible size of `Self` that can fit in the
4698
/// prefix or suffix bytes of `source`, then attempts to return both a reference
4699
/// to those bytes interpreted as a `Self`, and a reference to the excess bytes.
4700
/// If there are insufficient bytes, or if that affix of `source` is not
4701
/// appropriately aligned, this returns `Err`.
4702
#[inline(always)]
4703
fn mut_from_prefix_suffix<T: FromBytes + KnownLayout + ?Sized>(
4704
    source: &mut [u8],
4705
    meta: Option<T::PointerMetadata>,
4706
    cast_type: CastType,
4707
) -> Result<(&mut T, &mut [u8]), CastError<&mut [u8], T>> {
4708
    let (slf, prefix_suffix) = Ptr::from_mut(source)
4709
        .try_cast_into::<_, BecauseExclusive>(cast_type, meta)
4710
        .map_err(|err| err.map_src(|s| s.as_mut()))?;
4711
    Ok((slf.bikeshed_recall_valid().as_mut(), prefix_suffix.as_mut()))
4712
}
4713
4714
/// Analyzes whether a type is [`IntoBytes`].
4715
///
4716
/// This derive analyzes, at compile time, whether the annotated type satisfies
4717
/// the [safety conditions] of `IntoBytes` and implements `IntoBytes` if it is
4718
/// sound to do so. This derive can be applied to structs and enums (see below
4719
/// for union support); e.g.:
4720
///
4721
/// ```
4722
/// # use zerocopy_derive::{IntoBytes};
4723
/// #[derive(IntoBytes)]
4724
/// #[repr(C)]
4725
/// struct MyStruct {
4726
/// # /*
4727
///     ...
4728
/// # */
4729
/// }
4730
///
4731
/// #[derive(IntoBytes)]
4732
/// #[repr(u8)]
4733
/// enum MyEnum {
4734
/// #   Variant,
4735
/// # /*
4736
///     ...
4737
/// # */
4738
/// }
4739
/// ```
4740
///
4741
/// [safety conditions]: trait@IntoBytes#safety
4742
///
4743
/// # Error Messages
4744
///
4745
/// On Rust toolchains prior to 1.78.0, due to the way that the custom derive
4746
/// for `IntoBytes` is implemented, you may get an error like this:
4747
///
4748
/// ```text
4749
/// error[E0277]: the trait bound `(): PaddingFree<Foo, true>` is not satisfied
4750
///   --> lib.rs:23:10
4751
///    |
4752
///  1 | #[derive(IntoBytes)]
4753
///    |          ^^^^^^^^^ the trait `PaddingFree<Foo, true>` is not implemented for `()`
4754
///    |
4755
///    = help: the following implementations were found:
4756
///                   <() as PaddingFree<T, false>>
4757
/// ```
4758
///
4759
/// This error indicates that the type being annotated has padding bytes, which
4760
/// is illegal for `IntoBytes` types. Consider reducing the alignment of some
4761
/// fields by using types in the [`byteorder`] module, wrapping field types in
4762
/// [`Unalign`], adding explicit struct fields where those padding bytes would
4763
/// be, or using `#[repr(packed)]`. See the Rust Reference's page on [type
4764
/// layout] for more information about type layout and padding.
4765
///
4766
/// [type layout]: https://doc.rust-lang.org/reference/type-layout.html
4767
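The padding fix described above can be checked with plain `size_of`: making padding explicit does not change the layout, but it turns every byte into a named, initialized field. A minimal sketch (field names are illustrative):

```rust
use core::mem::size_of;

// The compiler inserts 3 padding bytes after `a` to align `b`.
#[repr(C)]
struct HasPadding {
    a: u8,
    b: u32,
}

// Same layout, but the padding is an explicit, initialized field,
// which is what the derive's padding check requires.
#[repr(C)]
struct NoPadding {
    a: u8,
    _pad: [u8; 3],
    b: u32,
}

fn main() {
    assert_eq!(size_of::<HasPadding>(), 8);
    assert_eq!(size_of::<NoPadding>(), 8);
}
```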
///
4768
/// # Unions
4769
///
4770
/// Currently, union bit validity is [up in the air][union-validity], and so
4771
/// zerocopy does not support `#[derive(IntoBytes)]` on unions by default.
4772
/// However, implementing `IntoBytes` on a union type is likely sound on all
4773
/// existing Rust toolchains - it's just that it may become unsound in the
4774
/// future. You can opt-in to `#[derive(IntoBytes)]` support on unions by
4775
/// passing the unstable `zerocopy_derive_union_into_bytes` cfg:
4776
///
4777
/// ```shell
4778
/// $ RUSTFLAGS='--cfg zerocopy_derive_union_into_bytes' cargo build
4779
/// ```
4780
///
4781
/// However, it is your responsibility to ensure that this derive is sound on
4782
/// the specific versions of the Rust toolchain you are using! We make no
4783
/// stability or soundness guarantees regarding this cfg, and may remove it at
4784
/// any point.
4785
///
4786
/// We are actively working with Rust to stabilize the necessary language
4787
/// guarantees to support this in a forwards-compatible way, which will enable
4788
/// us to remove the cfg gate. As part of this effort, we need to know how much
4789
/// demand there is for this feature. If you would like to use `IntoBytes` on
4790
/// unions, [please let us know][discussion].
4791
///
4792
/// [union-validity]: https://github.com/rust-lang/unsafe-code-guidelines/issues/438
4793
/// [discussion]: https://github.com/google/zerocopy/discussions/1802
4794
///
4795
/// # Analysis
4796
///
4797
/// *This section describes, roughly, the analysis performed by this derive to
4798
/// determine whether it is sound to implement `IntoBytes` for a given type.
4799
/// Unless you are modifying the implementation of this derive, or attempting to
4800
/// manually implement `IntoBytes` for a type yourself, you don't need to read
4801
/// this section.*
4802
///
4803
/// If a type has the following properties, then this derive can implement
4804
/// `IntoBytes` for that type:
4805
///
4806
/// - If the type is a struct, its fields must be [`IntoBytes`]. Additionally:
4807
///     - if the type is `repr(transparent)` or `repr(packed)`, it is
4808
///       [`IntoBytes`] if its fields are [`IntoBytes`]; else,
4809
///     - if the type is `repr(C)` with at most one field, it is [`IntoBytes`]
4810
///       if its field is [`IntoBytes`]; else,
4811
///     - if the type has no generic parameters, it is [`IntoBytes`] if the type
4812
///       is sized and has no padding bytes; else,
4813
///     - if the type is `repr(C)`, its fields must be [`Unaligned`].
4814
/// - If the type is an enum:
4815
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
4816
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
4817
///   - It must have no padding bytes.
4818
///   - Its fields must be [`IntoBytes`].
4819
///
4820
/// This analysis is subject to change. Unsafe code may *only* rely on the
4821
/// documented [safety conditions] of `IntoBytes`, and must *not* rely on the
4822
/// implementation details of this derive.
4823
///
4824
/// [Rust Reference]: https://doc.rust-lang.org/reference/type-layout.html
4825
#[cfg(any(feature = "derive", test))]
4826
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
4827
pub use zerocopy_derive::IntoBytes;
4828
4829
/// Types that can be converted to an immutable slice of initialized bytes.
4830
///
4831
/// Any `IntoBytes` type can be converted to a slice of initialized bytes of the
4832
/// same size. This is useful for efficiently serializing structured data as raw
4833
/// bytes.
4834
///
4835
/// # Implementation
4836
///
4837
/// **Do not implement this trait yourself!** Instead, use
4838
/// [`#[derive(IntoBytes)]`][derive]; e.g.:
4839
///
4840
/// ```
4841
/// # use zerocopy_derive::IntoBytes;
4842
/// #[derive(IntoBytes)]
4843
/// #[repr(C)]
4844
/// struct MyStruct {
4845
/// # /*
4846
///     ...
4847
/// # */
4848
/// }
4849
///
4850
/// #[derive(IntoBytes)]
4851
/// #[repr(u8)]
4852
/// enum MyEnum {
4853
/// #   Variant0,
4854
/// # /*
4855
///     ...
4856
/// # */
4857
/// }
4858
/// ```
4859
///
4860
/// This derive performs a sophisticated, compile-time safety analysis to
4861
/// determine whether a type is `IntoBytes`. See the [derive
4862
/// documentation][derive] for guidance on how to interpret error messages
4863
/// produced by the derive's analysis.
4864
///
4865
/// # Safety
4866
///
4867
/// *This section describes what is required in order for `T: IntoBytes`, and
4868
/// what unsafe code may assume of such types. If you don't plan on implementing
4869
/// `IntoBytes` manually, and you don't plan on writing unsafe code that
4870
/// operates on `IntoBytes` types, then you don't need to read this section.*
4871
///
4872
/// If `T: IntoBytes`, then unsafe code may assume that it is sound to treat any
4873
/// `t: T` as an immutable `[u8]` of length `size_of_val(t)`. If a type is
4874
/// marked as `IntoBytes` which violates this contract, it may cause undefined
4875
/// behavior.
4876
///
4877
/// `#[derive(IntoBytes)]` only permits [types which satisfy these
4878
/// requirements][derive-analysis].
4879
///
4880
#[cfg_attr(
4881
    feature = "derive",
4882
    doc = "[derive]: zerocopy_derive::IntoBytes",
4883
    doc = "[derive-analysis]: zerocopy_derive::IntoBytes#analysis"
4884
)]
4885
#[cfg_attr(
4886
    not(feature = "derive"),
4887
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html"),
4888
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html#analysis"),
4889
)]
4890
#[cfg_attr(
4891
    zerocopy_diagnostic_on_unimplemented_1_78_0,
4892
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(IntoBytes)]` to `{Self}`")
4893
)]
4894
pub unsafe trait IntoBytes {
4895
    // The `Self: Sized` bound makes it so that this function doesn't prevent
4896
    // `IntoBytes` from being object safe. Note that other `IntoBytes` methods
4897
    // prevent object safety, but those provide a benefit in exchange for object
4898
    // safety. If at some point we remove those methods, change their type
4899
    // signatures, or move them out of this trait so that `IntoBytes` is object
4900
    // safe again, it's important that this function not prevent object safety.
4901
    #[doc(hidden)]
4902
    fn only_derive_is_allowed_to_implement_this_trait()
4903
    where
4904
        Self: Sized;
4905
4906
    /// Gets the bytes of this value.
4907
    ///
4908
    /// # Examples
4909
    ///
4910
    /// ```
4911
    /// use zerocopy::IntoBytes;
4912
    /// # use zerocopy_derive::*;
4913
    ///
4914
    /// #[derive(IntoBytes, Immutable)]
4915
    /// #[repr(C)]
4916
    /// struct PacketHeader {
4917
    ///     src_port: [u8; 2],
4918
    ///     dst_port: [u8; 2],
4919
    ///     length: [u8; 2],
4920
    ///     checksum: [u8; 2],
4921
    /// }
4922
    ///
4923
    /// let header = PacketHeader {
4924
    ///     src_port: [0, 1],
4925
    ///     dst_port: [2, 3],
4926
    ///     length: [4, 5],
4927
    ///     checksum: [6, 7],
4928
    /// };
4929
    ///
4930
    /// let bytes = header.as_bytes();
4931
    ///
4932
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
4933
    /// ```
4934
    #[must_use = "has no side effects"]
4935
    #[inline(always)]
4936
0
    fn as_bytes(&self) -> &[u8]
4937
0
    where
4938
0
        Self: Immutable,
4939
    {
4940
        // Note that this method does not have a `Self: Sized` bound;
4941
        // `size_of_val` works for unsized values too.
4942
0
        let len = mem::size_of_val(self);
4943
0
        let slf: *const Self = self;
4944
4945
        // SAFETY:
4946
        // - `slf.cast::<u8>()` is valid for reads for `len * size_of::<u8>()`
4947
        //   many bytes because...
4948
        //   - `slf` is the same pointer as `self`, and `self` is a reference
4949
        //     which points to an object whose size is `len`. Thus...
4950
        //     - The entire region of `len` bytes starting at `slf` is contained
4951
        //       within a single allocation.
4952
        //     - `slf` is non-null.
4953
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
4954
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
4955
        //   initialized.
4956
        // - Since `slf` is derived from `self`, and `self` is an immutable
4957
        //   reference, the only other references to this memory region that
4958
        //   could exist are other immutable references, and those don't allow
4959
        //   mutation. `Self: Immutable` prohibits types which contain
4960
        //   `UnsafeCell`s, which are the only types for which this rule
4961
        //   wouldn't be sufficient.
4962
        // - The total size of the resulting slice is no larger than
4963
        //   `isize::MAX` because no allocation produced by safe code can be
4964
        //   larger than `isize::MAX`.
4965
        //
4966
        // TODO(#429): Add references to docs and quotes.
4967
0
        unsafe { slice::from_raw_parts(slf.cast::<u8>(), len) }
4968
0
    }
Unexecuted instantiation: <[u32] as zerocopy::IntoBytes>::as_bytes
Unexecuted instantiation: <[u64] as zerocopy::IntoBytes>::as_bytes
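The safety argument above can be exercised with core APIs alone. This sketch performs the same cast on a struct that happens to be padding-free and `UnsafeCell`-free — the two properties that `IntoBytes` and `Immutable` certify in the real method:

```rust
use core::{mem, slice};

#[repr(C)]
struct Header {
    src: [u8; 2],
    dst: [u8; 2],
}

fn main() {
    let h = Header { src: [0, 1], dst: [2, 3] };
    let len = mem::size_of_val(&h);
    let p: *const Header = &h;
    // SAFETY (sketch): `p` points to `len` initialized bytes, `Header`
    // has no padding or interior mutability, and `u8` has alignment 1.
    let bytes = unsafe { slice::from_raw_parts(p.cast::<u8>(), len) };
    assert_eq!(bytes, &[0, 1, 2, 3]);
}
```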
4969
4970
    /// Gets the bytes of this value mutably.
4971
    ///
4972
    /// # Examples
4973
    ///
4974
    /// ```
4975
    /// use zerocopy::IntoBytes;
4976
    /// # use zerocopy_derive::*;
4977
    ///
4978
    /// # #[derive(Eq, PartialEq, Debug)]
4979
    /// #[derive(FromBytes, IntoBytes, Immutable)]
4980
    /// #[repr(C)]
4981
    /// struct PacketHeader {
4982
    ///     src_port: [u8; 2],
4983
    ///     dst_port: [u8; 2],
4984
    ///     length: [u8; 2],
4985
    ///     checksum: [u8; 2],
4986
    /// }
4987
    ///
4988
    /// let mut header = PacketHeader {
4989
    ///     src_port: [0, 1],
4990
    ///     dst_port: [2, 3],
4991
    ///     length: [4, 5],
4992
    ///     checksum: [6, 7],
4993
    /// };
4994
    ///
4995
    /// let bytes = header.as_mut_bytes();
4996
    ///
4997
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
4998
    ///
4999
    /// bytes.reverse();
5000
    ///
5001
    /// assert_eq!(header, PacketHeader {
5002
    ///     src_port: [7, 6],
5003
    ///     dst_port: [5, 4],
5004
    ///     length: [3, 2],
5005
    ///     checksum: [1, 0],
5006
    /// });
5007
    /// ```
5008
    #[must_use = "has no side effects"]
5009
    #[inline(always)]
5010
    fn as_mut_bytes(&mut self) -> &mut [u8]
5011
    where
5012
        Self: FromBytes,
5013
    {
5014
        // Note that this method does not have a `Self: Sized` bound;
5015
        // `size_of_val` works for unsized values too.
5016
        let len = mem::size_of_val(self);
5017
        let slf: *mut Self = self;
5018
5019
        // SAFETY:
5020
        // - `slf.cast::<u8>()` is valid for reads and writes for `len *
5021
        //   size_of::<u8>()` many bytes because...
5022
        //   - `slf` is the same pointer as `self`, and `self` is a reference
5023
        //     which points to an object whose size is `len`. Thus...
5024
        //     - The entire region of `len` bytes starting at `slf` is contained
5025
        //       within a single allocation.
5026
        //     - `slf` is non-null.
5027
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5028
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5029
        //   initialized.
5030
        // - `Self: FromBytes` ensures that no write to this memory region
5031
        //   could result in it containing an invalid `Self`.
5032
        // - Since `slf` is derived from `self`, and `self` is a mutable
5033
        //   reference, no other references to this memory region can exist.
5034
        // - The total size of the resulting slice is no larger than
5035
        //   `isize::MAX` because no allocation produced by safe code can be
5036
        //   larger than `isize::MAX`.
5037
        //
5038
        // TODO(#429): Add references to docs and quotes.
5039
        unsafe { slice::from_raw_parts_mut(slf.cast::<u8>(), len) }
5040
    }
5041
5042
    /// Writes a copy of `self` to `dst`.
5043
    ///
5044
    /// If `dst.len() != size_of_val(self)`, `write_to` returns `Err`.
5045
    ///
5046
    /// # Examples
5047
    ///
5048
    /// ```
5049
    /// use zerocopy::IntoBytes;
5050
    /// # use zerocopy_derive::*;
5051
    ///
5052
    /// #[derive(IntoBytes, Immutable)]
5053
    /// #[repr(C)]
5054
    /// struct PacketHeader {
5055
    ///     src_port: [u8; 2],
5056
    ///     dst_port: [u8; 2],
5057
    ///     length: [u8; 2],
5058
    ///     checksum: [u8; 2],
5059
    /// }
5060
    ///
5061
    /// let header = PacketHeader {
5062
    ///     src_port: [0, 1],
5063
    ///     dst_port: [2, 3],
5064
    ///     length: [4, 5],
5065
    ///     checksum: [6, 7],
5066
    /// };
5067
    ///
5068
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0];
5069
    ///
5070
    /// header.write_to(&mut bytes[..]);
5071
    ///
5072
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5073
    /// ```
5074
    ///
5075
    /// If too many or too few target bytes are provided, `write_to` returns
5076
    /// `Err` and leaves the target bytes unmodified:
5077
    ///
5078
    /// ```
5079
    /// # use zerocopy::IntoBytes;
5080
    /// # let header = u128::MAX;
5081
    /// let mut excessive_bytes = &mut [0u8; 128][..];
5082
    ///
5083
    /// let write_result = header.write_to(excessive_bytes);
5084
    ///
5085
    /// assert!(write_result.is_err());
5086
    /// assert_eq!(excessive_bytes, [0u8; 128]);
5087
    /// ```
5088
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5089
    #[inline]
5090
    fn write_to(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5091
    where
5092
        Self: Immutable,
5093
    {
5094
        let src = self.as_bytes();
5095
        if dst.len() == src.len() {
5096
            // SAFETY: Within this branch of the conditional, we have ensured
5097
            // that `dst.len()` is equal to `src.len()`. Neither the size of the
5098
            // source nor the size of the destination change between the above
5099
            // size check and the invocation of `copy_unchecked`.
5100
            unsafe { util::copy_unchecked(src, dst) }
5101
            Ok(())
5102
        } else {
5103
            Err(SizeError::new(self))
5104
        }
5105
    }
5106
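The length-check-then-copy shape of `write_to` can be written entirely in safe code with `copy_from_slice`; `write_exact` below is a hypothetical stand-in, not part of the zerocopy API:

```rust
// Copy only when the lengths match exactly; otherwise leave `dst`
// untouched and report failure, as `write_to` does.
fn write_exact(src: &[u8], dst: &mut [u8]) -> Result<(), ()> {
    if dst.len() == src.len() {
        // Lengths match, so this cannot panic.
        dst.copy_from_slice(src);
        Ok(())
    } else {
        Err(())
    }
}

fn main() {
    let mut dst = [0u8; 3];
    assert!(write_exact(&[1, 2, 3], &mut dst).is_ok());
    assert_eq!(dst, [1, 2, 3]);
    assert!(write_exact(&[9, 9], &mut dst).is_err());
    assert_eq!(dst, [1, 2, 3]); // unchanged after the failed write
}
```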
5107
    /// Writes a copy of `self` to the prefix of `dst`.
5108
    ///
5109
    /// `write_to_prefix` writes `self` to the first `size_of_val(self)` bytes
5110
    /// of `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5111
    ///
5112
    /// # Examples
5113
    ///
5114
    /// ```
5115
    /// use zerocopy::IntoBytes;
5116
    /// # use zerocopy_derive::*;
5117
    ///
5118
    /// #[derive(IntoBytes, Immutable)]
5119
    /// #[repr(C)]
5120
    /// struct PacketHeader {
5121
    ///     src_port: [u8; 2],
5122
    ///     dst_port: [u8; 2],
5123
    ///     length: [u8; 2],
5124
    ///     checksum: [u8; 2],
5125
    /// }
5126
    ///
5127
    /// let header = PacketHeader {
5128
    ///     src_port: [0, 1],
5129
    ///     dst_port: [2, 3],
5130
    ///     length: [4, 5],
5131
    ///     checksum: [6, 7],
5132
    /// };
5133
    ///
5134
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5135
    ///
5136
    /// header.write_to_prefix(&mut bytes[..]);
5137
    ///
5138
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7, 0, 0]);
5139
    /// ```
5140
    ///
5141
    /// If insufficient target bytes are provided, `write_to_prefix` returns
5142
    /// `Err` and leaves the target bytes unmodified:
5143
    ///
5144
    /// ```
5145
    /// # use zerocopy::IntoBytes;
5146
    /// # let header = u128::MAX;
5147
    /// let mut insufficient_bytes = &mut [0, 0][..];
5148
    ///
5149
    /// let write_result = header.write_to_prefix(insufficient_bytes);
5150
    ///
5151
    /// assert!(write_result.is_err());
5152
    /// assert_eq!(insufficient_bytes, [0, 0]);
5153
    /// ```
5154
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5155
    #[inline]
5156
    fn write_to_prefix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5157
    where
5158
        Self: Immutable,
5159
    {
5160
        let src = self.as_bytes();
5161
        match dst.get_mut(..src.len()) {
5162
            Some(dst) => {
5163
                // SAFETY: Within this branch of the `match`, we have ensured
5164
                // through fallible subslicing that `dst.len()` is equal to
5165
                // `src.len()`. Neither the size of the source nor the size of
5166
                // the destination change between the above subslicing operation
5167
                // and the invocation of `copy_unchecked`.
5168
                unsafe { util::copy_unchecked(src, dst) }
5169
                Ok(())
5170
            }
5171
            None => Err(SizeError::new(self)),
5172
        }
5173
    }
5174
5175
    /// Writes a copy of `self` to the suffix of `dst`.
5176
    ///
5177
    /// `write_to_suffix` writes `self` to the last `size_of_val(self)` bytes of
5178
    /// `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5179
    ///
5180
    /// # Examples
5181
    ///
5182
    /// ```
5183
    /// use zerocopy::IntoBytes;
5184
    /// # use zerocopy_derive::*;
5185
    ///
5186
    /// #[derive(IntoBytes, Immutable)]
5187
    /// #[repr(C)]
5188
    /// struct PacketHeader {
5189
    ///     src_port: [u8; 2],
5190
    ///     dst_port: [u8; 2],
5191
    ///     length: [u8; 2],
5192
    ///     checksum: [u8; 2],
5193
    /// }
5194
    ///
5195
    /// let header = PacketHeader {
5196
    ///     src_port: [0, 1],
5197
    ///     dst_port: [2, 3],
5198
    ///     length: [4, 5],
5199
    ///     checksum: [6, 7],
5200
    /// };
5201
    ///
5202
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5203
    ///
5204
    /// header.write_to_suffix(&mut bytes[..]);
5205
    ///
5206
    /// assert_eq!(bytes, [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]);
5207
    ///
5208
    /// let mut insufficient_bytes = &mut [0, 0][..];
5209
    ///
5210
    /// let write_result = header.write_to_suffix(insufficient_bytes);
5211
    ///
5212
    /// assert!(write_result.is_err());
5213
    /// assert_eq!(insufficient_bytes, [0, 0]);
5214
    /// ```
5215
    ///
5216
    /// If insufficient target bytes are provided, `write_to_suffix` returns
5217
    /// `Err` and leaves the target bytes unmodified:
5218
    ///
5219
    /// ```
5220
    /// # use zerocopy::IntoBytes;
5221
    /// # let header = u128::MAX;
5222
    /// let mut insufficient_bytes = &mut [0, 0][..];
5223
    ///
5224
    /// let write_result = header.write_to_suffix(insufficient_bytes);
5225
    ///
5226
    /// assert!(write_result.is_err());
5227
    /// assert_eq!(insufficient_bytes, [0, 0]);
5228
    /// ```
5229
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5230
    #[inline]
5231
    fn write_to_suffix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5232
    where
5233
        Self: Immutable,
5234
    {
5235
        let src = self.as_bytes();
5236
        let start = if let Some(start) = dst.len().checked_sub(src.len()) {
5237
            start
5238
        } else {
5239
            return Err(SizeError::new(self));
5240
        };
5241
        let dst = if let Some(dst) = dst.get_mut(start..) {
5242
            dst
5243
        } else {
5244
            // get_mut() should never return None here. We return a `SizeError`
5245
            // rather than .unwrap() because in the event the branch is not
5246
            // optimized away, returning a value is generally lighter-weight
5247
            // than panicking.
5248
            return Err(SizeError::new(self));
5249
        };
5250
        // SAFETY: Through fallible subslicing of `dst`, we have ensured that
5251
        // `dst.len()` is equal to `src.len()`. Neither the size of the source
5252
        // nor the size of the destination change between the above subslicing
5253
        // operation and the invocation of `copy_unchecked`.
5254
        unsafe {
5255
            util::copy_unchecked(src, dst);
5256
        }
5257
        Ok(())
5258
    }
5259
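The `checked_sub` + `get_mut` logic above — compute the suffix start without panicking, then take the tail — can be distilled into a small helper. `suffix_mut` is illustrative only:

```rust
// Compute the suffix of `dst` that a `src_len`-byte write would fill,
// failing (rather than panicking) when `dst` is too short.
fn suffix_mut(dst: &mut [u8], src_len: usize) -> Option<&mut [u8]> {
    let start = dst.len().checked_sub(src_len)?;
    dst.get_mut(start..)
}

fn main() {
    let mut dst = [0u8; 4];
    let tail = suffix_mut(&mut dst, 2).unwrap();
    tail.copy_from_slice(&[7, 8]);
    assert_eq!(dst, [0, 0, 7, 8]);
    let mut short = [0u8; 1];
    assert!(suffix_mut(&mut short, 2).is_none());
}
```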
5260
    /// Writes a copy of `self` to an `io::Write`.
5261
    ///
5262
    /// This is a shorthand for `dst.write_all(self.as_bytes())`, and is useful
5263
    /// for interfacing with operating system byte sinks (files, sockets, etc.).
5264
    ///
5265
    /// # Examples
5266
    ///
5267
    /// ```no_run
5268
    /// use zerocopy::{byteorder::big_endian::U16, FromBytes, IntoBytes};
5269
    /// use std::fs::File;
5270
    /// # use zerocopy_derive::*;
5271
    ///
5272
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
5273
    /// #[repr(C, packed)]
5274
    /// struct GrayscaleImage {
5275
    ///     height: U16,
5276
    ///     width: U16,
5277
    ///     pixels: [U16],
5278
    /// }
5279
    ///
5280
    /// let image = GrayscaleImage::ref_from_bytes(&[0, 0, 0, 0][..]).unwrap();
5281
    /// let mut file = File::create("image.bin").unwrap();
5282
    /// image.write_to_io(&mut file).unwrap();
5283
    /// ```
5284
    ///
5285
    /// If the write fails, `write_to_io` returns `Err` and a partial write may
5286
    /// have occurred; e.g.:
5287
    ///
5288
    /// ```
5289
    /// # use zerocopy::IntoBytes;
5290
    ///
5291
    /// let src = u128::MAX;
5292
    /// let mut dst = [0u8; 2];
5293
    ///
5294
    /// let write_result = src.write_to_io(&mut dst[..]);
5295
    ///
5296
    /// assert!(write_result.is_err());
5297
    /// assert_eq!(dst, [255, 255]);
5298
    /// ```
5299
    #[cfg(feature = "std")]
5300
    #[inline(always)]
5301
    fn write_to_io<W>(&self, mut dst: W) -> io::Result<()>
5302
    where
5303
        Self: Immutable,
5304
        W: io::Write,
5305
    {
5306
        dst.write_all(self.as_bytes())
5307
    }
5308
5309
    #[deprecated(since = "0.8.0", note = "`IntoBytes::as_bytes_mut` was renamed to `as_mut_bytes`")]
5310
    #[doc(hidden)]
5311
    #[inline]
5312
    fn as_bytes_mut(&mut self) -> &mut [u8]
5313
    where
5314
        Self: FromBytes,
5315
    {
5316
        self.as_mut_bytes()
5317
    }
5318
}
5319
5320
/// Analyzes whether a type is [`Unaligned`].
5321
///
5322
/// This derive analyzes, at compile time, whether the annotated type satisfies
5323
/// the [safety conditions] of `Unaligned` and implements `Unaligned` if it is
5324
/// sound to do so. This derive can be applied to structs, enums, and unions;
5325
/// e.g.:
5326
///
5327
/// ```
5328
/// # use zerocopy_derive::Unaligned;
5329
/// #[derive(Unaligned)]
5330
/// #[repr(C)]
5331
/// struct MyStruct {
5332
/// # /*
5333
///     ...
5334
/// # */
5335
/// }
5336
///
5337
/// #[derive(Unaligned)]
5338
/// #[repr(u8)]
5339
/// enum MyEnum {
5340
/// #   Variant0,
5341
/// # /*
5342
///     ...
5343
/// # */
5344
/// }
5345
///
5346
/// #[derive(Unaligned)]
5347
/// #[repr(packed)]
5348
/// union MyUnion {
5349
/// #   variant: u8,
5350
/// # /*
5351
///     ...
5352
/// # */
5353
/// }
5354
/// ```
5355
///
5356
/// # Analysis
5357
///
5358
/// *This section describes, roughly, the analysis performed by this derive to
5359
/// determine whether it is sound to implement `Unaligned` for a given type.
5360
/// Unless you are modifying the implementation of this derive, or attempting to
5361
/// manually implement `Unaligned` for a type yourself, you don't need to read
5362
/// this section.*
5363
///
5364
/// If a type has the following properties, then this derive can implement
5365
/// `Unaligned` for that type:
5366
///
5367
/// - If the type is a struct or union:
5368
///   - If `repr(align(N))` is provided, `N` must equal 1.
5369
///   - If the type is `repr(C)` or `repr(transparent)`, all fields must be
5370
///     [`Unaligned`].
5371
///   - If the type is not `repr(C)` or `repr(transparent)`, it must be
5372
///     `repr(packed)` or `repr(packed(1))`.
5373
/// - If the type is an enum:
5374
///   - If `repr(align(N))` is provided, `N` must equal 1.
5375
///   - It must be a field-less enum (meaning that all variants have no fields).
5376
///   - It must be `repr(i8)` or `repr(u8)`.
5377
///
5378
/// [safety conditions]: trait@Unaligned#safety
5379
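The analysis above reduces to one observable property, `align_of::<T>() == 1`, which can be checked directly. The types here are illustrative sketches of the two cases the derive accepts:

```rust
use core::mem::align_of;

// `repr(packed)` forces alignment 1 even with a `u32` field.
#[allow(dead_code)]
#[repr(C, packed)]
struct Packed {
    a: u8,
    b: u32,
}

// A field-less `repr(u8)` enum also has alignment 1.
#[allow(dead_code)]
#[repr(u8)]
enum Tag {
    A,
    B,
}

fn main() {
    assert_eq!(align_of::<Packed>(), 1);
    assert_eq!(align_of::<Tag>(), 1);
}
```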
#[cfg(any(feature = "derive", test))]
5380
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5381
pub use zerocopy_derive::Unaligned;
5382
5383
/// Types with no alignment requirement.
5384
///
5385
/// If `T: Unaligned`, then `align_of::<T>() == 1`.
5386
///
5387
/// # Implementation
5388
///
5389
/// **Do not implement this trait yourself!** Instead, use
5390
/// [`#[derive(Unaligned)]`][derive]; e.g.:
5391
///
5392
/// ```
5393
/// # use zerocopy_derive::Unaligned;
5394
/// #[derive(Unaligned)]
5395
/// #[repr(C)]
5396
/// struct MyStruct {
5397
/// # /*
5398
///     ...
5399
/// # */
5400
/// }
5401
///
5402
/// #[derive(Unaligned)]
5403
/// #[repr(u8)]
5404
/// enum MyEnum {
5405
/// #   Variant0,
5406
/// # /*
5407
///     ...
5408
/// # */
5409
/// }
5410
///
5411
/// #[derive(Unaligned)]
5412
/// #[repr(packed)]
5413
/// union MyUnion {
5414
/// #   variant: u8,
5415
/// # /*
5416
///     ...
5417
/// # */
5418
/// }
5419
/// ```
5420
///
5421
/// This derive performs a sophisticated, compile-time safety analysis to
5422
/// determine whether a type is `Unaligned`.
5423
///
5424
/// # Safety
5425
///
5426
/// *This section describes what is required in order for `T: Unaligned`, and
5427
/// what unsafe code may assume of such types. If you don't plan on implementing
5428
/// `Unaligned` manually, and you don't plan on writing unsafe code that
5429
/// operates on `Unaligned` types, then you don't need to read this section.*
5430
///
5431
/// If `T: Unaligned`, then unsafe code may assume that it is sound to produce a
5432
/// reference to `T` at any memory location regardless of alignment. If a type
5433
/// is marked as `Unaligned` in violation of this contract, it may cause
5434
/// undefined behavior.
5435
///
5436
/// `#[derive(Unaligned)]` only permits [types which satisfy these
5437
/// requirements][derive-analysis].
5438
///
5439
#[cfg_attr(
5440
    feature = "derive",
5441
    doc = "[derive]: zerocopy_derive::Unaligned",
5442
    doc = "[derive-analysis]: zerocopy_derive::Unaligned#analysis"
5443
)]
5444
#[cfg_attr(
5445
    not(feature = "derive"),
5446
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html"),
5447
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html#analysis"),
5448
)]
5449
#[cfg_attr(
5450
    zerocopy_diagnostic_on_unimplemented_1_78_0,
5451
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Unaligned)]` to `{Self}`")
5452
)]
5453
pub unsafe trait Unaligned {
5454
    // The `Self: Sized` bound makes it so that `Unaligned` is still object
5455
    // safe.
5456
    #[doc(hidden)]
5457
    fn only_derive_is_allowed_to_implement_this_trait()
5458
    where
5459
        Self: Sized;
5460
}
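The safety contract above can be illustrated with a small, std-only sketch (not zerocopy API): a `repr(C)` struct whose fields are all `u8` has `align_of() == 1`, which is exactly the property `Unaligned` certifies, so a reference to it can be produced at any byte offset without violating alignment. The `Pair` type and the pointer cast below are hypothetical illustrations.

```rust
use core::mem::align_of;

// Hypothetical 1-aligned type: a `repr(C)` struct of `u8` fields has
// `align_of::<Pair>() == 1`, the property that `Unaligned` guarantees.
#[repr(C)]
#[derive(Debug, PartialEq)]
struct Pair {
    a: u8,
    b: u8,
}

fn main() {
    assert_eq!(align_of::<Pair>(), 1);

    let buf = [0xAAu8, 0x01, 0x02, 0xBB];
    // Start at byte offset 1, which would be misaligned for any type
    // with alignment greater than 1.
    let p = buf[1..3].as_ptr().cast::<Pair>();
    // SAFETY: `p` is non-null, in-bounds, trivially aligned (the
    // alignment is 1), and every two-byte bit pattern is a valid `Pair`.
    let pair = unsafe { &*p };
    assert_eq!(*pair, Pair { a: 0x01, b: 0x02 });
}
```

Note that this is only sound because the alignment is 1; performing the same cast on a type with a larger alignment would be undefined behavior, which is why `Unaligned` must never be implemented for such types.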
5461
5462
/// Derives an optimized implementation of [`Hash`] for types that implement
5463
/// [`IntoBytes`] and [`Immutable`].
5464
///
5465
/// The standard library's derive for `Hash` generates a recursive descent
5466
/// into the fields of the type it is applied to. Instead, the implementation
5467
/// derived by this macro makes a single call to [`Hasher::write()`] for both
5468
/// [`Hash::hash()`] and [`Hash::hash_slice()`], feeding the hasher the bytes
5469
/// of the type or slice all at once.
5470
///
5471
/// [`Hash`]: core::hash::Hash
5472
/// [`Hash::hash()`]: core::hash::Hash::hash()
5473
/// [`Hash::hash_slice()`]: core::hash::Hash::hash_slice()
5474
#[cfg(any(feature = "derive", test))]
5475
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5476
pub use zerocopy_derive::ByteHash;
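The strategy behind this derive can be sketched with std only (this is an illustration, not the derive's actual output): feed the value's bytes to the hasher in one `Hasher::write()` call instead of recursing into fields. The manual `as_bytes` helper below is a hypothetical stand-in for `IntoBytes::as_bytes`, valid here because `Point` is two `u8`s with no padding.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A hypothetical padding-free type; `ByteHash` requires `IntoBytes`
// and `Immutable`, which such a type would satisfy.
#[repr(C)]
struct Point {
    x: u8,
    y: u8,
}

impl Point {
    // Stand-in for `IntoBytes::as_bytes`; sound for this sketch because
    // `Point` is two `u8`s with no inter-field or trailing padding.
    fn as_bytes(&self) -> [u8; 2] {
        [self.x, self.y]
    }
}

impl Hash for Point {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // One bulk write over the whole representation -- the
        // optimization this derive provides -- rather than one
        // `hash()` call per field.
        state.write(&self.as_bytes());
    }
}

fn main() {
    let mut h1 = DefaultHasher::new();
    Point { x: 1, y: 2 }.hash(&mut h1);

    let mut h2 = DefaultHasher::new();
    h2.write(&[1u8, 2u8]);

    // Identical byte sequences fed to identical hashers agree.
    assert_eq!(h1.finish(), h2.finish());
}
```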
5477
5478
/// Derives an optimized implementation of [`PartialEq`] and [`Eq`] for types
5479
/// that implement [`IntoBytes`] and [`Immutable`].
5480
///
5481
/// The standard library's derive for [`PartialEq`] generates a recursive
5482
/// descent into the fields of the type it is applied to. Instead, the
5483
/// implementation derived by this macro performs a single slice comparison of
5484
/// the bytes of the two values being compared.
5485
#[cfg(any(feature = "derive", test))]
5486
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5487
pub use zerocopy_derive::ByteEq;
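The same idea can be sketched for equality with std only (again an illustration, not the derive's actual output): compare the two values' byte representations with a single slice comparison, which the compiler can lower to a `memcmp`, instead of comparing field by field. The `as_bytes` helper is a hypothetical stand-in for `IntoBytes::as_bytes`.

```rust
// A hypothetical padding-free type; `ByteEq` requires `IntoBytes` and
// `Immutable`, which such a type would satisfy.
#[repr(C)]
struct Rgb {
    r: u8,
    g: u8,
    b: u8,
}

impl Rgb {
    // Stand-in for `IntoBytes::as_bytes`; sound for this sketch because
    // `Rgb` is three `u8`s with no padding.
    fn as_bytes(&self) -> [u8; 3] {
        [self.r, self.g, self.b]
    }
}

impl PartialEq for Rgb {
    fn eq(&self, other: &Self) -> bool {
        // One slice comparison over the whole representation.
        self.as_bytes() == other.as_bytes()
    }
}

impl Eq for Rgb {}

fn main() {
    assert!(Rgb { r: 1, g: 2, b: 3 } == Rgb { r: 1, g: 2, b: 3 });
    assert!(Rgb { r: 1, g: 2, b: 3 } != Rgb { r: 1, g: 2, b: 4 });
}
```

The `Immutable` bound matters for the real derive: interior mutability could let the bytes change between comparisons, so byte-wise equality is only derived for types without it.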
5488
5489
#[cfg(feature = "alloc")]
5490
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
5491
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5492
mod alloc_support {
5493
    use super::*;
5494
5495
    /// Extends a `Vec<T>` by pushing `additional` new items onto the end of the
5496
    /// vector. The new items are initialized with zeros.
5497
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5498
    #[doc(hidden)]
5499
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5500
    #[inline(always)]
5501
    pub fn extend_vec_zeroed<T: FromZeros>(
5502
        v: &mut Vec<T>,
5503
        additional: usize,
5504
    ) -> Result<(), AllocError> {
5505
        <T as FromZeros>::extend_vec_zeroed(v, additional)
5506
    }
5507
5508
    /// Inserts `additional` new items into `Vec<T>` at `position`. The new
5509
    /// items are initialized with zeros.
5510
    ///
5511
    /// # Panics
5512
    ///
5513
    /// Panics if `position > v.len()`.
5514
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5515
    #[doc(hidden)]
5516
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5517
    #[inline(always)]
5518
    pub fn insert_vec_zeroed<T: FromZeros>(
5519
        v: &mut Vec<T>,
5520
        position: usize,
5521
        additional: usize,
5522
    ) -> Result<(), AllocError> {
5523
        <T as FromZeros>::insert_vec_zeroed(v, position, additional)
5524
    }
5525
}
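The observable behavior of `extend_vec_zeroed` can be sketched with std only. The real method is generic over `T: FromZeros` and reports allocation failure as `Result<(), AllocError>`; this hypothetical sketch is specialized to `u32` (whose zero value is simply `0`) and infallible.

```rust
// Sketch of `FromZeros::extend_vec_zeroed`, specialized to `u32`:
// push `additional` zero-initialized elements onto the end of `v`.
fn extend_vec_zeroed_u32(v: &mut Vec<u32>, additional: usize) {
    v.resize(v.len() + additional, 0);
}

fn main() {
    let mut v = vec![1u32, 2, 3];
    extend_vec_zeroed_u32(&mut v, 2);
    // Existing elements are untouched; the new tail is all zeros.
    assert_eq!(v, [1, 2, 3, 0, 0]);
}
```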
5526
5527
#[cfg(feature = "alloc")]
5528
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5529
#[doc(hidden)]
5530
pub use alloc_support::*;
5531
5532
#[cfg(test)]
5533
#[allow(clippy::assertions_on_result_states, clippy::unreadable_literal)]
5534
mod tests {
5535
    use static_assertions::assert_impl_all;
5536
5537
    use super::*;
5538
    use crate::util::testutil::*;
5539
5540
    // An unsized type.
5541
    //
5542
    // This is used to test the custom derives of our traits. The `[u8]` type
5543
    // gets a hand-rolled impl, so it doesn't exercise our custom derives.
5544
    #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Unaligned, Immutable)]
5545
    #[repr(transparent)]
5546
    struct Unsized([u8]);
5547
5548
    impl Unsized {
5549
        fn from_mut_slice(slc: &mut [u8]) -> &mut Unsized {
5550
            // SAFETY: This is *probably* sound - since the layouts of `[u8]` and
5551
            // `Unsized` are the same, so are the layouts of `&mut [u8]` and
5552
            // `&mut Unsized`. [1] Even if it turns out that this isn't actually
5553
            // guaranteed by the language spec, we can just change this since
5554
            // it's in test code.
5555
            //
5556
            // [1] https://github.com/rust-lang/unsafe-code-guidelines/issues/375
5557
            unsafe { mem::transmute(slc) }
5558
        }
5559
    }
5560
5561
    #[test]
5562
    fn test_known_layout() {
5563
        // Test that `$ty` and `ManuallyDrop<$ty>` have the expected layout.
5564
        // Test that `PhantomData<$ty>` has the same layout as `()` regardless
5565
        // of `$ty`.
5566
        macro_rules! test {
5567
            ($ty:ty, $expect:expr) => {
5568
                let expect = $expect;
5569
                assert_eq!(<$ty as KnownLayout>::LAYOUT, expect);
5570
                assert_eq!(<ManuallyDrop<$ty> as KnownLayout>::LAYOUT, expect);
5571
                assert_eq!(<PhantomData<$ty> as KnownLayout>::LAYOUT, <() as KnownLayout>::LAYOUT);
5572
            };
5573
        }
5574
5575
        let layout = |offset, align, _trailing_slice_elem_size| DstLayout {
5576
            align: NonZeroUsize::new(align).unwrap(),
5577
            size_info: match _trailing_slice_elem_size {
5578
                None => SizeInfo::Sized { size: offset },
5579
                Some(elem_size) => SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5580
            },
5581
        };
5582
5583
        test!((), layout(0, 1, None));
5584
        test!(u8, layout(1, 1, None));
5585
        // Use `align_of` because `u64` alignment may be smaller than 8 on some
5586
        // platforms.
5587
        test!(u64, layout(8, mem::align_of::<u64>(), None));
5588
        test!(AU64, layout(8, 8, None));
5589
5590
        test!(Option<&'static ()>, usize::LAYOUT);
5591
5592
        test!([()], layout(0, 1, Some(0)));
5593
        test!([u8], layout(0, 1, Some(1)));
5594
        test!(str, layout(0, 1, Some(1)));
5595
    }
5596
5597
    #[cfg(feature = "derive")]
5598
    #[test]
5599
    fn test_known_layout_derive() {
5600
        // In this and other files (`late_compile_pass.rs`,
5601
        // `mid_compile_pass.rs`, and `struct.rs`), we test success and failure
5602
        // modes of `derive(KnownLayout)` for the following combination of
5603
        // properties:
5604
        //
5605
        // +------------+--------------------------------------+-----------+
5606
        // |            |      trailing field properties       |           |
5607
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5608
        // |------------+----------+----------------+----------+-----------|
5609
        // |          N |        N |              N |        N |      KL00 |
5610
        // |          N |        N |              N |        Y |      KL01 |
5611
        // |          N |        N |              Y |        N |      KL02 |
5612
        // |          N |        N |              Y |        Y |      KL03 |
5613
        // |          N |        Y |              N |        N |      KL04 |
5614
        // |          N |        Y |              N |        Y |      KL05 |
5615
        // |          N |        Y |              Y |        N |      KL06 |
5616
        // |          N |        Y |              Y |        Y |      KL07 |
5617
        // |          Y |        N |              N |        N |      KL08 |
5618
        // |          Y |        N |              N |        Y |      KL09 |
5619
        // |          Y |        N |              Y |        N |      KL10 |
5620
        // |          Y |        N |              Y |        Y |      KL11 |
5621
        // |          Y |        Y |              N |        N |      KL12 |
5622
        // |          Y |        Y |              N |        Y |      KL13 |
5623
        // |          Y |        Y |              Y |        N |      KL14 |
5624
        // |          Y |        Y |              Y |        Y |      KL15 |
5625
        // +------------+----------+----------------+----------+-----------+
5626
5627
        struct NotKnownLayout<T = ()> {
5628
            _t: T,
5629
        }
5630
5631
        #[derive(KnownLayout)]
5632
        #[repr(C)]
5633
        struct AlignSize<const ALIGN: usize, const SIZE: usize>
5634
        where
5635
            elain::Align<ALIGN>: elain::Alignment,
5636
        {
5637
            _align: elain::Align<ALIGN>,
5638
            size: [u8; SIZE],
5639
        }
5640
5641
        type AU16 = AlignSize<2, 2>;
5642
        type AU32 = AlignSize<4, 4>;
5643
5644
        fn _assert_kl<T: ?Sized + KnownLayout>(_: &T) {}
5645
5646
        let sized_layout = |align, size| DstLayout {
5647
            align: NonZeroUsize::new(align).unwrap(),
5648
            size_info: SizeInfo::Sized { size },
5649
        };
5650
5651
        let unsized_layout = |align, elem_size, offset| DstLayout {
5652
            align: NonZeroUsize::new(align).unwrap(),
5653
            size_info: SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5654
        };
5655
5656
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5657
        // |          N |        N |              N |        Y |      KL01 |
5658
        #[allow(dead_code)]
5659
        #[derive(KnownLayout)]
5660
        struct KL01(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5661
5662
        let expected = DstLayout::for_type::<KL01>();
5663
5664
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, expected);
5665
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, sized_layout(4, 8));
5666
5667
        // ...with `align(N)`:
5668
        #[allow(dead_code)]
5669
        #[derive(KnownLayout)]
5670
        #[repr(align(64))]
5671
        struct KL01Align(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5672
5673
        let expected = DstLayout::for_type::<KL01Align>();
5674
5675
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, expected);
5676
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5677
5678
        // ...with `packed`:
5679
        #[allow(dead_code)]
5680
        #[derive(KnownLayout)]
5681
        #[repr(packed)]
5682
        struct KL01Packed(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5683
5684
        let expected = DstLayout::for_type::<KL01Packed>();
5685
5686
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, expected);
5687
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, sized_layout(1, 6));
5688
5689
        // ...with `packed(N)`:
5690
        #[allow(dead_code)]
5691
        #[derive(KnownLayout)]
5692
        #[repr(packed(2))]
5693
        struct KL01PackedN(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5694
5695
        assert_impl_all!(KL01PackedN: KnownLayout);
5696
5697
        let expected = DstLayout::for_type::<KL01PackedN>();
5698
5699
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, expected);
5700
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5701
5702
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5703
        // |          N |        N |              Y |        Y |      KL03 |
5704
        #[allow(dead_code)]
5705
        #[derive(KnownLayout)]
5706
        struct KL03(NotKnownLayout, u8);
5707
5708
        let expected = DstLayout::for_type::<KL03>();
5709
5710
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, expected);
5711
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, sized_layout(1, 1));
5712
5713
        // ... with `align(N)`
5714
        #[allow(dead_code)]
5715
        #[derive(KnownLayout)]
5716
        #[repr(align(64))]
5717
        struct KL03Align(NotKnownLayout<AU32>, u8);
5718
5719
        let expected = DstLayout::for_type::<KL03Align>();
5720
5721
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, expected);
5722
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5723
5724
        // ... with `packed`:
5725
        #[allow(dead_code)]
5726
        #[derive(KnownLayout)]
5727
        #[repr(packed)]
5728
        struct KL03Packed(NotKnownLayout<AU32>, u8);
5729
5730
        let expected = DstLayout::for_type::<KL03Packed>();
5731
5732
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, expected);
5733
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, sized_layout(1, 5));
5734
5735
        // ... with `packed(N)`
5736
        #[allow(dead_code)]
5737
        #[derive(KnownLayout)]
5738
        #[repr(packed(2))]
5739
        struct KL03PackedN(NotKnownLayout<AU32>, u8);
5740
5741
        assert_impl_all!(KL03PackedN: KnownLayout);
5742
5743
        let expected = DstLayout::for_type::<KL03PackedN>();
5744
5745
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, expected);
5746
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5747
5748
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5749
        // |          N |        Y |              N |        Y |      KL05 |
5750
        #[allow(dead_code)]
5751
        #[derive(KnownLayout)]
5752
        struct KL05<T>(u8, T);
5753
5754
        fn _test_kl05<T>(t: T) -> impl KnownLayout {
5755
            KL05(0u8, t)
5756
        }
5757
5758
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5759
        // |          N |        Y |              Y |        Y |      KL07 |
5760
        #[allow(dead_code)]
5761
        #[derive(KnownLayout)]
5762
        struct KL07<T: KnownLayout>(u8, T);
5763
5764
        fn _test_kl07<T: KnownLayout>(t: T) -> impl KnownLayout {
5765
            let _ = KL07(0u8, t);
5766
        }
5767
5768
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5769
        // |          Y |        N |              Y |        N |      KL10 |
5770
        #[allow(dead_code)]
5771
        #[derive(KnownLayout)]
5772
        #[repr(C)]
5773
        struct KL10(NotKnownLayout<AU32>, [u8]);
5774
5775
        let expected = DstLayout::new_zst(None)
5776
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5777
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5778
            .pad_to_align();
5779
5780
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, expected);
5781
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 4));
5782
5783
        // ...with `align(N)`:
5784
        #[allow(dead_code)]
5785
        #[derive(KnownLayout)]
5786
        #[repr(C, align(64))]
5787
        struct KL10Align(NotKnownLayout<AU32>, [u8]);
5788
5789
        let repr_align = NonZeroUsize::new(64);
5790
5791
        let expected = DstLayout::new_zst(repr_align)
5792
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5793
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5794
            .pad_to_align();
5795
5796
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, expected);
5797
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, unsized_layout(64, 1, 4));
5798
5799
        // ...with `packed`:
5800
        #[allow(dead_code)]
5801
        #[derive(KnownLayout)]
5802
        #[repr(C, packed)]
5803
        struct KL10Packed(NotKnownLayout<AU32>, [u8]);
5804
5805
        let repr_packed = NonZeroUsize::new(1);
5806
5807
        let expected = DstLayout::new_zst(None)
5808
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5809
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5810
            .pad_to_align();
5811
5812
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, expected);
5813
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, unsized_layout(1, 1, 4));
5814
5815
        // ...with `packed(N)`:
5816
        #[allow(dead_code)]
5817
        #[derive(KnownLayout)]
5818
        #[repr(C, packed(2))]
5819
        struct KL10PackedN(NotKnownLayout<AU32>, [u8]);
5820
5821
        let repr_packed = NonZeroUsize::new(2);
5822
5823
        let expected = DstLayout::new_zst(None)
5824
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5825
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5826
            .pad_to_align();
5827
5828
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, expected);
5829
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
5830
5831
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5832
        // |          Y |        N |              Y |        Y |      KL11 |
5833
        #[allow(dead_code)]
5834
        #[derive(KnownLayout)]
5835
        #[repr(C)]
5836
        struct KL11(NotKnownLayout<AU64>, u8);
5837
5838
        let expected = DstLayout::new_zst(None)
5839
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
5840
            .extend(<u8 as KnownLayout>::LAYOUT, None)
5841
            .pad_to_align();
5842
5843
        assert_eq!(<KL11 as KnownLayout>::LAYOUT, expected);
5844
        assert_eq!(<KL11 as KnownLayout>::LAYOUT, sized_layout(8, 16));
5845
5846
        // ...with `align(N)`:
5847
        #[allow(dead_code)]
5848
        #[derive(KnownLayout)]
5849
        #[repr(C, align(64))]
5850
        struct KL11Align(NotKnownLayout<AU64>, u8);
5851
5852
        let repr_align = NonZeroUsize::new(64);
5853
5854
        let expected = DstLayout::new_zst(repr_align)
5855
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
5856
            .extend(<u8 as KnownLayout>::LAYOUT, None)
5857
            .pad_to_align();
5858
5859
        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, expected);
5860
        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5861
5862
        // ...with `packed`:
5863
        #[allow(dead_code)]
5864
        #[derive(KnownLayout)]
5865
        #[repr(C, packed)]
5866
        struct KL11Packed(NotKnownLayout<AU64>, u8);
5867
5868
        let repr_packed = NonZeroUsize::new(1);
5869
5870
        let expected = DstLayout::new_zst(None)
5871
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
5872
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
5873
            .pad_to_align();
5874
5875
        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, expected);
5876
        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, sized_layout(1, 9));
5877
5878
        // ...with `packed(N)`:
5879
        #[allow(dead_code)]
5880
        #[derive(KnownLayout)]
5881
        #[repr(C, packed(2))]
5882
        struct KL11PackedN(NotKnownLayout<AU64>, u8);
5883
5884
        let repr_packed = NonZeroUsize::new(2);
5885
5886
        let expected = DstLayout::new_zst(None)
5887
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
5888
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
5889
            .pad_to_align();
5890
5891
        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, expected);
5892
        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, sized_layout(2, 10));
5893
5894
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5895
        // |          Y |        Y |              Y |        N |      KL14 |
5896
        #[allow(dead_code)]
5897
        #[derive(KnownLayout)]
5898
        #[repr(C)]
5899
        struct KL14<T: ?Sized + KnownLayout>(u8, T);
5900
5901
        fn _test_kl14<T: ?Sized + KnownLayout>(kl: &KL14<T>) {
5902
            _assert_kl(kl)
5903
        }
5904
5905
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5906
        // |          Y |        Y |              Y |        Y |      KL15 |
5907
        #[allow(dead_code)]
5908
        #[derive(KnownLayout)]
5909
        #[repr(C)]
5910
        struct KL15<T: KnownLayout>(u8, T);
5911
5912
        fn _test_kl15<T: KnownLayout>(t: T) -> impl KnownLayout {
5913
            let _ = KL15(0u8, t);
5914
        }
5915
5916
        // Test a variety of combinations of field types:
5917
        //  - ()
5918
        //  - u8
5919
        //  - AU16
5920
        //  - [()]
5921
        //  - [u8]
5922
        //  - [AU16]
5923
5924
        #[allow(clippy::upper_case_acronyms, dead_code)]
5925
        #[derive(KnownLayout)]
5926
        #[repr(C)]
5927
        struct KLTU<T, U: ?Sized>(T, U);
5928
5929
        assert_eq!(<KLTU<(), ()> as KnownLayout>::LAYOUT, sized_layout(1, 0));
5930
5931
        assert_eq!(<KLTU<(), u8> as KnownLayout>::LAYOUT, sized_layout(1, 1));
5932
5933
        assert_eq!(<KLTU<(), AU16> as KnownLayout>::LAYOUT, sized_layout(2, 2));
5934
5935
        assert_eq!(<KLTU<(), [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 0));
5936
5937
        assert_eq!(<KLTU<(), [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
5938
5939
        assert_eq!(<KLTU<(), [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 0));
5940
5941
        assert_eq!(<KLTU<u8, ()> as KnownLayout>::LAYOUT, sized_layout(1, 1));
5942
5943
        assert_eq!(<KLTU<u8, u8> as KnownLayout>::LAYOUT, sized_layout(1, 2));
5944
5945
        assert_eq!(<KLTU<u8, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
5946
5947
        assert_eq!(<KLTU<u8, [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 1));
5948
5949
        assert_eq!(<KLTU<u8, [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
5950
5951
        assert_eq!(<KLTU<u8, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
5952
5953
        assert_eq!(<KLTU<AU16, ()> as KnownLayout>::LAYOUT, sized_layout(2, 2));
5954
5955
        assert_eq!(<KLTU<AU16, u8> as KnownLayout>::LAYOUT, sized_layout(2, 4));
5956
5957
        assert_eq!(<KLTU<AU16, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
5958
5959
        assert_eq!(<KLTU<AU16, [()]> as KnownLayout>::LAYOUT, unsized_layout(2, 0, 2));
5960
5961
        assert_eq!(<KLTU<AU16, [u8]> as KnownLayout>::LAYOUT, unsized_layout(2, 1, 2));
5962
5963
        assert_eq!(<KLTU<AU16, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2));
5964
5965
        // Test a variety of field counts.
5966
5967
        #[derive(KnownLayout)]
5968
        #[repr(C)]
5969
        struct KLF0;
5970
5971
        assert_eq!(<KLF0 as KnownLayout>::LAYOUT, sized_layout(1, 0));
5972
5973
        #[derive(KnownLayout)]
5974
        #[repr(C)]
5975
        struct KLF1([u8]);
5976
5977
        assert_eq!(<KLF1 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0));
5978
5979
        #[derive(KnownLayout)]
5980
        #[repr(C)]
5981
        struct KLF2(NotKnownLayout<u8>, [u8]);
5982
5983
        assert_eq!(<KLF2 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1));
5984
5985
        #[derive(KnownLayout)]
5986
        #[repr(C)]
5987
        struct KLF3(NotKnownLayout<u8>, NotKnownLayout<AU16>, [u8]);
5988
5989
        assert_eq!(<KLF3 as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4));
5990
5991
        #[derive(KnownLayout)]
5992
        #[repr(C)]
5993
        struct KLF4(NotKnownLayout<u8>, NotKnownLayout<AU16>, NotKnownLayout<AU32>, [u8]);
5994
5995
        assert_eq!(<KLF4 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 8));
5996
    }
5997
5998
    #[test]
5999
    fn test_object_safety() {
6000
        fn _takes_no_cell(_: &dyn Immutable) {}
6001
        fn _takes_unaligned(_: &dyn Unaligned) {}
6002
    }
6003
6004
    #[test]
6005
    fn test_from_zeros_only() {
6006
        // Test types that implement `FromZeros` but not `FromBytes`.
6007
6008
        assert!(!bool::new_zeroed());
6009
        assert_eq!(char::new_zeroed(), '\0');
6010
6011
        #[cfg(feature = "alloc")]
6012
        {
6013
            assert_eq!(bool::new_box_zeroed(), Ok(Box::new(false)));
6014
            assert_eq!(char::new_box_zeroed(), Ok(Box::new('\0')));
6015
6016
            assert_eq!(
6017
                <[bool]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6018
                [false, false, false]
6019
            );
6020
            assert_eq!(
6021
                <[char]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6022
                ['\0', '\0', '\0']
6023
            );
6024
6025
            assert_eq!(bool::new_vec_zeroed(3).unwrap().as_ref(), [false, false, false]);
6026
            assert_eq!(char::new_vec_zeroed(3).unwrap().as_ref(), ['\0', '\0', '\0']);
6027
        }
6028
6029
        let mut string = "hello".to_string();
6030
        let s: &mut str = string.as_mut();
6031
        assert_eq!(s, "hello");
6032
        s.zero();
6033
        assert_eq!(s, "\0\0\0\0\0");
6034
    }
6035
6036
    #[test]
6037
    fn test_zst_count_preserved() {
6038
        // Test that, when an explicit count is provided for a type with a
6039
        // ZST trailing slice element, that count is preserved. This is
6040
        // important since, for such types, all element counts result in objects
6041
        // of the same size, and so the correct behavior is ambiguous. However,
6042
        // preserving the count as requested by the user is the behavior that we
6043
        // document publicly.
6044
6045
        // FromZeros methods
6046
        #[cfg(feature = "alloc")]
6047
        assert_eq!(<[()]>::new_box_zeroed_with_elems(3).unwrap().len(), 3);
6048
        #[cfg(feature = "alloc")]
6049
        assert_eq!(<()>::new_vec_zeroed(3).unwrap().len(), 3);
6050
6051
        // FromBytes methods
6052
        assert_eq!(<[()]>::ref_from_bytes_with_elems(&[][..], 3).unwrap().len(), 3);
6053
        assert_eq!(<[()]>::ref_from_prefix_with_elems(&[][..], 3).unwrap().0.len(), 3);
6054
        assert_eq!(<[()]>::ref_from_suffix_with_elems(&[][..], 3).unwrap().1.len(), 3);
6055
        assert_eq!(<[()]>::mut_from_bytes_with_elems(&mut [][..], 3).unwrap().len(), 3);
6056
        assert_eq!(<[()]>::mut_from_prefix_with_elems(&mut [][..], 3).unwrap().0.len(), 3);
6057
        assert_eq!(<[()]>::mut_from_suffix_with_elems(&mut [][..], 3).unwrap().1.len(), 3);
6058
    }
6059
6060
    #[test]
6061
    fn test_read_write() {
6062
        const VAL: u64 = 0x12345678;
6063
        #[cfg(target_endian = "big")]
6064
        const VAL_BYTES: [u8; 8] = VAL.to_be_bytes();
6065
        #[cfg(target_endian = "little")]
6066
        const VAL_BYTES: [u8; 8] = VAL.to_le_bytes();
6067
        const ZEROS: [u8; 8] = [0u8; 8];
6068
6069
        // Test `FromBytes::{read_from, read_from_prefix, read_from_suffix}`.
6070
6071
        assert_eq!(u64::read_from_bytes(&VAL_BYTES[..]), Ok(VAL));
6072
        // The first 8 bytes are from `VAL_BYTES` and the second 8 bytes are all
6073
        // zeros.
6074
        let bytes_with_prefix: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6075
        assert_eq!(u64::read_from_prefix(&bytes_with_prefix[..]), Ok((VAL, &ZEROS[..])));
6076
        assert_eq!(u64::read_from_suffix(&bytes_with_prefix[..]), Ok((&VAL_BYTES[..], 0)));
6077
        // The first 8 bytes are all zeros and the second 8 bytes are from
6078
        // `VAL_BYTES`.
6079
        let bytes_with_suffix: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6080
        assert_eq!(u64::read_from_prefix(&bytes_with_suffix[..]), Ok((0, &VAL_BYTES[..])));
6081
        assert_eq!(u64::read_from_suffix(&bytes_with_suffix[..]), Ok((&ZEROS[..], VAL)));
6082
6083
        // Test `IntoBytes::{write_to, write_to_prefix, write_to_suffix}`.
6084
6085
        let mut bytes = [0u8; 8];
6086
        assert_eq!(VAL.write_to(&mut bytes[..]), Ok(()));
6087
        assert_eq!(bytes, VAL_BYTES);
6088
        let mut bytes = [0u8; 16];
6089
        assert_eq!(VAL.write_to_prefix(&mut bytes[..]), Ok(()));
6090
        let want: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6091
        assert_eq!(bytes, want);
6092
        let mut bytes = [0u8; 16];
6093
        assert_eq!(VAL.write_to_suffix(&mut bytes[..]), Ok(()));
6094
        let want: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6095
        assert_eq!(bytes, want);
6096
    }
6097
6098
    #[test]
6099
    #[cfg(feature = "std")]
6100
    fn test_read_io_with_padding_soundness() {
6101
        // This test is designed to exhibit potential UB in
6102
        // `FromBytes::read_from_io` (see #2319, #2320).
6103
6104
        // On most platforms (where `align_of::<u16>() == 2`), `WithPadding`
6105
        // will have inter-field padding between `x` and `y`.
6106
        #[derive(FromBytes)]
6107
        #[repr(C)]
6108
        struct WithPadding {
6109
            x: u8,
6110
            y: u16,
6111
        }
6112
        struct ReadsInRead;
6113
        impl std::io::Read for ReadsInRead {
6114
            fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
6115
                // This body branches on every byte of `buf`, ensuring that it
6116
                // exhibits UB if any byte of `buf` is uninitialized.
6117
                if buf.iter().all(|&x| x == 0) {
6118
                    Ok(buf.len())
6119
                } else {
6120
                    buf.iter_mut().for_each(|x| *x = 0);
6121
                    Ok(buf.len())
6122
                }
6123
            }
6124
        }
6125
        assert!(matches!(WithPadding::read_from_io(ReadsInRead), Ok(WithPadding { x: 0, y: 0 })));
6126
    }

    #[test]
    #[cfg(feature = "std")]
    fn test_read_write_io() {
        let mut long_buffer = [0, 0, 0, 0];
        assert!(matches!(u16::MAX.write_to_io(&mut long_buffer[..]), Ok(())));
        assert_eq!(long_buffer, [255, 255, 0, 0]);
        assert!(matches!(u16::read_from_io(&long_buffer[..]), Ok(u16::MAX)));

        let mut short_buffer = [0, 0];
        assert!(u32::MAX.write_to_io(&mut short_buffer[..]).is_err());
        assert_eq!(short_buffer, [255, 255]);
        assert!(u32::read_from_io(&short_buffer[..]).is_err());
    }

    #[test]
    fn test_try_from_bytes_try_read_from() {
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[0]), Ok(false));
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[1]), Ok(true));

        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[0, 2]), Ok((false, &[2][..])));
        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[1, 2]), Ok((true, &[2][..])));

        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 0]), Ok((&[2][..], false)));
        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 1]), Ok((&[2][..], true)));

        // If we don't pass enough bytes, it fails.
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_bytes(&[]),
            Err(TryReadError::Size(_))
        ));
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_prefix(&[]),
            Err(TryReadError::Size(_))
        ));
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_suffix(&[]),
            Err(TryReadError::Size(_))
        ));

        // If we pass too many bytes, it fails.
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_bytes(&[0, 0]),
            Err(TryReadError::Size(_))
        ));

        // If we pass an invalid value, it fails.
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_bytes(&[2]),
            Err(TryReadError::Validity(_))
        ));
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_prefix(&[2, 0]),
            Err(TryReadError::Validity(_))
        ));
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_suffix(&[0, 2]),
            Err(TryReadError::Validity(_))
        ));

        // Reading from a misaligned buffer should still succeed. Since
        // `AU64`'s alignment is 8, and since we read from two adjacent
        // addresses one byte apart, it is guaranteed that at least one of
        // them (though possibly both) will be misaligned.
        let bytes: [u8; 9] = [0, 0, 0, 0, 0, 0, 0, 0, 0];
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[..8]), Ok(AU64(0)));
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[1..9]), Ok(AU64(0)));

        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[..8]),
            Ok((AU64(0), &[][..]))
        );
        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[1..9]),
            Ok((AU64(0), &[][..]))
        );

        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[..8]),
            Ok((&[][..], AU64(0)))
        );
        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[1..9]),
            Ok((&[][..], AU64(0)))
        );
    }
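// The misalignment argument in the test above can be checked in isolation:
// for any alignment greater than 1, two addresses one byte apart cannot both
// be aligned, so a by-copy read such as `try_read_from_bytes` must succeed
// from either. A std-only sketch (`is_aligned` is a local helper, not a
// zerocopy API):

```rust
fn is_aligned(addr: usize, align: usize) -> bool {
    // `align` is assumed to be a power of two, as Rust alignments always are.
    addr % align == 0
}

fn main() {
    let bytes = [0u8; 9];
    let a0 = bytes.as_ptr() as usize;
    let a1 = a0 + 1;
    // Two adjacent addresses are never both 8-aligned.
    assert!(!(is_aligned(a0, 8) && is_aligned(a1, 8)));
    println!("a0 aligned: {}, a1 aligned: {}", is_aligned(a0, 8), is_aligned(a1, 8));
}
```

This is why the `&bytes[..8]` / `&bytes[1..9]` pair in the test is guaranteed to exercise at least one misaligned read.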

    #[test]
    fn test_ref_from_mut_from() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` success
        // cases. Exhaustive coverage for these methods is provided by the
        // `Ref` tests above, to which these helper methods defer.

        let mut buf =
            Align::<[u8; 16], AU64>::new([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);

        assert_eq!(
            AU64::ref_from_bytes(&buf.t[8..]).unwrap().0.to_ne_bytes(),
            [8, 9, 10, 11, 12, 13, 14, 15]
        );
        let suffix = AU64::mut_from_bytes(&mut buf.t[8..]).unwrap();
        suffix.0 = 0x0101010101010101;
        // The `[u8; 9]` is a non-half size of the full buffer, which would
        // catch `from_prefix` having the same implementation as `from_suffix`
        // (issues #506, #511).
        assert_eq!(
            <[u8; 9]>::ref_from_suffix(&buf.t[..]).unwrap(),
            (&[0, 1, 2, 3, 4, 5, 6][..], &[7u8, 1, 1, 1, 1, 1, 1, 1, 1])
        );
        let (prefix, suffix) = AU64::mut_from_suffix(&mut buf.t[1..]).unwrap();
        assert_eq!(prefix, &mut [1u8, 2, 3, 4, 5, 6, 7][..]);
        suffix.0 = 0x0202020202020202;
        let (prefix, suffix) = <[u8; 10]>::mut_from_suffix(&mut buf.t[..]).unwrap();
        assert_eq!(prefix, &mut [0u8, 1, 2, 3, 4, 5][..]);
        suffix[0] = 42;
        assert_eq!(
            <[u8; 9]>::ref_from_prefix(&buf.t[..]).unwrap(),
            (&[0u8, 1, 2, 3, 4, 5, 42, 7, 2], &[2u8, 2, 2, 2, 2, 2, 2][..])
        );
        <[u8; 2]>::mut_from_prefix(&mut buf.t[..]).unwrap().0[1] = 30;
        assert_eq!(buf.t, [0, 30, 2, 3, 4, 5, 42, 7, 2, 2, 2, 2, 2, 2, 2, 2]);
    }

    #[test]
    fn test_ref_from_mut_from_error() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` error cases.

        // Fail because the buffer is too large.
        let mut buf = Align::<[u8; 16], AU64>::default();
        // `buf.t` should be aligned to 8, so only the length check should fail.
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());

        // Fail because the buffer is too small.
        let mut buf = Align::<[u8; 4], AU64>::default();
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(AU64::ref_from_prefix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_prefix(&mut buf.t[..]).is_err());
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_prefix(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_prefix(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_suffix(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_suffix(&mut buf.t[..]).is_err());

        // Fail because the alignment is insufficient.
        let mut buf = Align::<[u8; 13], AU64>::default();
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
        assert!(AU64::ref_from_prefix(&buf.t[1..]).is_err());
        assert!(AU64::mut_from_prefix(&mut buf.t[1..]).is_err());
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
    }

    #[test]
    fn test_to_methods() {
        /// Run a series of tests by calling `IntoBytes` methods on `t`.
        ///
        /// `bytes` is the expected byte sequence returned from `t.as_bytes()`
        /// before `t` has been modified. `post_mutation` is the expected
        /// sequence returned from `t.as_bytes()` after `t.as_mut_bytes()[0]`
        /// has had its bits flipped (by applying `^= 0xFF`).
        ///
        /// `N` is the size of `t` in bytes.
        fn test<T: FromBytes + IntoBytes + Immutable + Debug + Eq + ?Sized, const N: usize>(
            t: &mut T,
            bytes: &[u8],
            post_mutation: &T,
        ) {
            // Test that we can access the underlying bytes, and that we get
            // the right bytes and the right number of bytes.
            assert_eq!(t.as_bytes(), bytes);

            // Test that changes to the underlying byte slices are reflected in
            // the original object.
            t.as_mut_bytes()[0] ^= 0xFF;
            assert_eq!(t, post_mutation);
            t.as_mut_bytes()[0] ^= 0xFF;

            // `write_to` rejects slices that are too small or too large.
            assert!(t.write_to(&mut vec![0; N - 1][..]).is_err());
            assert!(t.write_to(&mut vec![0; N + 1][..]).is_err());

            // `write_to` works as expected.
            let mut bytes = [0; N];
            assert_eq!(t.write_to(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_prefix` rejects slices that are too small.
            assert!(t.write_to_prefix(&mut vec![0; N - 1][..]).is_err());

            // `write_to_prefix` works with exact-sized slices.
            let mut bytes = [0; N];
            assert_eq!(t.write_to_prefix(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_prefix` works with too-large slices, and any bytes
            // past the prefix aren't modified.
            let mut too_many_bytes = vec![0; N + 1];
            too_many_bytes[N] = 123;
            assert_eq!(t.write_to_prefix(&mut too_many_bytes[..]), Ok(()));
            assert_eq!(&too_many_bytes[..N], t.as_bytes());
            assert_eq!(too_many_bytes[N], 123);

            // `write_to_suffix` rejects slices that are too small.
            assert!(t.write_to_suffix(&mut vec![0; N - 1][..]).is_err());

            // `write_to_suffix` works with exact-sized slices.
            let mut bytes = [0; N];
            assert_eq!(t.write_to_suffix(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_suffix` works with too-large slices, and any bytes
            // before the suffix aren't modified.
            let mut too_many_bytes = vec![0; N + 1];
            too_many_bytes[0] = 123;
            assert_eq!(t.write_to_suffix(&mut too_many_bytes[..]), Ok(()));
            assert_eq!(&too_many_bytes[1..], t.as_bytes());
            assert_eq!(too_many_bytes[0], 123);
        }

        #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Immutable)]
        #[repr(C)]
        struct Foo {
            a: u32,
            b: Wrapping<u32>,
            c: Option<NonZeroU32>,
        }

        let expected_bytes: Vec<u8> = if cfg!(target_endian = "little") {
            vec![1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
        } else {
            vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0]
        };
        let post_mutation_expected_a =
            if cfg!(target_endian = "little") { 0x00_00_00_FE } else { 0xFF_00_00_01 };
        test::<_, 12>(
            &mut Foo { a: 1, b: Wrapping(2), c: None },
            expected_bytes.as_bytes(),
            &Foo { a: post_mutation_expected_a, b: Wrapping(2), c: None },
        );
        test::<_, 3>(
            Unsized::from_mut_slice(&mut [1, 2, 3]),
            &[1, 2, 3],
            Unsized::from_mut_slice(&mut [0xFE, 2, 3]),
        );
    }

    #[test]
    fn test_array() {
        #[derive(FromBytes, IntoBytes, Immutable)]
        #[repr(C)]
        struct Foo {
            a: [u16; 33],
        }

        let foo = Foo { a: [0xFFFF; 33] };
        let expected = [0xFFu8; 66];
        assert_eq!(foo.as_bytes(), &expected[..]);
    }

    #[test]
    fn test_new_zeroed() {
        assert!(!bool::new_zeroed());
        assert_eq!(u64::new_zeroed(), 0);
        // This test exists in order to exercise unsafe code, especially when
        // running under Miri.
        #[allow(clippy::unit_cmp)]
        {
            assert_eq!(<()>::new_zeroed(), ());
        }
    }

    #[test]
    fn test_transparent_packed_generic_struct() {
        #[derive(IntoBytes, FromBytes, Unaligned)]
        #[repr(transparent)]
        #[allow(dead_code)] // We never construct this type
        struct Foo<T> {
            _t: T,
            _phantom: PhantomData<()>,
        }

        assert_impl_all!(Foo<u32>: FromZeros, FromBytes, IntoBytes);
        assert_impl_all!(Foo<u8>: Unaligned);

        #[derive(IntoBytes, FromBytes, Unaligned)]
        #[repr(C, packed)]
        #[allow(dead_code)] // We never construct this type
        struct Bar<T, U> {
            _t: T,
            _u: U,
        }

        assert_impl_all!(Bar<u8, AU64>: FromZeros, FromBytes, IntoBytes, Unaligned);
    }

    #[cfg(feature = "alloc")]
    mod alloc {
        use super::*;

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_extend_vec_zeroed() {
            // Test extending when there is an existing allocation.
            let mut v = vec![100u16, 200, 300];
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 6);
            assert_eq!(&*v, &[100, 200, 300, 0, 0, 0]);
            drop(v);

            // Test extending when there is no existing allocation.
            let mut v: Vec<u64> = Vec::new();
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 3);
            assert_eq!(&*v, &[0, 0, 0]);
            drop(v);
        }

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_extend_vec_zeroed_zst() {
            // Test extending when there is an existing (fake) allocation.
            let mut v = vec![(), (), ()];
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 6);
            assert_eq!(&*v, &[(), (), (), (), (), ()]);
            drop(v);

            // Test extending when there is no existing (fake) allocation.
            let mut v: Vec<()> = Vec::new();
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(&*v, &[(), (), ()]);
            drop(v);
        }

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_insert_vec_zeroed() {
            // Insert at start (no existing allocation).
            let mut v: Vec<u64> = Vec::new();
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 2);
            assert_eq!(&*v, &[0, 0]);
            drop(v);

            // Insert at start.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 5);
            assert_eq!(&*v, &[0, 0, 100, 200, 300]);
            drop(v);

            // Insert at middle.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 1, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[100, 0, 200, 300]);
            drop(v);

            // Insert at end.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 3, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[100, 200, 300, 0]);
            drop(v);
        }

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_insert_vec_zeroed_zst() {
            // Insert at start (no existing fake allocation).
            let mut v: Vec<()> = Vec::new();
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 2);
            assert_eq!(&*v, &[(), ()]);
            drop(v);

            // Insert at start.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 5);
            assert_eq!(&*v, &[(), (), (), (), ()]);
            drop(v);

            // Insert at middle.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 1, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[(), (), (), ()]);
            drop(v);

            // Insert at end.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 3, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[(), (), (), ()]);
            drop(v);
        }

        #[test]
        fn test_new_box_zeroed() {
            assert_eq!(u64::new_box_zeroed(), Ok(Box::new(0)));
        }

        #[test]
        fn test_new_box_zeroed_array() {
            drop(<[u32; 0x1000]>::new_box_zeroed());
        }

        #[test]
        fn test_new_box_zeroed_zst() {
            // This test exists in order to exercise unsafe code, especially
            // when running under Miri.
            #[allow(clippy::unit_cmp)]
            {
                assert_eq!(<()>::new_box_zeroed(), Ok(Box::new(())));
            }
        }

        #[test]
        fn test_new_box_zeroed_with_elems() {
            let mut s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(3).unwrap();
            assert_eq!(s.len(), 3);
            assert_eq!(&*s, &[0, 0, 0]);
            s[1] = 3;
            assert_eq!(&*s, &[0, 3, 0]);
        }

        #[test]
        fn test_new_box_zeroed_with_elems_empty() {
            let s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(0).unwrap();
            assert_eq!(s.len(), 0);
        }

        #[test]
        fn test_new_box_zeroed_with_elems_zst() {
            let mut s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(3).unwrap();
            assert_eq!(s.len(), 3);
            assert!(s.get(10).is_none());
            // This test exists in order to exercise unsafe code, especially
            // when running under Miri.
            #[allow(clippy::unit_cmp)]
            {
                assert_eq!(s[1], ());
            }
            s[2] = ();
        }

        #[test]
        fn test_new_box_zeroed_with_elems_zst_empty() {
            let s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(0).unwrap();
            assert_eq!(s.len(), 0);
        }

        #[test]
        fn new_box_zeroed_with_elems_errors() {
            assert_eq!(<[u16]>::new_box_zeroed_with_elems(usize::MAX), Err(AllocError));

            let max = <usize as core::convert::TryFrom<_>>::try_from(isize::MAX).unwrap();
            assert_eq!(
                <[u16]>::new_box_zeroed_with_elems((max / mem::size_of::<u16>()) + 1),
                Err(AllocError)
            );
        }
    }
}

#[cfg(kani)]
mod proofs {
    use super::*;

    impl kani::Arbitrary for DstLayout {
        fn any() -> Self {
            let align: NonZeroUsize = kani::any();
            let size_info: SizeInfo = kani::any();

            kani::assume(align.is_power_of_two());
            kani::assume(align < DstLayout::THEORETICAL_MAX_ALIGN);

            // For testing purposes, we care most about instantiations of
            // `DstLayout` that can correspond to actual Rust types. We use
            // `Layout` to verify that our `DstLayout` satisfies the validity
            // conditions of Rust layouts.
            kani::assume(
                match size_info {
                    SizeInfo::Sized { size } => Layout::from_size_align(size, align.get()),
                    SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size: _ }) => {
                        // `SliceDst` cannot encode an exact size, but we know
                        // it is at least `offset` bytes.
                        Layout::from_size_align(offset, align.get())
                    }
                }
                .is_ok(),
            );

            Self { align, size_info }
        }
    }

    impl kani::Arbitrary for SizeInfo {
        fn any() -> Self {
            let is_sized: bool = kani::any();

            match is_sized {
                true => {
                    let size: usize = kani::any();

                    kani::assume(size <= isize::MAX as _);

                    SizeInfo::Sized { size }
                }
                false => SizeInfo::SliceDst(kani::any()),
            }
        }
    }

    impl kani::Arbitrary for TrailingSliceLayout {
        fn any() -> Self {
            let elem_size: usize = kani::any();
            let offset: usize = kani::any();

            kani::assume(elem_size < isize::MAX as _);
            kani::assume(offset < isize::MAX as _);

            TrailingSliceLayout { elem_size, offset }
        }
    }

    #[kani::proof]
    fn prove_dst_layout_extend() {
        use crate::util::{max, min, padding_needed_for};

        let base: DstLayout = kani::any();
        let field: DstLayout = kani::any();
        let packed: Option<NonZeroUsize> = kani::any();

        if let Some(max_align) = packed {
            kani::assume(max_align.is_power_of_two());
            kani::assume(base.align <= max_align);
        }

        // The base can only be extended if it's sized.
        kani::assume(matches!(base.size_info, SizeInfo::Sized { .. }));
        let base_size = if let SizeInfo::Sized { size } = base.size_info {
            size
        } else {
            unreachable!();
        };

        // Under the above conditions, `DstLayout::extend` will not panic.
        let composite = base.extend(field, packed);

        // The field's alignment is clamped by `max_align` (i.e., the `packed`
        // attribute, if any) [1].
        //
        // [1] Per https://doc.rust-lang.org/reference/type-layout.html#the-alignment-modifiers:
        //
        //   The alignment of each field, for the purpose of positioning
        //   fields, is the smaller of the specified alignment and the
        //   alignment of the field's type.
        let field_align = min(field.align, packed.unwrap_or(DstLayout::THEORETICAL_MAX_ALIGN));

        // The struct's alignment is the maximum of its previous alignment and
        // `field_align`.
        assert_eq!(composite.align, max(base.align, field_align));

        // Compute the minimum amount of inter-field padding needed to satisfy
        // the field's alignment, and the offset of the trailing field [1].
        //
        // [1] Per https://doc.rust-lang.org/reference/type-layout.html#the-alignment-modifiers:
        //
        //   Inter-field padding is guaranteed to be the minimum required in
        //   order to satisfy each field's (possibly altered) alignment.
        let padding = padding_needed_for(base_size, field_align);
        let offset = base_size + padding;

        // For testing purposes, we'll also construct `alloc::Layout`
        // stand-ins for `DstLayout`, and show that `extend` behaves
        // comparably on both types.
        let base_analog = Layout::from_size_align(base_size, base.align.get()).unwrap();

        match field.size_info {
            SizeInfo::Sized { size: field_size } => {
                if let SizeInfo::Sized { size: composite_size } = composite.size_info {
                    // If the trailing field is sized, the resulting layout
                    // will be sized. Its size will be the sum of the preceding
                    // layout, the size of the new field, and the size of
                    // inter-field padding between the two.
                    assert_eq!(composite_size, offset + field_size);

                    let field_analog =
                        Layout::from_size_align(field_size, field_align.get()).unwrap();

                    if let Ok((actual_composite, actual_offset)) = base_analog.extend(field_analog)
                    {
                        assert_eq!(actual_offset, offset);
                        assert_eq!(actual_composite.size(), composite_size);
                        assert_eq!(actual_composite.align(), composite.align.get());
                    } else {
                        // An error here reflects that the composite of `base`
                        // and `field` cannot correspond to a real Rust type
                        // fragment, because such a fragment would violate the
                        // basic invariants of a valid Rust layout. At the time
                        // of writing, `DstLayout` is a little more permissive
                        // than `Layout`, so we don't assert anything in this
                        // branch (e.g., unreachability).
                    }
                } else {
                    panic!("The composite of two sized layouts must be sized.")
                }
            }
            SizeInfo::SliceDst(TrailingSliceLayout {
                offset: field_offset,
                elem_size: field_elem_size,
            }) => {
                if let SizeInfo::SliceDst(TrailingSliceLayout {
                    offset: composite_offset,
                    elem_size: composite_elem_size,
                }) = composite.size_info
                {
                    // The offset of the trailing slice component is the sum of
                    // the offset of the trailing field and the trailing slice
                    // offset within that field.
                    assert_eq!(composite_offset, offset + field_offset);
                    // The elem size is unchanged.
                    assert_eq!(composite_elem_size, field_elem_size);

                    let field_analog =
                        Layout::from_size_align(field_offset, field_align.get()).unwrap();

                    if let Ok((actual_composite, actual_offset)) = base_analog.extend(field_analog)
                    {
                        assert_eq!(actual_offset, offset);
                        assert_eq!(actual_composite.size(), composite_offset);
                        assert_eq!(actual_composite.align(), composite.align.get());
                    } else {
                        // An error here reflects that the composite of `base`
                        // and `field` cannot correspond to a real Rust type
                        // fragment, because such a fragment would violate the
                        // basic invariants of a valid Rust layout. At the time
                        // of writing, `DstLayout` is a little more permissive
                        // than `Layout`, so we don't assert anything in this
                        // branch (e.g., unreachability).
                    }
                } else {
                    panic!("The extension of a layout with a DST must result in a DST.")
                }
            }
        }
    }

    #[kani::proof]
    #[kani::should_panic]
    fn prove_dst_layout_extend_dst_panics() {
        let base: DstLayout = kani::any();
        let field: DstLayout = kani::any();
        let packed: Option<NonZeroUsize> = kani::any();

        if let Some(max_align) = packed {
            kani::assume(max_align.is_power_of_two());
            kani::assume(base.align <= max_align);
        }

        kani::assume(matches!(base.size_info, SizeInfo::SliceDst(..)));

        let _ = base.extend(field, packed);
    }

    #[kani::proof]
    fn prove_dst_layout_pad_to_align() {
        use crate::util::padding_needed_for;

        let layout: DstLayout = kani::any();

        let padded: DstLayout = layout.pad_to_align();

        // Calling `pad_to_align` does not alter the `DstLayout`'s alignment.
        assert_eq!(padded.align, layout.align);

        if let SizeInfo::Sized { size: unpadded_size } = layout.size_info {
            if let SizeInfo::Sized { size: padded_size } = padded.size_info {
                // If the layout is sized, it will remain sized after padding
                // is added. Its padded size will be the sum of its unpadded
                // size and the size of the trailing padding needed to satisfy
                // its alignment requirements.
                let padding = padding_needed_for(unpadded_size, layout.align);
                assert_eq!(padded_size, unpadded_size + padding);

                // Prove that calling `DstLayout::pad_to_align` behaves
                // identically to `Layout::pad_to_align`.
                let layout_analog =
                    Layout::from_size_align(unpadded_size, layout.align.get()).unwrap();
                let padded_analog = layout_analog.pad_to_align();
                assert_eq!(padded_analog.align(), layout.align.get());
                assert_eq!(padded_analog.size(), padded_size);
            } else {
                panic!("The padding of a sized layout must result in a sized layout.")
            }
        } else {
            // If the layout is a DST, padding cannot be statically added.
            assert_eq!(padded.size_info, layout.size_info);
        }
    }
}
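// The proofs above lean on `crate::util::padding_needed_for`. A std-only
// sketch of an equivalent computation (this local `padding_needed_for` is an
// assumed stand-in, not zerocopy's actual helper), cross-checked against
// `std::alloc::Layout::pad_to_align`:

```rust
use std::alloc::Layout;

/// Returns the number of trailing padding bytes needed to round `size` up to
/// the next multiple of `align`, where `align` is a power of two.
fn padding_needed_for(size: usize, align: usize) -> usize {
    let rounded = size.wrapping_add(align - 1) & !(align - 1);
    rounded.wrapping_sub(size)
}

fn main() {
    for &(size, align) in &[(0, 1), (1, 8), (5, 4), (13, 8)] {
        let layout = Layout::from_size_align(size, align).unwrap();
        // `pad_to_align` should grow the size by exactly the computed padding.
        assert_eq!(layout.pad_to_align().size(), size + padding_needed_for(size, align));
    }
    println!("padding for (13, 8) = {}", padding_needed_for(13, 8));
}
```

The same rounding also yields the trailing-field offset in `prove_dst_layout_extend`: `offset = base_size + padding_needed_for(base_size, field_align)`.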