Coverage Report

Created: 2025-12-20 06:48

/rust/registry/src/index.crates.io-1949cf8c6b5b557f/zerocopy-0.8.31/src/lib.rs
Line
Count
Source
1
// Copyright 2018 The Fuchsia Authors
2
//
3
// Licensed under the 2-Clause BSD License <LICENSE-BSD or
4
// https://opensource.org/license/bsd-2-clause>, Apache License, Version 2.0
5
// <LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0>, or the MIT
6
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.
7
// This file may not be copied, modified, or distributed except according to
8
// those terms.
9
10
// After updating the following doc comment, make sure to run the following
11
// command to update `README.md` based on its contents:
12
//
13
//   cargo -q run --manifest-path tools/Cargo.toml -p generate-readme > README.md
14
15
//! ***<span style="font-size: 140%">Fast, safe, <span
16
//! style="color:red;">compile error</span>. Pick two.</span>***
17
//!
18
//! Zerocopy makes zero-cost memory manipulation effortless. We write `unsafe`
19
//! so you don't have to.
20
//!
21
//! *For an overview of what's changed from zerocopy 0.7, check out our [release
22
//! notes][release-notes], which include a step-by-step upgrading guide.*
23
//!
24
//! *Have questions? Need more out of zerocopy? Submit a [customer request
25
//! issue][customer-request-issue] or ask the maintainers on
26
//! [GitHub][github-q-a] or [Discord][discord]!*
27
//!
28
//! [customer-request-issue]: https://github.com/google/zerocopy/issues/new/choose
29
//! [release-notes]: https://github.com/google/zerocopy/discussions/1680
30
//! [github-q-a]: https://github.com/google/zerocopy/discussions/categories/q-a
31
//! [discord]: https://discord.gg/MAvWH2R6zk
32
//!
33
//! # Overview
34
//!
35
//! ##### Conversion Traits
36
//!
37
//! Zerocopy provides four derivable traits for zero-cost conversions:
38
//! - [`TryFromBytes`] indicates that a type may safely be converted from
39
//!   certain byte sequences (conditional on runtime checks)
40
//! - [`FromZeros`] indicates that a sequence of zero bytes represents a valid
41
//!   instance of a type
42
//! - [`FromBytes`] indicates that a type may safely be converted from an
43
//!   arbitrary byte sequence
44
//! - [`IntoBytes`] indicates that a type may safely be converted *to* a byte
45
//!   sequence
46
//!
47
//! These traits support sized types, slices, and [slice DSTs][slice-dsts].
48
//!
49
//! [slice-dsts]: KnownLayout#dynamically-sized-types
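For concreteness, here is a minimal sketch of a round trip through `FromBytes` and `IntoBytes` (it assumes the `derive` feature; `PortPair` is an illustrative type, not part of zerocopy):

```rust
use zerocopy::{FromBytes, Immutable, IntoBytes};

#[derive(FromBytes, IntoBytes, Immutable)]
#[repr(C)]
struct PortPair {
    src_port: [u8; 2],
    dst_port: [u8; 2],
}

let bytes = [0u8, 80, 1, 187];
// `FromBytes` allows constructing a `PortPair` from arbitrary bytes...
let pair = PortPair::read_from_bytes(&bytes[..]).unwrap();
assert_eq!(pair.src_port, [0, 80]);
// ...and `IntoBytes` (together with `Immutable`) allows viewing it as bytes again.
assert_eq!(pair.as_bytes(), &bytes[..]);
```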
50
//!
51
//! ##### Marker Traits
52
//!
53
//! Zerocopy provides three derivable marker traits that do not provide any
54
//! functionality themselves, but are required to call certain methods provided
55
//! by the conversion traits:
56
//! - [`KnownLayout`] indicates that zerocopy can reason about certain layout
57
//!   qualities of a type
58
//! - [`Immutable`] indicates that a type is free from interior mutability,
59
//!   except by ownership or an exclusive (`&mut`) borrow
60
//! - [`Unaligned`] indicates that a type's alignment requirement is 1
61
//!
62
//! You should generally derive these marker traits whenever possible.
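A short sketch of why these matter (again assuming the `derive` feature; `Pair` is illustrative): the zero-copy `FromBytes::ref_from_bytes` method is only available because `KnownLayout` and `Immutable` are also derived:

```rust
use zerocopy::{FromBytes, Immutable, KnownLayout, Unaligned};

#[derive(FromBytes, Immutable, KnownLayout, Unaligned)]
#[repr(C)]
struct Pair {
    first: u8,
    second: u8,
}

let bytes = [1u8, 2];
// `ref_from_bytes` is zero-copy: it borrows the bytes rather than copying
// them. It is callable here because `KnownLayout` and `Immutable` are
// derived; `Unaligned` guarantees the cast can never fail due to alignment.
let pair = Pair::ref_from_bytes(&bytes[..]).unwrap();
assert_eq!((pair.first, pair.second), (1, 2));
```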
63
//!
64
//! ##### Conversion Macros
65
//!
66
//! Zerocopy provides six macros for safe casting between types:
67
//!
68
//! - ([`try_`][try_transmute])[`transmute`] (conditionally) converts a value of
69
//!   one type to a value of another type of the same size
70
//! - ([`try_`][try_transmute_mut])[`transmute_mut`] (conditionally) converts a
71
//!   mutable reference of one type to a mutable reference of another type of
72
//!   the same size
73
//! - ([`try_`][try_transmute_ref])[`transmute_ref`] (conditionally) converts a
74
//!   mutable or immutable reference of one type to an immutable reference of
75
//!   another type of the same size
76
//!
77
//! These macros perform *compile-time* size and alignment checks, meaning that
78
//! unconditional casts have zero cost at runtime. Conditional casts do not need
79
//! to validate size or alignment at runtime, but do need to validate contents.
80
//!
81
//! These macros cannot be used in generic contexts. For generic conversions,
82
//! use the methods defined by the [conversion traits](#conversion-traits).
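A brief sketch of both macro families (the values are illustrative):

```rust
use zerocopy::{transmute, try_transmute};

// Unconditional: the size check happens at compile time, so this conversion
// is free at runtime.
let n: u32 = transmute!([0u8, 0, 0, 1]);
assert_eq!(n.to_ne_bytes(), [0, 0, 0, 1]);

// Conditional: size is still checked at compile time, but the *contents*
// must be validated at runtime, because not every `u8` is a valid `bool`.
let valid: Result<bool, _> = try_transmute!(1u8);
assert_eq!(valid.ok(), Some(true));
let invalid: Result<bool, _> = try_transmute!(3u8);
assert_eq!(invalid.ok(), None);
```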
83
//!
84
//! ##### Byteorder-Aware Numerics
85
//!
86
//! Zerocopy provides byte-order aware integer types that support these
87
//! conversions; see the [`byteorder`] module. These types are especially useful
88
//! for network parsing.
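For example, a sketch of parsing a fixed network header (assumes the `derive` feature; `TcpHeaderPrefix` is an illustrative type, not part of zerocopy):

```rust
use zerocopy::byteorder::{BigEndian, U16, U32};
use zerocopy::FromBytes;

// Multi-byte fields are stored big-endian in memory regardless of the host's
// endianness; `get` converts to a native-endian integer on access.
#[derive(FromBytes)]
#[repr(C)]
struct TcpHeaderPrefix {
    src_port: U16<BigEndian>,
    dst_port: U16<BigEndian>,
    seq: U32<BigEndian>,
}

let bytes = [0x01u8, 0xbb, 0x00, 0x50, 0, 0, 0, 1];
let hdr = TcpHeaderPrefix::read_from_bytes(&bytes[..]).unwrap();
assert_eq!(hdr.src_port.get(), 443);
assert_eq!(hdr.dst_port.get(), 80);
assert_eq!(hdr.seq.get(), 1);
```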
89
//!
90
//! # Cargo Features
91
//!
92
//! - **`alloc`**
93
//!   By default, `zerocopy` is `no_std`. When the `alloc` feature is enabled,
94
//!   the `alloc` crate is added as a dependency, and some allocation-related
95
//!   functionality is added.
96
//!
97
//! - **`std`**
98
//!   By default, `zerocopy` is `no_std`. When the `std` feature is enabled, the
99
//!   `std` crate is added as a dependency (i.e., `no_std` is disabled), and
100
//!   support for some `std` types is added. `std` implies `alloc`.
101
//!
102
//! - **`derive`**
103
//!   Provides derives for the core marker traits via the `zerocopy-derive`
104
//!   crate. These derives are re-exported from `zerocopy`, so it is not
105
//!   necessary to depend on `zerocopy-derive` directly.
106
//!
107
//!   However, you may experience better compile times if you instead directly
108
//!   depend on both `zerocopy` and `zerocopy-derive` in your `Cargo.toml`,
109
//!   since doing so will allow Rust to compile these crates in parallel. To do
110
//!   so, do *not* enable the `derive` feature, and list both dependencies in
111
//!   your `Cargo.toml` with the same leading non-zero version number; e.g.:
112
//!
113
//!   ```toml
114
//!   [dependencies]
115
//!   zerocopy = "0.X"
116
//!   zerocopy-derive = "0.X"
117
//!   ```
118
//!
119
//!   To avoid the risk of [duplicate import errors][duplicate-import-errors] if
120
//!   one of your dependencies enables zerocopy's `derive` feature, import
121
//!   derives as `use zerocopy_derive::*` rather than by name (e.g., `use
122
//!   zerocopy_derive::FromBytes`).
123
//!
124
//! - **`simd`**
125
//!   When the `simd` feature is enabled, `FromZeros`, `FromBytes`, and
126
//!   `IntoBytes` impls are emitted for all stable SIMD types which exist on the
127
//!   target platform. Note that the layout of SIMD types is not yet stabilized,
128
//!   so these impls may be removed in the future if layout changes make them
129
//!   invalid. For more information, see the Unsafe Code Guidelines Reference
130
//!   page on the [layout of packed SIMD vectors][simd-layout].
131
//!
132
//! - **`simd-nightly`**
133
//!   Enables the `simd` feature and adds support for SIMD types which are only
134
//!   available on nightly. Since these types are unstable, support for any type
135
//!   may be removed at any point in the future.
136
//!
137
//! - **`float-nightly`**
138
//!   Adds support for the unstable `f16` and `f128` types. These types are
139
//!   not yet fully implemented and may not be supported on all platforms.
140
//!
141
//! [duplicate-import-errors]: https://github.com/google/zerocopy/issues/1587
142
//! [simd-layout]: https://rust-lang.github.io/unsafe-code-guidelines/layout/packed-simd-vectors.html
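For reference, a hypothetical `Cargo.toml` fragment enabling some of the optional features described above (the version number is illustrative):

```toml
[dependencies]
zerocopy = { version = "0.8", features = ["derive", "std"] }
```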
143
//!
144
//! # Security Ethos
145
//!
146
//! Zerocopy is expressly designed for use in security-critical contexts. We
147
//! strive to ensure that zerocopy code is sound under Rust's current
148
//! memory model, and *any future memory model*. We ensure this by:
149
//! - **...not 'guessing' about Rust's semantics.**
150
//!   We annotate `unsafe` code with a precise rationale for its soundness that
151
//!   cites a relevant section of Rust's official documentation. When Rust's
152
//!   documented semantics are unclear, we work with the Rust Operational
153
//!   Semantics Team to clarify Rust's documentation.
154
//! - **...rigorously testing our implementation.**
155
//!   We run tests using [Miri], ensuring that zerocopy is sound across a wide
156
//!   array of supported target platforms of varying endianness and pointer
157
//!   width, and across both current and experimental memory models of Rust.
158
//! - **...formally proving the correctness of our implementation.**
159
//!   We apply formal verification tools like [Kani][kani] to prove zerocopy's
160
//!   correctness.
161
//!
162
//! For more information, see our full [soundness policy].
163
//!
164
//! [Miri]: https://github.com/rust-lang/miri
165
//! [Kani]: https://github.com/model-checking/kani
166
//! [soundness policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#soundness
167
//!
168
//! # Relationship to Project Safe Transmute
169
//!
170
//! [Project Safe Transmute] is an official initiative of the Rust Project to
171
//! develop language-level support for safer transmutation. The Project consults
172
//! with crates like zerocopy to identify aspects of safer transmutation that
173
//! would benefit from compiler support, and has developed an [experimental,
174
//! compiler-supported analysis][mcp-transmutability] which determines whether,
175
//! for a given type, any value of that type may be soundly transmuted into
176
//! another type. Once this functionality is sufficiently mature, zerocopy
177
//! intends to replace its internal transmutability analysis (implemented by our
178
//! custom derives) with the compiler-supported one. This change will likely be
179
//! an implementation detail that is invisible to zerocopy's users.
180
//!
181
//! Project Safe Transmute will not replace the need for most of zerocopy's
182
//! higher-level abstractions. The experimental compiler analysis is a tool for
183
//! checking the soundness of `unsafe` code, not a tool to avoid writing
184
//! `unsafe` code altogether. For the foreseeable future, crates like zerocopy
185
//! will still be required in order to provide higher-level abstractions on top
186
//! of the building block provided by Project Safe Transmute.
187
//!
188
//! [Project Safe Transmute]: https://rust-lang.github.io/rfcs/2835-project-safe-transmute.html
189
//! [mcp-transmutability]: https://github.com/rust-lang/compiler-team/issues/411
190
//!
191
//! # MSRV
192
//!
193
//! See our [MSRV policy].
194
//!
195
//! [MSRV policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#msrv
196
//!
197
//! # Changelog
198
//!
199
//! Zerocopy uses [GitHub Releases].
200
//!
201
//! [GitHub Releases]: https://github.com/google/zerocopy/releases
202
//!
203
//! # Thanks
204
//!
205
//! Zerocopy is maintained by engineers at Google with help from [many wonderful
206
//! contributors][contributors]. Thank you to everyone who has lent a hand in
207
//! making Rust a little more secure!
208
//!
209
//! [contributors]: https://github.com/google/zerocopy/graphs/contributors
210
211
// Sometimes we want to use lints which were added after our MSRV.
212
// `unknown_lints` is `warn` by default and we deny warnings in CI, so without
213
// this attribute, any unknown lint would cause a CI failure when testing with
214
// our MSRV.
215
#![allow(unknown_lints, non_local_definitions, unreachable_patterns)]
216
#![deny(renamed_and_removed_lints)]
217
#![deny(
218
    anonymous_parameters,
219
    deprecated_in_future,
220
    late_bound_lifetime_arguments,
221
    missing_copy_implementations,
222
    missing_debug_implementations,
223
    missing_docs,
224
    path_statements,
225
    patterns_in_fns_without_body,
226
    rust_2018_idioms,
227
    trivial_numeric_casts,
228
    unreachable_pub,
229
    unsafe_op_in_unsafe_fn,
230
    unused_extern_crates,
231
    // We intentionally choose not to deny `unused_qualifications`. When items
232
    // are added to the prelude (e.g., `core::mem::size_of`), this has the
233
    // consequence of making some uses trigger this lint on the latest toolchain
234
    // (e.g., `mem::size_of`), but fixing it (e.g. by replacing with `size_of`)
235
    // does not work on older toolchains.
236
    //
237
    // We tested a more complicated fix in #1413, but ultimately decided that,
238
    // since this lint is just a minor style lint, the complexity isn't worth it
239
    // - it's fine to occasionally have unused qualifications slip through,
240
    // especially since these do not affect our user-facing API in any way.
241
    variant_size_differences
242
)]
243
#![cfg_attr(
244
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
245
    deny(fuzzy_provenance_casts, lossy_provenance_casts)
246
)]
247
#![deny(
248
    clippy::all,
249
    clippy::alloc_instead_of_core,
250
    clippy::arithmetic_side_effects,
251
    clippy::as_underscore,
252
    clippy::assertions_on_result_states,
253
    clippy::as_conversions,
254
    clippy::correctness,
255
    clippy::dbg_macro,
256
    clippy::decimal_literal_representation,
257
    clippy::double_must_use,
258
    clippy::get_unwrap,
259
    clippy::indexing_slicing,
260
    clippy::missing_inline_in_public_items,
261
    clippy::missing_safety_doc,
262
    clippy::multiple_unsafe_ops_per_block,
263
    clippy::must_use_candidate,
264
    clippy::must_use_unit,
265
    clippy::obfuscated_if_else,
266
    clippy::perf,
267
    clippy::print_stdout,
268
    clippy::return_self_not_must_use,
269
    clippy::std_instead_of_core,
270
    clippy::style,
271
    clippy::suspicious,
272
    clippy::todo,
273
    clippy::undocumented_unsafe_blocks,
274
    clippy::unimplemented,
275
    clippy::unnested_or_patterns,
276
    clippy::unwrap_used,
277
    clippy::use_debug
278
)]
279
// `clippy::incompatible_msrv` (implied by `clippy::suspicious`): This sometimes
280
// has false positives, and we test on our MSRV in CI, so it doesn't help us
281
// anyway.
282
#![allow(clippy::needless_lifetimes, clippy::type_complexity, clippy::incompatible_msrv)]
283
#![deny(
284
    rustdoc::bare_urls,
285
    rustdoc::broken_intra_doc_links,
286
    rustdoc::invalid_codeblock_attributes,
287
    rustdoc::invalid_html_tags,
288
    rustdoc::invalid_rust_codeblocks,
289
    rustdoc::missing_crate_level_docs,
290
    rustdoc::private_intra_doc_links
291
)]
292
// In test code, it makes sense to weight more heavily towards concise, readable
293
// code over correct or debuggable code.
294
#![cfg_attr(any(test, kani), allow(
295
    // In tests, you get line numbers and have access to source code, so panic
296
    // messages are less important. You also often unwrap a lot, which would
297
    // make expect'ing instead very verbose.
298
    clippy::unwrap_used,
299
    // In tests, there's no harm to "panic risks" - the worst that can happen is
300
    // that your test will fail, and you'll fix it. By contrast, panic risks in
301
    // production code introduce the possibility of code panicking unexpectedly "in
302
    // the field".
303
    clippy::arithmetic_side_effects,
304
    clippy::indexing_slicing,
305
))]
306
#![cfg_attr(not(any(test, kani, feature = "std")), no_std)]
307
#![cfg_attr(
308
    all(feature = "simd-nightly", target_arch = "arm"),
309
    feature(stdarch_arm_neon_intrinsics)
310
)]
311
#![cfg_attr(
312
    all(feature = "simd-nightly", any(target_arch = "powerpc", target_arch = "powerpc64")),
313
    feature(stdarch_powerpc)
314
)]
315
#![cfg_attr(feature = "float-nightly", feature(f16, f128))]
316
#![cfg_attr(doc_cfg, feature(doc_cfg))]
317
#![cfg_attr(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS, feature(coverage_attribute))]
318
#![cfg_attr(
319
    any(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS, miri),
320
    feature(layout_for_ptr)
321
)]
322
323
// This is a hack to allow zerocopy-derive derives to work in this crate. They
324
// assume that zerocopy is linked as an extern crate, so they access items from
325
// it as `zerocopy::Xxx`. This makes that still work.
326
#[cfg(any(feature = "derive", test))]
327
extern crate self as zerocopy;
328
329
#[doc(hidden)]
330
#[macro_use]
331
pub mod util;
332
333
pub mod byte_slice;
334
pub mod byteorder;
335
mod deprecated;
336
337
#[doc(hidden)]
338
pub mod doctests;
339
340
// This module is `pub` so that zerocopy's error types and error handling
341
// documentation is grouped together in a cohesive module. In practice, we
342
// expect most users to use the re-export of `error`'s items to avoid identifier
343
// stuttering.
344
pub mod error;
345
mod impls;
346
#[doc(hidden)]
347
pub mod layout;
348
mod macros;
349
#[doc(hidden)]
350
pub mod pointer;
351
mod r#ref;
352
mod split_at;
353
// FIXME(#252): If we make this pub, come up with a better name.
354
mod wrappers;
355
356
use core::{
357
    cell::{Cell, UnsafeCell},
358
    cmp::Ordering,
359
    fmt::{self, Debug, Display, Formatter},
360
    hash::Hasher,
361
    marker::PhantomData,
362
    mem::{self, ManuallyDrop, MaybeUninit as CoreMaybeUninit},
363
    num::{
364
        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
365
        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize, Wrapping,
366
    },
367
    ops::{Deref, DerefMut},
368
    ptr::{self, NonNull},
369
    slice,
370
};
371
#[cfg(feature = "std")]
372
use std::io;
373
374
use crate::pointer::invariant::{self, BecauseExclusive};
375
pub use crate::{
376
    byte_slice::*,
377
    byteorder::*,
378
    error::*,
379
    r#ref::*,
380
    split_at::{Split, SplitAt},
381
    wrappers::*,
382
};
383
384
#[cfg(any(feature = "alloc", test, kani))]
385
extern crate alloc;
386
#[cfg(any(feature = "alloc", test))]
387
use alloc::{boxed::Box, vec::Vec};
388
#[cfg(any(feature = "alloc", test))]
389
use core::alloc::Layout;
390
391
use util::MetadataOf;
392
393
// Used by `KnownLayout`.
394
#[doc(hidden)]
395
pub use crate::layout::*;
396
// Used by `TryFromBytes::is_bit_valid`.
397
#[doc(hidden)]
398
pub use crate::pointer::{invariant::BecauseImmutable, Maybe, Ptr};
399
// For each trait polyfill, as soon as the corresponding feature is stable, the
400
// polyfill import will be unused because method/function resolution will prefer
401
// the inherent method/function over a trait method/function. Thus, we suppress
402
// the `unused_imports` warning.
403
//
404
// See the documentation on `util::polyfills` for more information.
405
#[allow(unused_imports)]
406
use crate::util::polyfills::{self, NonNullExt as _, NumExt as _};
407
408
#[rustversion::nightly]
409
#[cfg(all(test, not(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS)))]
410
const _: () = {
411
    #[deprecated = "some tests may be skipped due to missing RUSTFLAGS=\"--cfg __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS\""]
412
    const _WARNING: () = ();
413
    #[warn(deprecated)]
414
    _WARNING
415
};
416
417
// These exist so that code which was written against the old names will get
418
// less confusing error messages when they upgrade to a more recent version of
419
// zerocopy. On our MSRV toolchain, the error messages read, for example:
420
//
421
//   error[E0603]: trait `FromZeroes` is private
422
//       --> examples/deprecated.rs:1:15
423
//        |
424
//   1    | use zerocopy::FromZeroes;
425
//        |               ^^^^^^^^^^ private trait
426
//        |
427
//   note: the trait `FromZeroes` is defined here
428
//       --> /Users/josh/workspace/zerocopy/src/lib.rs:1845:5
429
//        |
430
//   1845 | use FromZeros as FromZeroes;
431
//        |     ^^^^^^^^^^^^^^^^^^^^^^^
432
//
433
// The "note" provides enough context to make it easy to figure out how to fix
434
// the error.
435
/// Implements [`KnownLayout`].
436
///
437
/// This derive analyzes various aspects of a type's layout that are needed for
438
/// some of zerocopy's APIs. It can be applied to structs, enums, and unions;
439
/// e.g.:
440
///
441
/// ```
442
/// # use zerocopy_derive::KnownLayout;
443
/// #[derive(KnownLayout)]
444
/// struct MyStruct {
445
/// # /*
446
///     ...
447
/// # */
448
/// }
449
///
450
/// #[derive(KnownLayout)]
451
/// enum MyEnum {
452
/// #   V00,
453
/// # /*
454
///     ...
455
/// # */
456
/// }
457
///
458
/// #[derive(KnownLayout)]
459
/// union MyUnion {
460
/// #   variant: u8,
461
/// # /*
462
///     ...
463
/// # */
464
/// }
465
/// ```
466
///
467
/// # Limitations
468
///
469
/// This derive cannot currently be applied to unsized structs without an
470
/// explicit `repr` attribute.
471
///
472
/// Some invocations of this derive run afoul of a [known bug] in Rust's type
473
/// privacy checker. For example, this code:
474
///
475
/// ```compile_fail,E0446
476
/// use zerocopy::*;
477
/// # use zerocopy_derive::*;
478
///
479
/// #[derive(KnownLayout)]
480
/// #[repr(C)]
481
/// pub struct PublicType {
482
///     leading: Foo,
483
///     trailing: Bar,
484
/// }
485
///
486
/// #[derive(KnownLayout)]
487
/// struct Foo;
488
///
489
/// #[derive(KnownLayout)]
490
/// struct Bar;
491
/// ```
492
///
493
/// ...results in a compilation error:
494
///
495
/// ```text
496
/// error[E0446]: private type `Bar` in public interface
497
///  --> examples/bug.rs:3:10
498
///    |
499
/// 3  | #[derive(KnownLayout)]
500
///    |          ^^^^^^^^^^^ can't leak private type
501
/// ...
502
/// 14 | struct Bar;
503
///    | ---------- `Bar` declared as private
504
///    |
505
///    = note: this error originates in the derive macro `KnownLayout` (in Nightly builds, run with -Z macro-backtrace for more info)
506
/// ```
507
///
508
/// This issue arises when `#[derive(KnownLayout)]` is applied to `repr(C)`
509
/// structs whose trailing field type is less public than the enclosing struct.
510
///
511
/// To work around this, mark the trailing field type `pub` and annotate it with
512
/// `#[doc(hidden)]`; e.g.:
513
///
514
/// ```no_run
515
/// use zerocopy::*;
516
/// # use zerocopy_derive::*;
517
///
518
/// #[derive(KnownLayout)]
519
/// #[repr(C)]
520
/// pub struct PublicType {
521
///     leading: Foo,
522
///     trailing: Bar,
523
/// }
524
///
525
/// #[derive(KnownLayout)]
526
/// struct Foo;
527
///
528
/// #[doc(hidden)]
529
/// #[derive(KnownLayout)]
530
/// pub struct Bar; // <- `Bar` is now also `pub`
531
/// ```
532
///
533
/// [known bug]: https://github.com/rust-lang/rust/issues/45713
534
#[cfg(any(feature = "derive", test))]
535
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
536
pub use zerocopy_derive::KnownLayout;
537
#[allow(unused)]
538
use {FromZeros as FromZeroes, IntoBytes as AsBytes, Ref as LayoutVerified};
539
540
/// Indicates that zerocopy can reason about certain aspects of a type's layout.
541
///
542
/// This trait is required by many of zerocopy's APIs. It supports sized types,
543
/// slices, and [slice DSTs](#dynamically-sized-types).
544
///
545
/// # Implementation
546
///
547
/// **Do not implement this trait yourself!** Instead, use
548
/// [`#[derive(KnownLayout)]`][derive]; e.g.:
549
///
550
/// ```
551
/// # use zerocopy_derive::KnownLayout;
552
/// #[derive(KnownLayout)]
553
/// struct MyStruct {
554
/// # /*
555
///     ...
556
/// # */
557
/// }
558
///
559
/// #[derive(KnownLayout)]
560
/// enum MyEnum {
561
/// # /*
562
///     ...
563
/// # */
564
/// }
565
///
566
/// #[derive(KnownLayout)]
567
/// union MyUnion {
568
/// #   variant: u8,
569
/// # /*
570
///     ...
571
/// # */
572
/// }
573
/// ```
574
///
575
/// This derive performs a sophisticated analysis to deduce the layout
576
/// characteristics of types. You **must** implement this trait via the derive.
577
///
578
/// # Dynamically-sized types
579
///
580
/// `KnownLayout` supports slice-based dynamically sized types ("slice DSTs").
581
///
582
/// A slice DST is a type whose trailing field is either a slice or another
583
/// slice DST, rather than a type with fixed size. For example:
584
///
585
/// ```
586
/// #[repr(C)]
587
/// struct PacketHeader {
588
/// # /*
589
///     ...
590
/// # */
591
/// }
592
///
593
/// #[repr(C)]
594
/// struct Packet {
595
///     header: PacketHeader,
596
///     body: [u8],
597
/// }
598
/// ```
599
///
600
/// It can be useful to think of slice DSTs as a generalization of slices - in
601
/// other words, a normal slice is just the special case of a slice DST with
602
/// zero leading fields. In particular:
603
/// - Like slices, slice DSTs can have different lengths at runtime
604
/// - Like slices, slice DSTs cannot be passed by-value, but only by reference
605
///   or via other indirection such as `Box`
606
/// - Like slices, a reference (or `Box`, or other pointer type) to a slice DST
607
///   encodes the number of elements in the trailing slice field
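A sketch of this pattern in practice (assumes the `derive` feature; the field names and the `ref_from_bytes` call are illustrative of the general approach, not taken from the example above):

```rust
use zerocopy::{FromBytes, Immutable, KnownLayout};

#[derive(FromBytes, KnownLayout, Immutable)]
#[repr(C)]
struct PacketHeader {
    id: [u8; 4],
}

#[derive(FromBytes, KnownLayout, Immutable)]
#[repr(C)]
struct Packet {
    header: PacketHeader,
    body: [u8],
}

let bytes = &[1u8, 2, 3, 4, 5, 6, 7, 8, 9][..];
// The returned reference encodes the body length: the trailing slice
// absorbs whatever bytes remain after the 4-byte header.
let packet = Packet::ref_from_bytes(bytes).unwrap();
assert_eq!(packet.header.id, [1, 2, 3, 4]);
assert_eq!(packet.body.len(), 5);
```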
608
///
609
/// ## Slice DST layout
610
///
611
/// Just like other composite Rust types, the layout of a slice DST is not
612
/// well-defined unless it is specified using an explicit `#[repr(...)]`
613
/// attribute such as `#[repr(C)]`. [Other representations are
614
/// supported][reprs], but in this section, we'll use `#[repr(C)]` as our
615
/// example.
616
///
617
/// A `#[repr(C)]` slice DST is laid out [just like sized `#[repr(C)]`
618
/// types][repr-c-structs], but the presence of a variable-length field
619
/// introduces the possibility of *dynamic padding*. In particular, it may be
620
/// necessary to add trailing padding *after* the trailing slice field in order
621
/// to satisfy the outer type's alignment, and the amount of padding required
622
/// may be a function of the length of the trailing slice field. This is just a
623
/// natural consequence of the normal `#[repr(C)]` rules applied to slice DSTs,
624
/// but it can result in surprising behavior. For example, consider the
625
/// following type:
626
///
627
/// ```
628
/// #[repr(C)]
629
/// struct Foo {
630
///     a: u32,
631
///     b: u8,
632
///     z: [u16],
633
/// }
634
/// ```
635
///
636
/// Assuming that `u32` has alignment 4 (this is not true on all platforms),
637
/// then `Foo` has alignment 4 as well. Here is the smallest possible value for
638
/// `Foo`:
639
///
640
/// ```text
641
/// byte offset | 01234567
642
///       field | aaaab---
643
///                    ><
644
/// ```
645
///
646
/// In this value, `z` has length 0. Abiding by `#[repr(C)]`, the lowest offset
647
/// that we can place `z` at is 5, but since `z` has alignment 2, we need to
648
/// round up to offset 6. This means that there is one byte of padding between
649
/// `b` and `z`, then 0 bytes of `z` itself (denoted `><` in this diagram), and
650
/// then two bytes of padding after `z` in order to satisfy the overall
651
/// alignment of `Foo`. The size of this instance is 8 bytes.
652
///
653
/// What about if `z` has length 1?
654
///
655
/// ```text
656
/// byte offset | 01234567
657
///       field | aaaab-zz
658
/// ```
659
///
660
/// In this instance, `z` has length 1, and thus takes up 2 bytes. That means
661
/// that we no longer need padding after `z` in order to satisfy `Foo`'s
662
/// alignment. We've now seen two different values of `Foo` with two different
663
/// lengths of `z`, but they both have the same size - 8 bytes.
664
///
665
/// What about if `z` has length 2?
666
///
667
/// ```text
668
/// byte offset | 012345678901
669
///       field | aaaab-zzzz--
670
/// ```
671
///
672
/// Now `z` has length 2, and thus takes up 4 bytes. This brings our un-padded
673
/// size to 10, and so we now need another 2 bytes of padding after `z` to
674
/// satisfy `Foo`'s alignment.
675
///
676
/// Again, all of this is just a logical consequence of the `#[repr(C)]` rules
677
/// applied to slice DSTs, but it can be surprising that the amount of trailing
678
/// padding becomes a function of the trailing slice field's length, and thus
679
/// can only be computed at runtime.
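The same arithmetic, written out as a small sketch (`foo_size` is an illustrative helper, not zerocopy API):

```rust
// Mirrors the layout rules above: `z` starts at offset 6, each element is
// 2 bytes, and the total is rounded up to `Foo`'s alignment of 4.
fn foo_size(len: usize) -> usize {
    let unpadded = 6 + 2 * len;
    (unpadded + 3) / 4 * 4 // round up to the next multiple of 4
}

assert_eq!(foo_size(0), 8);  // 1 byte of padding before `z`, 2 after
assert_eq!(foo_size(1), 8);  // the element exactly replaces the trailing padding
assert_eq!(foo_size(2), 12); // 10 bytes of fields, plus 2 bytes of trailing padding
```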
680
///
681
/// [reprs]: https://doc.rust-lang.org/reference/type-layout.html#representations
682
/// [repr-c-structs]: https://doc.rust-lang.org/reference/type-layout.html#reprc-structs
683
///
684
/// ## What is a valid size?
685
///
686
/// There are two places in zerocopy's API that we refer to "a valid size" of a
687
/// type. In normal casts or conversions, where the source is a byte slice, we
688
/// need to know whether the source byte slice is a valid size of the
689
/// destination type. In prefix or suffix casts, we need to know whether *there
690
/// exists* a valid size of the destination type which fits in the source byte
691
/// slice and, if so, what the largest such size is.
692
///
693
/// As outlined above, a slice DST's size is defined by the number of elements
694
/// in its trailing slice field. However, there is not necessarily a 1-to-1
695
/// mapping between trailing slice field length and overall size. As we saw in
696
/// the previous section with the type `Foo`, instances with both 0 and 1
697
/// elements in the trailing `z` field result in a `Foo` whose size is 8 bytes.
698
///
699
/// When we say "x is a valid size of `T`", we mean one of two things:
700
/// - If `T: Sized`, then we mean that `x == size_of::<T>()`
701
/// - If `T` is a slice DST, then we mean that there exists a `len` such that the instance of
702
///   `T` with `len` trailing slice elements has size `x`
703
///
704
/// When we say "largest possible size of `T` that fits in a byte slice", we
705
/// mean one of two things:
706
/// - If `T: Sized`, then we mean `size_of::<T>()` if the byte slice is at least
707
///   `size_of::<T>()` bytes long
708
/// - If `T` is a slice DST, then we mean to consider all values, `len`, such
709
///   that the instance of `T` with `len` trailing slice elements fits in the
710
///   byte slice, and to choose the largest such `len`, if any
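A worked sketch of both questions for the `Foo` type from the previous section (`foo_size` is the same illustrative helper as above, not zerocopy API):

```rust
fn foo_size(len: usize) -> usize {
    (6 + 2 * len + 3) / 4 * 4
}

// "Is 10 a valid size of `Foo`?" No: no `len` produces exactly 10 bytes.
assert!((0..=2).all(|len| foo_size(len) != 10));

// "What is the largest size of `Foo` that fits in a 10-byte slice?"
// `len == 1` is the largest length whose instance (8 bytes) fits, since
// `len == 2` would require 12 bytes.
let largest_len = (0..=2).rev().find(|&len| foo_size(len) <= 10);
assert_eq!(largest_len, Some(1));
```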
711
///
712
///
713
/// # Safety
714
///
715
/// This trait does not convey any safety guarantees to code outside this crate.
716
///
717
/// You must not rely on the `#[doc(hidden)]` internals of `KnownLayout`. Future
718
/// releases of zerocopy may make backwards-breaking changes to these items,
719
/// including changes that only affect soundness, which may cause code which
720
/// uses those items to silently become unsound.
721
///
722
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::KnownLayout")]
723
#[cfg_attr(
724
    not(feature = "derive"),
725
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.KnownLayout.html"),
726
)]
727
#[cfg_attr(
728
    not(no_zerocopy_diagnostic_on_unimplemented_1_78_0),
729
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(KnownLayout)]` to `{Self}`")
730
)]
731
pub unsafe trait KnownLayout {
732
    // The `Self: Sized` bound makes it so that `KnownLayout` can still be
733
    // object safe. It's not currently object safe thanks to `const LAYOUT`, and
734
    // it likely won't be in the future, but there's no reason not to be
735
    // forwards-compatible with object safety.
736
    #[doc(hidden)]
737
    fn only_derive_is_allowed_to_implement_this_trait()
738
    where
739
        Self: Sized;
740
741
    /// The type of metadata stored in a pointer to `Self`.
742
    ///
743
    /// This is `()` for sized types and `usize` for slice DSTs.
744
    type PointerMetadata: PointerMetadata;
745
746
    /// A maybe-uninitialized analog of `Self`
747
    ///
748
    /// # Safety
749
    ///
750
    /// `Self::LAYOUT` and `Self::MaybeUninit::LAYOUT` are identical.
751
    /// `Self::MaybeUninit` admits uninitialized bytes in all positions.
752
    #[doc(hidden)]
753
    type MaybeUninit: ?Sized + KnownLayout<PointerMetadata = Self::PointerMetadata>;
754
755
    /// The layout of `Self`.
756
    ///
757
    /// # Safety
758
    ///
759
    /// Callers may assume that `LAYOUT` accurately reflects the layout of
760
    /// `Self`. In particular:
761
    /// - `LAYOUT.align` is equal to `Self`'s alignment
762
    /// - If `Self: Sized`, then `LAYOUT.size_info == SizeInfo::Sized { size }`
763
    ///   where `size == size_of::<Self>()`
764
    /// - If `Self` is a slice DST, then `LAYOUT.size_info ==
765
    ///   SizeInfo::SliceDst(slice_layout)` where:
766
    ///   - The size, `size`, of an instance of `Self` with `elems` trailing
767
    ///     slice elements is equal to `slice_layout.offset +
768
    ///     slice_layout.elem_size * elems` rounded up to the nearest multiple
769
    ///     of `LAYOUT.align`
770
    ///   - For such an instance, any bytes in the range `[slice_layout.offset +
771
    ///     slice_layout.elem_size * elems, size)` are padding and must not be
772
    ///     assumed to be initialized
773
    #[doc(hidden)]
774
    const LAYOUT: DstLayout;
775
776
    /// SAFETY: The returned pointer has the same address and provenance as
777
    /// `bytes`. If `Self` is a DST, the returned pointer's referent has `elems`
778
    /// elements in its trailing slice.
779
    #[doc(hidden)]
780
    fn raw_from_ptr_len(bytes: NonNull<u8>, meta: Self::PointerMetadata) -> NonNull<Self>;
781
782
    /// Extracts the metadata from a pointer to `Self`.
783
    ///
784
    /// # Safety
785
    ///
786
    /// `pointer_to_metadata` always returns the correct metadata stored in
787
    /// `ptr`.
788
    #[doc(hidden)]
789
    fn pointer_to_metadata(ptr: *mut Self) -> Self::PointerMetadata;
790
791
    /// Computes the length of the byte range addressed by `ptr`.
792
    ///
793
    /// Returns `None` if the resulting length would not fit in a `usize`.
794
    ///
795
    /// # Safety
796
    ///
797
    /// Callers may assume that `size_of_val_raw` always returns the correct
798
    /// size.
799
    ///
800
    /// Callers may assume that, if `ptr` addresses a byte range whose length
801
    /// fits in a `usize`, this will return `Some`.
802
    #[doc(hidden)]
803
    #[must_use]
804
    #[inline(always)]
805
0
    fn size_of_val_raw(ptr: NonNull<Self>) -> Option<usize> {
806
0
        let meta = Self::pointer_to_metadata(ptr.as_ptr());
807
        // SAFETY: `size_for_metadata` promises to only return `None` if the
808
        // resulting size would not fit in a `usize`.
809
0
        Self::size_for_metadata(meta)
810
0
    }
811
812
    #[doc(hidden)]
813
    #[must_use]
814
    #[inline(always)]
815
0
    fn raw_dangling() -> NonNull<Self> {
816
0
        let meta = Self::PointerMetadata::from_elem_count(0);
817
0
        Self::raw_from_ptr_len(NonNull::dangling(), meta)
818
0
    }
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_dangling
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_dangling
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_dangling
Unexecuted instantiation: <_ as zerocopy::KnownLayout>::raw_dangling
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_dangling
819
820
    /// Computes the size of an object of type `Self` with the given pointer
821
    /// metadata.
822
    ///
823
    /// # Safety
824
    ///
825
    /// `size_for_metadata` promises to return `None` if and only if the
826
    /// resulting size would not fit in a `usize`. Note that the returned size
827
    /// could exceed the actual maximum valid size of an allocated object,
828
    /// `isize::MAX`.
829
    ///
830
    /// # Examples
831
    ///
832
    /// ```
833
    /// use zerocopy::KnownLayout;
834
    ///
835
    /// assert_eq!(u8::size_for_metadata(()), Some(1));
836
    /// assert_eq!(u16::size_for_metadata(()), Some(2));
837
    /// assert_eq!(<[u8]>::size_for_metadata(42), Some(42));
838
    /// assert_eq!(<[u16]>::size_for_metadata(42), Some(84));
839
    ///
840
    /// // This size exceeds the maximum valid object size (`isize::MAX`):
841
    /// assert_eq!(<[u8]>::size_for_metadata(usize::MAX), Some(usize::MAX));
842
    ///
843
    /// // This size, if computed, would exceed `usize::MAX`:
844
    /// assert_eq!(<[u16]>::size_for_metadata(usize::MAX), None);
845
    /// ```
846
    #[inline(always)]
847
0
    fn size_for_metadata(meta: Self::PointerMetadata) -> Option<usize> {
848
0
        meta.size_for_metadata(Self::LAYOUT)
849
0
    }
850
}
851
852
/// Efficiently produces the [`TrailingSliceLayout`] of `T`.
853
#[inline(always)]
854
0
pub(crate) fn trailing_slice_layout<T>() -> TrailingSliceLayout
855
0
where
856
0
    T: ?Sized + KnownLayout<PointerMetadata = usize>,
857
{
858
    trait LayoutFacts {
859
        const SIZE_INFO: TrailingSliceLayout;
860
    }
861
862
    impl<T: ?Sized> LayoutFacts for T
863
    where
864
        T: KnownLayout<PointerMetadata = usize>,
865
    {
866
        const SIZE_INFO: TrailingSliceLayout = match T::LAYOUT.size_info {
867
            crate::SizeInfo::Sized { .. } => const_panic!("unreachable"),
868
            crate::SizeInfo::SliceDst(info) => info,
869
        };
870
    }
871
872
0
    T::SIZE_INFO
873
0
}
874
875
/// The metadata associated with a [`KnownLayout`] type.
876
#[doc(hidden)]
877
pub trait PointerMetadata: Copy + Eq + Debug {
878
    /// Constructs a `Self` from an element count.
879
    ///
880
    /// If `Self = ()`, this returns `()`. If `Self = usize`, this returns
881
    /// `elems`. No other types are currently supported.
882
    fn from_elem_count(elems: usize) -> Self;
883
884
    /// Computes the size of the object with the given layout and pointer
885
    /// metadata.
886
    ///
887
    /// # Panics
888
    ///
889
    /// If `Self = ()`, `layout` must describe a sized type. If `Self = usize`,
890
    /// `layout` must describe a slice DST. Otherwise, `size_for_metadata` may
891
    /// panic.
892
    ///
893
    /// # Safety
894
    ///
895
    /// `size_for_metadata` promises to only return `None` if the resulting size
896
    /// would not fit in a `usize`.
897
    fn size_for_metadata(self, layout: DstLayout) -> Option<usize>;
898
}
899
900
impl PointerMetadata for () {
901
    #[inline]
902
    #[allow(clippy::unused_unit)]
903
0
    fn from_elem_count(_elems: usize) -> () {}
904
905
    #[inline]
906
0
    fn size_for_metadata(self, layout: DstLayout) -> Option<usize> {
907
0
        match layout.size_info {
908
0
            SizeInfo::Sized { size } => Some(size),
909
            // NOTE: This branch is unreachable, but we return `None` rather
910
            // than `unreachable!()` to avoid generating panic paths.
911
0
            SizeInfo::SliceDst(_) => None,
912
        }
913
0
    }
914
}
915
916
impl PointerMetadata for usize {
917
    #[inline]
918
0
    fn from_elem_count(elems: usize) -> usize {
919
0
        elems
920
0
    }
Unexecuted instantiation: <usize as zerocopy::PointerMetadata>::from_elem_count
Unexecuted instantiation: <usize as zerocopy::PointerMetadata>::from_elem_count
Unexecuted instantiation: <usize as zerocopy::PointerMetadata>::from_elem_count
Unexecuted instantiation: <usize as zerocopy::PointerMetadata>::from_elem_count
Unexecuted instantiation: <usize as zerocopy::PointerMetadata>::from_elem_count
921
922
    #[inline]
923
0
    fn size_for_metadata(self, layout: DstLayout) -> Option<usize> {
924
0
        match layout.size_info {
925
0
            SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }) => {
926
0
                let slice_len = elem_size.checked_mul(self)?;
927
0
                let without_padding = offset.checked_add(slice_len)?;
928
0
                without_padding.checked_add(util::padding_needed_for(without_padding, layout.align))
929
            }
930
            // NOTE: This branch is unreachable, but we return `None` rather
931
            // than `unreachable!()` to avoid generating panic paths.
932
0
            SizeInfo::Sized { .. } => None,
933
        }
934
0
    }
935
}
936
937
// SAFETY: Delegates safety to `DstLayout::for_slice`.
938
unsafe impl<T> KnownLayout for [T] {
939
    #[allow(clippy::missing_inline_in_public_items, dead_code)]
940
    #[cfg_attr(
941
        all(coverage_nightly, __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS),
942
        coverage(off)
943
    )]
944
0
    fn only_derive_is_allowed_to_implement_this_trait()
945
0
    where
946
0
        Self: Sized,
947
    {
948
0
    }
949
950
    type PointerMetadata = usize;
951
952
    // SAFETY: `CoreMaybeUninit<T>::LAYOUT` and `T::LAYOUT` are identical
953
    // because `CoreMaybeUninit<T>` has the same size and alignment as `T` [1].
954
    // Consequently, `[CoreMaybeUninit<T>]::LAYOUT` and `[T]::LAYOUT` are
955
    // identical, because they both lack a fixed-sized prefix and because they
956
    // inherit the alignments of their inner element type (which are identical)
957
    // [2][3].
958
    //
959
    // `[CoreMaybeUninit<T>]` admits uninitialized bytes at all positions
960
    // because `CoreMaybeUninit<T>` admits uninitialized bytes at all positions
961
    // and because the inner elements of `[CoreMaybeUninit<T>]` are laid out
962
    // back-to-back [2][3].
963
    //
964
    // [1] Per https://doc.rust-lang.org/1.81.0/std/mem/union.MaybeUninit.html#layout-1:
965
    //
966
    //   `MaybeUninit<T>` is guaranteed to have the same size, alignment, and ABI as
967
    //   `T`
968
    //
969
    // [2] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#slice-layout:
970
    //
971
    //   Slices have the same layout as the section of the array they slice.
972
    //
973
    // [3] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#array-layout:
974
    //
975
    //   An array of `[T; N]` has a size of `size_of::<T>() * N` and the same
976
    //   alignment of `T`. Arrays are laid out so that the zero-based `nth`
977
    //   element of the array is offset from the start of the array by `n *
978
    //   size_of::<T>()` bytes.
979
    type MaybeUninit = [CoreMaybeUninit<T>];
980
981
    const LAYOUT: DstLayout = DstLayout::for_slice::<T>();
982
983
    // SAFETY: `.cast` preserves address and provenance. The returned pointer
984
    // refers to an object with `elems` elements by construction.
985
    #[inline(always)]
986
302k
    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
987
        // FIXME(#67): Remove this allow. See NonNullExt for more details.
988
        #[allow(unstable_name_collisions)]
989
302k
        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
990
302k
    }
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_from_ptr_len
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_from_ptr_len
<[u16] as zerocopy::KnownLayout>::raw_from_ptr_len
Line
Count
Source
986
302k
    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
987
        // FIXME(#67): Remove this allow. See NonNullExt for more details.
988
        #[allow(unstable_name_collisions)]
989
302k
        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
990
302k
    }
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_from_ptr_len
Unexecuted instantiation: <[_] as zerocopy::KnownLayout>::raw_from_ptr_len
Unexecuted instantiation: <[half::binary16::f16] as zerocopy::KnownLayout>::raw_from_ptr_len
991
992
    #[inline(always)]
993
302k
    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
994
        #[allow(clippy::as_conversions)]
995
302k
        let slc = ptr as *const [()];
996
997
        // SAFETY:
998
        // - `()` has alignment 1, so `slc` is trivially aligned.
999
        // - `slc` was derived from a non-null pointer.
1000
        // - The size is 0 regardless of the length, so it is sound to
1001
        //   materialize a reference regardless of location.
1002
        // - By invariant, `self.ptr` has valid provenance.
1003
302k
        let slc = unsafe { &*slc };
1004
1005
        // This is correct because the preceding `as` cast preserves the number
1006
        // of slice elements. [1]
1007
        //
1008
        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
1009
        //
1010
        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
1011
        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
1012
        //   elements in this slice. Casts between these raw pointer types
1013
        //   preserve the number of elements. ... The same holds for `str` and
1014
        //   any compound type whose unsized tail is a slice type, such as
1015
        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
1016
302k
        slc.len()
1017
302k
    }
<[half::binary16::f16] as zerocopy::KnownLayout>::pointer_to_metadata
Line
Count
Source
993
302k
    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
994
        #[allow(clippy::as_conversions)]
995
302k
        let slc = ptr as *const [()];
996
997
        // SAFETY:
998
        // - `()` has alignment 1, so `slc` is trivially aligned.
999
        // - `slc` was derived from a non-null pointer.
1000
        // - The size is 0 regardless of the length, so it is sound to
1001
        //   materialize a reference regardless of location.
1002
        // - By invariant, `self.ptr` has valid provenance.
1003
302k
        let slc = unsafe { &*slc };
1004
1005
        // This is correct because the preceding `as` cast preserves the number
1006
        // of slice elements. [1]
1007
        //
1008
        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
1009
        //
1010
        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
1011
        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
1012
        //   elements in this slice. Casts between these raw pointer types
1013
        //   preserve the number of elements. ... The same holds for `str` and
1014
        //   any compound type whose unsized tail is a slice type, such as
1015
        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
1016
302k
        slc.len()
1017
302k
    }
Unexecuted instantiation: <[_] as zerocopy::KnownLayout>::pointer_to_metadata
1018
}
1019
1020
#[rustfmt::skip]
1021
impl_known_layout!(
1022
    (),
1023
    u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize, f32, f64,
1024
    bool, char,
1025
    NonZeroU8, NonZeroI8, NonZeroU16, NonZeroI16, NonZeroU32, NonZeroI32,
1026
    NonZeroU64, NonZeroI64, NonZeroU128, NonZeroI128, NonZeroUsize, NonZeroIsize
1027
);
1028
#[rustfmt::skip]
1029
#[cfg(feature = "float-nightly")]
1030
impl_known_layout!(
1031
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
1032
    f16,
1033
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
1034
    f128
1035
);
1036
#[rustfmt::skip]
1037
impl_known_layout!(
1038
    T         => Option<T>,
1039
    T: ?Sized => PhantomData<T>,
1040
    T         => Wrapping<T>,
1041
    T         => CoreMaybeUninit<T>,
1042
    T: ?Sized => *const T,
1043
    T: ?Sized => *mut T,
1044
    T: ?Sized => &'_ T,
1045
    T: ?Sized => &'_ mut T,
1046
);
1047
impl_known_layout!(const N: usize, T => [T; N]);
1048
1049
// SAFETY: `str` has the same representation as `[u8]`. `ManuallyDrop<T>` [1],
1050
// `UnsafeCell<T>` [2], and `Cell<T>` [3] have the same representation as `T`.
1051
//
1052
// [1] Per https://doc.rust-lang.org/1.85.0/std/mem/struct.ManuallyDrop.html:
1053
//
1054
//   `ManuallyDrop<T>` is guaranteed to have the same layout and bit validity as
1055
//   `T`
1056
//
1057
// [2] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.UnsafeCell.html#memory-layout:
1058
//
1059
//   `UnsafeCell<T>` has the same in-memory representation as its inner type
1060
//   `T`.
1061
//
1062
// [3] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.Cell.html#memory-layout:
1063
//
1064
//   `Cell<T>` has the same in-memory representation as `T`.
1065
#[allow(clippy::multiple_unsafe_ops_per_block)]
1066
const _: () = unsafe {
1067
    unsafe_impl_known_layout!(
1068
        #[repr([u8])]
1069
        str
1070
    );
1071
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] ManuallyDrop<T>);
1072
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] UnsafeCell<T>);
1073
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] Cell<T>);
1074
};
1075
1076
// SAFETY:
1077
// - By consequence of the invariant on `T::MaybeUninit` that `T::LAYOUT` and
1078
//   `T::MaybeUninit::LAYOUT` are equal, `T` and `T::MaybeUninit` have the same:
1079
//   - Fixed prefix size
1080
//   - Alignment
1081
//   - (For DSTs) trailing slice element size
1082
// - By consequence of the above, referents `T::MaybeUninit` and `T` require
1083
//   the same kind of pointer metadata, and thus it is valid to perform an
1084
//   `as` cast from `*mut T` to `*mut T::MaybeUninit`, and this operation
1085
//   preserves referent size (i.e., `size_of_val_raw`).
1086
const _: () = unsafe {
1087
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T::MaybeUninit)] MaybeUninit<T>)
1088
};
1089
1090
/// Analyzes whether a type is [`FromZeros`].
1091
///
1092
/// This derive analyzes, at compile time, whether the annotated type satisfies
1093
/// the [safety conditions] of `FromZeros` and implements `FromZeros` and its
1094
/// supertraits if it is sound to do so. This derive can be applied to structs,
1095
/// enums, and unions; e.g.:
1096
///
1097
/// ```
1098
/// # use zerocopy_derive::{FromZeros, Immutable};
1099
/// #[derive(FromZeros)]
1100
/// struct MyStruct {
1101
/// # /*
1102
///     ...
1103
/// # */
1104
/// }
1105
///
1106
/// #[derive(FromZeros)]
1107
/// #[repr(u8)]
1108
/// enum MyEnum {
1109
/// #   Variant0,
1110
/// # /*
1111
///     ...
1112
/// # */
1113
/// }
1114
///
1115
/// #[derive(FromZeros, Immutable)]
1116
/// union MyUnion {
1117
/// #   variant: u8,
1118
/// # /*
1119
///     ...
1120
/// # */
1121
/// }
1122
/// ```
1123
///
1124
/// [safety conditions]: trait@FromZeros#safety
1125
///
1126
/// # Analysis
1127
///
1128
/// *This section describes, roughly, the analysis performed by this derive to
1129
/// determine whether it is sound to implement `FromZeros` for a given type.
1130
/// Unless you are modifying the implementation of this derive, or attempting to
1131
/// manually implement `FromZeros` for a type yourself, you don't need to read
1132
/// this section.*
1133
///
1134
/// If a type has the following properties, then this derive can implement
1135
/// `FromZeros` for that type:
1136
///
1137
/// - If the type is a struct, all of its fields must be `FromZeros`.
1138
/// - If the type is an enum:
1139
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
1140
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
1141
///   - It must have a variant with a discriminant/tag of `0`. See [the
1142
///     reference] for a description of how discriminant values are
1143
///     specified.
1144
///   - The fields of that variant must be `FromZeros`.
1145
///
1146
/// This analysis is subject to change. Unsafe code may *only* rely on the
1147
/// documented [safety conditions] of `FromZeros`, and must *not* rely on the
1148
/// implementation details of this derive.
1149
///
1150
/// [the reference]: https://doc.rust-lang.org/reference/items/enumerations.html#custom-discriminant-values-for-fieldless-enumerations
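A sketch of the enum rule above (assumes the `derive` feature; `Status` is an illustrative type):

```rust
use zerocopy::FromZeros;

#[derive(FromZeros, Debug, PartialEq)]
#[repr(u8)]
enum Status {
    Idle = 0,
    Busy = 1,
}

// `new_zeroed` (provided by `FromZeros`) yields the all-zeros instance,
// which for this enum is the variant with discriminant `0`.
assert_eq!(Status::new_zeroed(), Status::Idle);
```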
1151
///
1152
/// ## Why isn't an explicit representation required for structs?
1153
///
1154
/// Neither this derive, nor the [safety conditions] of `FromZeros`, requires
1155
/// that structs are marked with `#[repr(C)]`.
1156
///
1157
/// Per the [Rust reference][reference],
1158
///
1159
/// > The representation of a type can change the padding between fields, but
1160
/// > does not change the layout of the fields themselves.
1161
///
1162
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
1163
///
1164
/// Since the layout of structs only consists of padding bytes and field bytes,
1165
/// a struct is soundly `FromZeros` if:
1166
/// 1. its padding is soundly `FromZeros`, and
1167
/// 2. its fields are soundly `FromZeros`.
1168
///
1169
/// The answer to the first question is always yes: padding bytes do not have
1170
/// any validity constraints. A [discussion] of this question in the Unsafe Code
1171
/// Guidelines Working Group concluded that it would be virtually unimaginable
1172
/// for future versions of rustc to add validity constraints to padding bytes.
1173
///
1174
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
1175
///
1176
/// Whether a struct is soundly `FromZeros` therefore solely depends on whether
1177
/// its fields are `FromZeros`.
1178
// FIXME(#146): Document why we don't require an enum to have an explicit `repr`
1179
// attribute.
1180
#[cfg(any(feature = "derive", test))]
1181
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1182
pub use zerocopy_derive::FromZeros;
1183
/// Analyzes whether a type is [`Immutable`].
1184
///
1185
/// This derive analyzes, at compile time, whether the annotated type satisfies
1186
/// the [safety conditions] of `Immutable` and implements `Immutable` if it is
1187
/// sound to do so. This derive can be applied to structs, enums, and unions;
1188
/// e.g.:
1189
///
1190
/// ```
1191
/// # use zerocopy_derive::Immutable;
1192
/// #[derive(Immutable)]
1193
/// struct MyStruct {
1194
/// # /*
1195
///     ...
1196
/// # */
1197
/// }
1198
///
1199
/// #[derive(Immutable)]
1200
/// enum MyEnum {
1201
/// #   Variant0,
1202
/// # /*
1203
///     ...
1204
/// # */
1205
/// }
1206
///
1207
/// #[derive(Immutable)]
1208
/// union MyUnion {
1209
/// #   variant: u8,
1210
/// # /*
1211
///     ...
1212
/// # */
1213
/// }
1214
/// ```
1215
///
1216
/// # Analysis
1217
///
1218
/// *This section describes, roughly, the analysis performed by this derive to
1219
/// determine whether it is sound to implement `Immutable` for a given type.
1220
/// Unless you are modifying the implementation of this derive, you don't need
1221
/// to read this section.*
1222
///
1223
/// If a type has the following properties, then this derive can implement
1224
/// `Immutable` for that type:
1225
///
1226
/// - All fields must be `Immutable`.
1227
///
1228
/// This analysis is subject to change. Unsafe code may *only* rely on the
1229
/// documented [safety conditions] of `Immutable`, and must *not* rely on the
1230
/// implementation details of this derive.
1231
///
1232
/// [safety conditions]: trait@Immutable#safety
1233
#[cfg(any(feature = "derive", test))]
1234
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1235
pub use zerocopy_derive::Immutable;
1236
1237
/// Types which are free from interior mutability.
1238
///
1239
/// `T: Immutable` indicates that `T` does not permit interior mutation, except
1240
/// by ownership or an exclusive (`&mut`) borrow.
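A small sketch of what this permits and rules out (`assert_immutable` is an illustrative helper, not zerocopy API):

```rust
use zerocopy::Immutable;

// Compiles only for types without interior mutability.
fn assert_immutable<T: Immutable + ?Sized>() {}

assert_immutable::<u8>();
assert_immutable::<[u8; 16]>();
assert_immutable::<str>();
// By contrast, `assert_immutable::<core::cell::Cell<u8>>()` would not
// compile, because a shared `&Cell<u8>` permits interior mutation.
```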
1241
///
1242
/// # Implementation
1243
///
1244
/// **Do not implement this trait yourself!** Instead, use
1245
/// [`#[derive(Immutable)]`][derive] (requires the `derive` Cargo feature);
1246
/// e.g.:
1247
///
1248
/// ```
1249
/// # use zerocopy_derive::Immutable;
1250
/// #[derive(Immutable)]
1251
/// struct MyStruct {
1252
/// # /*
1253
///     ...
1254
/// # */
1255
/// }
1256
///
1257
/// #[derive(Immutable)]
1258
/// enum MyEnum {
1259
/// # /*
1260
///     ...
1261
/// # */
1262
/// }
1263
///
1264
/// #[derive(Immutable)]
1265
/// union MyUnion {
1266
/// #   variant: u8,
1267
/// # /*
1268
///     ...
1269
/// # */
1270
/// }
1271
/// ```
1272
///
1273
/// This derive performs a sophisticated, compile-time safety analysis to
1274
/// determine whether a type is `Immutable`.
1275
///
1276
/// # Safety
1277
///
1278
/// Unsafe code outside of this crate must not make any assumptions about `T`
1279
/// based on `T: Immutable`. We reserve the right to relax the requirements for
1280
/// `Immutable` in the future, and if unsafe code outside of this crate makes
1281
/// assumptions based on `T: Immutable`, future relaxations may cause that code
1282
/// to become unsound.
1283
///
1284
// # Safety (Internal)
1285
//
1286
// If `T: Immutable`, unsafe code *inside of this crate* may assume that, given
1287
// `t: &T`, `t` does not contain any [`UnsafeCell`]s at any byte location
1288
// within the byte range addressed by `t`. This includes ranges of length 0
1289
// (e.g., `UnsafeCell<()>` and `[UnsafeCell<u8>; 0]`). If a type implements
1290
// `Immutable` which violates this assumption, it may cause this crate to
1291
// exhibit [undefined behavior].
1292
//
1293
// [`UnsafeCell`]: core::cell::UnsafeCell
1294
// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
1295
#[cfg_attr(
1296
    feature = "derive",
1297
    doc = "[derive]: zerocopy_derive::Immutable",
1298
    doc = "[derive-analysis]: zerocopy_derive::Immutable#analysis"
1299
)]
1300
#[cfg_attr(
1301
    not(feature = "derive"),
1302
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html"),
1303
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html#analysis"),
1304
)]
1305
#[cfg_attr(
1306
    not(no_zerocopy_diagnostic_on_unimplemented_1_78_0),
1307
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Immutable)]` to `{Self}`")
1308
)]
1309
pub unsafe trait Immutable {
1310
    // The `Self: Sized` bound makes it so that `Immutable` is still object
1311
    // safe.
1312
    #[doc(hidden)]
1313
    fn only_derive_is_allowed_to_implement_this_trait()
1314
    where
1315
        Self: Sized;
1316
}
1317
1318
/// Implements [`TryFromBytes`].
1319
///
1320
/// This derive synthesizes the runtime checks required to check whether a
1321
/// sequence of initialized bytes corresponds to a valid instance of a type.
1322
/// This derive can be applied to structs, enums, and unions; e.g.:
1323
///
1324
/// ```
1325
/// # use zerocopy_derive::{TryFromBytes, Immutable};
1326
/// #[derive(TryFromBytes)]
1327
/// struct MyStruct {
1328
/// # /*
1329
///     ...
1330
/// # */
1331
/// }
1332
///
1333
/// #[derive(TryFromBytes)]
1334
/// #[repr(u8)]
1335
/// enum MyEnum {
1336
/// #   V00,
1337
/// # /*
1338
///     ...
1339
/// # */
1340
/// }
1341
///
1342
/// #[derive(TryFromBytes, Immutable)]
1343
/// union MyUnion {
1344
/// #   variant: u8,
1345
/// # /*
1346
///     ...
1347
/// # */
1348
/// }
1349
/// ```
1350
///
1351
/// # Portability
1352
///
1353
/// To ensure consistent endianness for enums with multi-byte representations,
1354
/// explicitly specify and convert each discriminant using `.to_le()` or
1355
/// `.to_be()`; e.g.:
1356
///
1357
/// ```
1358
/// # use zerocopy_derive::TryFromBytes;
1359
/// // `DataStoreVersion` is encoded in little-endian.
1360
/// #[derive(TryFromBytes)]
1361
/// #[repr(u32)]
1362
/// pub enum DataStoreVersion {
1363
///     /// Version 1 of the data store.
1364
///     V1 = 9u32.to_le(),
1365
///
1366
///     /// Version 2 of the data store.
1367
///     V2 = 10u32.to_le(),
1368
/// }
1369
/// ```
1370
///
1371
/// [safety conditions]: trait@TryFromBytes#safety
1372
#[cfg(any(feature = "derive", test))]
1373
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
1374
pub use zerocopy_derive::TryFromBytes;
1375
1376
/// Types for which some bit patterns are valid.
1377
///
1378
/// A memory region of the appropriate length which contains initialized bytes
1379
/// can be viewed as a `TryFromBytes` type so long as the runtime value of those
1380
/// bytes corresponds to a [*valid instance*] of that type. For example,
1381
/// [`bool`] is `TryFromBytes`, so zerocopy can transmute a [`u8`] into a
1382
/// [`bool`] so long as it first checks that the value of the [`u8`] is `0` or
1383
/// `1`.
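///
/// A minimal sketch of what that runtime check looks like in practice:
///
/// ```
/// use zerocopy::TryFromBytes;
///
/// // `0x01` is a valid `bool`...
/// assert_eq!(bool::try_read_from_bytes(&[0x01]).ok(), Some(true));
/// // ...but `0x02` is not, so the conversion is rejected at runtime.
/// assert!(bool::try_read_from_bytes(&[0x02]).is_err());
/// ```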
1384
///
1385
/// # Implementation
1386
///
1387
/// **Do not implement this trait yourself!** Instead, use
1388
/// [`#[derive(TryFromBytes)]`][derive]; e.g.:
1389
///
1390
/// ```
1391
/// # use zerocopy_derive::{TryFromBytes, Immutable};
1392
/// #[derive(TryFromBytes)]
1393
/// struct MyStruct {
1394
/// # /*
1395
///     ...
1396
/// # */
1397
/// }
1398
///
1399
/// #[derive(TryFromBytes)]
1400
/// #[repr(u8)]
1401
/// enum MyEnum {
1402
/// #   V00,
1403
/// # /*
1404
///     ...
1405
/// # */
1406
/// }
1407
///
1408
/// #[derive(TryFromBytes, Immutable)]
1409
/// union MyUnion {
1410
/// #   variant: u8,
1411
/// # /*
1412
///     ...
1413
/// # */
1414
/// }
1415
/// ```
1416
///
1417
/// This derive ensures that the runtime check of whether bytes correspond to a
1418
/// valid instance is sound. You **must** implement this trait via the derive.
1419
///
1420
/// # What is a "valid instance"?
1421
///
1422
/// In Rust, each type has *bit validity*, which refers to the set of bit
1423
/// patterns which may appear in an instance of that type. It is impossible for
1424
/// safe Rust code to produce values which violate bit validity (i.e., values
1425
/// outside of the "valid" set of bit patterns). If `unsafe` code produces an
1426
/// invalid value, this is considered [undefined behavior].
1427
///
1428
/// Rust's bit validity rules are currently being decided, which means that some
1429
/// types have three classes of bit patterns: those which are definitely valid,
1430
/// and whose validity is documented in the language; those which may or may not
1431
/// be considered valid at some point in the future; and those which are
1432
/// definitely invalid.
1433
///
1434
/// Zerocopy takes a conservative approach, and only considers a bit pattern to
1435
/// be valid if its validity is a documented guarantee provided by the
1436
/// language.
1437
///
1438
/// For most use cases, Rust's current guarantees align with programmers'
1439
/// intuitions about what ought to be valid. As a result, zerocopy's
1440
/// conservatism should not affect most users.
1441
///
1442
/// If you are negatively affected by lack of support for a particular type,
1443
/// we encourage you to let us know by [filing an issue][github-repo].
1444
///
1445
/// # `TryFromBytes` is not symmetrical with [`IntoBytes`]
1446
///
1447
/// There are some types which implement both `TryFromBytes` and [`IntoBytes`],
1448
/// but for which `TryFromBytes` is not guaranteed to accept all byte sequences
1449
/// produced by `IntoBytes`. In other words, for some `T: TryFromBytes +
1450
/// IntoBytes`, there exist values of `t: T` such that
1451
/// `TryFromBytes::try_ref_from_bytes(t.as_bytes()).is_err()`. Code should not
1452
/// generally assume that values produced by `IntoBytes` will necessarily be
1453
/// accepted as valid by `TryFromBytes`.
1454
///
1455
/// # Safety
1456
///
1457
/// On its own, `T: TryFromBytes` does not make any guarantees about the layout
1458
/// or representation of `T`. It merely provides the ability to perform a
1459
/// validity check at runtime via methods like [`try_ref_from_bytes`].
1460
///
1461
/// You must not rely on the `#[doc(hidden)]` internals of `TryFromBytes`.
1462
/// Future releases of zerocopy may make backwards-breaking changes to these
1463
/// items, including changes that only affect soundness, which may cause code
1464
/// which uses those items to silently become unsound.
1465
///
1466
/// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
1467
/// [github-repo]: https://github.com/google/zerocopy
1468
/// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
1469
/// [*valid instance*]: #what-is-a-valid-instance
1470
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::TryFromBytes")]
1471
#[cfg_attr(
1472
    not(feature = "derive"),
1473
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.TryFromBytes.html"),
1474
)]
1475
#[cfg_attr(
1476
    not(no_zerocopy_diagnostic_on_unimplemented_1_78_0),
1477
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(TryFromBytes)]` to `{Self}`")
1478
)]
1479
pub unsafe trait TryFromBytes {
1480
    // The `Self: Sized` bound makes it so that `TryFromBytes` is still object
1481
    // safe.
1482
    #[doc(hidden)]
1483
    fn only_derive_is_allowed_to_implement_this_trait()
1484
    where
1485
        Self: Sized;
1486
1487
    /// Does a given memory range contain a valid instance of `Self`?
1488
    ///
1489
    /// # Safety
1490
    ///
1491
    /// Unsafe code may assume that, if `is_bit_valid(candidate)` returns true,
1492
    /// `*candidate` contains a valid `Self`.
1493
    ///
1494
    /// # Panics
1495
    ///
1496
    /// `is_bit_valid` may panic. Callers are responsible for ensuring that any
1497
    /// `unsafe` code remains sound even in the face of `is_bit_valid`
1498
    /// panicking. (We support user-defined validation routines; so long as
1499
    /// these routines are not required to be `unsafe`, there is no way to
1500
    /// ensure that these do not generate panics.)
1501
    ///
1502
    /// Besides user-defined validation routines panicking, `is_bit_valid` will
1503
    /// either panic or fail to compile if called on a pointer with [`Shared`]
1504
    /// aliasing when `Self: !Immutable`.
1505
    ///
1506
    /// [`UnsafeCell`]: core::cell::UnsafeCell
1507
    /// [`Shared`]: invariant::Shared
1508
    #[doc(hidden)]
1509
    fn is_bit_valid<A: invariant::Reference>(candidate: Maybe<'_, Self, A>) -> bool;
1510
1511
    /// Attempts to interpret the given `source` as a `&Self`.
1512
    ///
1513
    /// If the bytes of `source` are a valid instance of `Self`, this method
1514
    /// returns a reference to those bytes interpreted as a `Self`. If the
1515
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
1516
    /// `source` is not appropriately aligned, or if `source` is not a valid
1517
    /// instance of `Self`, this returns `Err`. If [`Self:
1518
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
1519
    /// error][ConvertError::from].
1520
    ///
1521
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1522
    ///
1523
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1524
    /// [self-unaligned]: Unaligned
1525
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1526
    ///
1527
    /// # Compile-Time Assertions
1528
    ///
1529
    /// This method cannot yet be used on unsized types whose dynamically-sized
1530
    /// component is zero-sized. Attempting to use this method on such types
1531
    /// results in a compile-time assertion error; e.g.:
1532
    ///
1533
    /// ```compile_fail,E0080
1534
    /// use zerocopy::*;
1535
    /// # use zerocopy_derive::*;
1536
    ///
1537
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1538
    /// #[repr(C)]
1539
    /// struct ZSTy {
1540
    ///     leading_sized: u16,
1541
    ///     trailing_dst: [()],
1542
    /// }
1543
    ///
1544
    /// let _ = ZSTy::try_ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
1545
    /// ```
1546
    ///
1547
    /// # Examples
1548
    ///
1549
    /// ```
1550
    /// use zerocopy::TryFromBytes;
1551
    /// # use zerocopy_derive::*;
1552
    ///
1553
    /// // The only valid value of this type is the byte `0xC0`
1554
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1555
    /// #[repr(u8)]
1556
    /// enum C0 { xC0 = 0xC0 }
1557
    ///
1558
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
1559
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1560
    /// #[repr(C)]
1561
    /// struct C0C0(C0, C0);
1562
    ///
1563
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1564
    /// #[repr(C)]
1565
    /// struct Packet {
1566
    ///     magic_number: C0C0,
1567
    ///     mug_size: u8,
1568
    ///     temperature: u8,
1569
    ///     marshmallows: [[u8; 2]],
1570
    /// }
1571
    ///
1572
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1573
    ///
1574
    /// let packet = Packet::try_ref_from_bytes(bytes).unwrap();
1575
    ///
1576
    /// assert_eq!(packet.mug_size, 240);
1577
    /// assert_eq!(packet.temperature, 77);
1578
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1579
    ///
1580
    /// // These bytes are not a valid instance of `Packet`.
1581
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1582
    /// assert!(Packet::try_ref_from_bytes(bytes).is_err());
1583
    /// ```
1584
    #[must_use = "has no side effects"]
1585
    #[inline]
1586
0
    fn try_ref_from_bytes(source: &[u8]) -> Result<&Self, TryCastError<&[u8], Self>>
1587
0
    where
1588
0
        Self: KnownLayout + Immutable,
1589
    {
1590
0
        static_assert_dst_is_not_zst!(Self);
1591
0
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(None) {
1592
0
            Ok(source) => {
1593
                // This call may panic. If that happens, it doesn't cause any soundness
1594
                // issues, as we have not generated any invalid state which we need to
1595
                // fix before returning.
1596
                //
1597
                // Note that one panic or post-monomorphization error condition is
1598
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
1599
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
1600
                // condition will not happen.
1601
0
                match source.try_into_valid() {
1602
0
                    Ok(valid) => Ok(valid.as_ref()),
1603
0
                    Err(e) => {
1604
0
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
1605
                    }
1606
                }
1607
            }
1608
0
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
1609
        }
1610
0
    }
1611
1612
    /// Attempts to interpret the prefix of the given `source` as a `&Self`.
1613
    ///
1614
    /// This method computes the [largest possible size of `Self`][valid-size]
1615
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
1616
    /// instance of `Self`, this method returns a reference to those bytes
1617
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
1618
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
1619
    /// if those bytes are not a valid instance of `Self`, this returns `Err`.
1620
    /// If [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
1621
    /// alignment error][ConvertError::from].
1622
    ///
1623
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1624
    ///
1625
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1626
    /// [self-unaligned]: Unaligned
1627
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1628
    ///
1629
    /// # Compile-Time Assertions
1630
    ///
1631
    /// This method cannot yet be used on unsized types whose dynamically-sized
1632
    /// component is zero-sized. Attempting to use this method on such types
1633
    /// results in a compile-time assertion error; e.g.:
1634
    ///
1635
    /// ```compile_fail,E0080
1636
    /// use zerocopy::*;
1637
    /// # use zerocopy_derive::*;
1638
    ///
1639
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1640
    /// #[repr(C)]
1641
    /// struct ZSTy {
1642
    ///     leading_sized: u16,
1643
    ///     trailing_dst: [()],
1644
    /// }
1645
    ///
1646
    /// let _ = ZSTy::try_ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
1647
    /// ```
1648
    ///
1649
    /// # Examples
1650
    ///
1651
    /// ```
1652
    /// use zerocopy::TryFromBytes;
1653
    /// # use zerocopy_derive::*;
1654
    ///
1655
    /// // The only valid value of this type is the byte `0xC0`
1656
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1657
    /// #[repr(u8)]
1658
    /// enum C0 { xC0 = 0xC0 }
1659
    ///
1660
    /// // The only valid value of this type is the bytes `0xC0C0`.
1661
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1662
    /// #[repr(C)]
1663
    /// struct C0C0(C0, C0);
1664
    ///
1665
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1666
    /// #[repr(C)]
1667
    /// struct Packet {
1668
    ///     magic_number: C0C0,
1669
    ///     mug_size: u8,
1670
    ///     temperature: u8,
1671
    ///     marshmallows: [[u8; 2]],
1672
    /// }
1673
    ///
1674
    /// // These are more bytes than are needed to encode a `Packet`.
1675
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1676
    ///
1677
    /// let (packet, suffix) = Packet::try_ref_from_prefix(bytes).unwrap();
1678
    ///
1679
    /// assert_eq!(packet.mug_size, 240);
1680
    /// assert_eq!(packet.temperature, 77);
1681
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1682
    /// assert_eq!(suffix, &[6u8][..]);
1683
    ///
1684
    /// // These bytes are not a valid instance of `Packet`.
1685
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1686
    /// assert!(Packet::try_ref_from_prefix(bytes).is_err());
1687
    /// ```
1688
    #[must_use = "has no side effects"]
1689
    #[inline]
1690
0
    fn try_ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
1691
0
    where
1692
0
        Self: KnownLayout + Immutable,
1693
    {
1694
0
        static_assert_dst_is_not_zst!(Self);
1695
0
        try_ref_from_prefix_suffix(source, CastType::Prefix, None)
1696
0
    }
1697
1698
    /// Attempts to interpret the suffix of the given `source` as a `&Self`.
1699
    ///
1700
    /// This method computes the [largest possible size of `Self`][valid-size]
1701
    /// that can fit in the trailing bytes of `source`. If that suffix is a
1702
    /// valid instance of `Self`, this method returns a reference to those bytes
1703
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
1704
    /// are insufficient bytes, or if the suffix of `source` would not be
1705
    /// appropriately aligned, or if the suffix is not a valid instance of
1706
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
1707
    /// can [infallibly discard the alignment error][ConvertError::from].
1708
    ///
1709
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1710
    ///
1711
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1712
    /// [self-unaligned]: Unaligned
1713
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1714
    ///
1715
    /// # Compile-Time Assertions
1716
    ///
1717
    /// This method cannot yet be used on unsized types whose dynamically-sized
1718
    /// component is zero-sized. Attempting to use this method on such types
1719
    /// results in a compile-time assertion error; e.g.:
1720
    ///
1721
    /// ```compile_fail,E0080
1722
    /// use zerocopy::*;
1723
    /// # use zerocopy_derive::*;
1724
    ///
1725
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
1726
    /// #[repr(C)]
1727
    /// struct ZSTy {
1728
    ///     leading_sized: u16,
1729
    ///     trailing_dst: [()],
1730
    /// }
1731
    ///
1732
    /// let _ = ZSTy::try_ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
1733
    /// ```
1734
    ///
1735
    /// # Examples
1736
    ///
1737
    /// ```
1738
    /// use zerocopy::TryFromBytes;
1739
    /// # use zerocopy_derive::*;
1740
    ///
1741
    /// // The only valid value of this type is the byte `0xC0`
1742
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1743
    /// #[repr(u8)]
1744
    /// enum C0 { xC0 = 0xC0 }
1745
    ///
1746
    /// // The only valid value of this type is the bytes `0xC0C0`.
1747
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1748
    /// #[repr(C)]
1749
    /// struct C0C0(C0, C0);
1750
    ///
1751
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
1752
    /// #[repr(C)]
1753
    /// struct Packet {
1754
    ///     magic_number: C0C0,
1755
    ///     mug_size: u8,
1756
    ///     temperature: u8,
1757
    ///     marshmallows: [[u8; 2]],
1758
    /// }
1759
    ///
1760
    /// // These are more bytes than are needed to encode a `Packet`.
1761
    /// let bytes = &[0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
1762
    ///
1763
    /// let (prefix, packet) = Packet::try_ref_from_suffix(bytes).unwrap();
1764
    ///
1765
    /// assert_eq!(packet.mug_size, 240);
1766
    /// assert_eq!(packet.temperature, 77);
1767
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
1768
    /// assert_eq!(prefix, &[0u8][..]);
1769
    ///
1770
    /// // These bytes are not a valid instance of `Packet`.
1771
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
1772
    /// assert!(Packet::try_ref_from_suffix(bytes).is_err());
1773
    /// ```
1774
    #[must_use = "has no side effects"]
1775
    #[inline]
1776
0
    fn try_ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
1777
0
    where
1778
0
        Self: KnownLayout + Immutable,
1779
    {
1780
0
        static_assert_dst_is_not_zst!(Self);
1781
0
        try_ref_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
1782
0
    }
1783
1784
    /// Attempts to interpret the given `source` as a `&mut Self` without
1785
    /// copying.
1786
    ///
1787
    /// If the bytes of `source` are a valid instance of `Self`, this method
1788
    /// returns a reference to those bytes interpreted as a `Self`. If the
1789
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
1790
    /// `source` is not appropriately aligned, or if `source` is not a valid
1791
    /// instance of `Self`, this returns `Err`. If [`Self:
1792
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
1793
    /// error][ConvertError::from].
1794
    ///
1795
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1796
    ///
1797
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1798
    /// [self-unaligned]: Unaligned
1799
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1800
    ///
1801
    /// # Compile-Time Assertions
1802
    ///
1803
    /// This method cannot yet be used on unsized types whose dynamically-sized
1804
    /// component is zero-sized. Attempting to use this method on such types
1805
    /// results in a compile-time assertion error; e.g.:
1806
    ///
1807
    /// ```compile_fail,E0080
1808
    /// use zerocopy::*;
1809
    /// # use zerocopy_derive::*;
1810
    ///
1811
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1812
    /// #[repr(C, packed)]
1813
    /// struct ZSTy {
1814
    ///     leading_sized: [u8; 2],
1815
    ///     trailing_dst: [()],
1816
    /// }
1817
    ///
1818
    /// let mut source = [85, 85];
1819
    /// let _ = ZSTy::try_mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
1820
    /// ```
1821
    ///
1822
    /// # Examples
1823
    ///
1824
    /// ```
1825
    /// use zerocopy::TryFromBytes;
1826
    /// # use zerocopy_derive::*;
1827
    ///
1828
    /// // The only valid value of this type is the byte `0xC0`
1829
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1830
    /// #[repr(u8)]
1831
    /// enum C0 { xC0 = 0xC0 }
1832
    ///
1833
    /// // The only valid value of this type is the bytes `0xC0C0`.
1834
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1835
    /// #[repr(C)]
1836
    /// struct C0C0(C0, C0);
1837
    ///
1838
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1839
    /// #[repr(C, packed)]
1840
    /// struct Packet {
1841
    ///     magic_number: C0C0,
1842
    ///     mug_size: u8,
1843
    ///     temperature: u8,
1844
    ///     marshmallows: [[u8; 2]],
1845
    /// }
1846
    ///
1847
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
1848
    ///
1849
    /// let packet = Packet::try_mut_from_bytes(bytes).unwrap();
1850
    ///
1851
    /// assert_eq!(packet.mug_size, 240);
1852
    /// assert_eq!(packet.temperature, 77);
1853
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1854
    ///
1855
    /// packet.temperature = 111;
1856
    ///
1857
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5]);
1858
    ///
1859
    /// // These bytes are not a valid instance of `Packet`.
1860
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1861
    /// assert!(Packet::try_mut_from_bytes(bytes).is_err());
1862
    /// ```
1863
    #[must_use = "has no side effects"]
1864
    #[inline]
1865
0
    fn try_mut_from_bytes(bytes: &mut [u8]) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
1866
0
    where
1867
0
        Self: KnownLayout + IntoBytes,
1868
    {
1869
0
        static_assert_dst_is_not_zst!(Self);
1870
0
        match Ptr::from_mut(bytes).try_cast_into_no_leftover::<Self, BecauseExclusive>(None) {
1871
0
            Ok(source) => {
1872
                // This call may panic. If that happens, it doesn't cause any soundness
1873
                // issues, as we have not generated any invalid state which we need to
1874
                // fix before returning.
1875
                //
1876
                // Note that one panic or post-monomorphization error condition is
1877
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
1878
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
1879
                // condition will not happen.
1880
0
                match source.try_into_valid() {
1881
0
                    Ok(source) => Ok(source.as_mut()),
1882
0
                    Err(e) => {
1883
0
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
1884
                    }
1885
                }
1886
            }
1887
0
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
1888
        }
1889
0
    }
1890
1891
    /// Attempts to interpret the prefix of the given `source` as a `&mut
1892
    /// Self`.
1893
    ///
1894
    /// This method computes the [largest possible size of `Self`][valid-size]
1895
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
1896
    /// instance of `Self`, this method returns a reference to those bytes
1897
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
1898
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
1899
    /// if the bytes are not a valid instance of `Self`, this returns `Err`. If
1900
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
1901
    /// alignment error][ConvertError::from].
1902
    ///
1903
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1904
    ///
1905
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
1906
    /// [self-unaligned]: Unaligned
1907
    /// [slice-dst]: KnownLayout#dynamically-sized-types
1908
    ///
1909
    /// # Compile-Time Assertions
1910
    ///
1911
    /// This method cannot yet be used on unsized types whose dynamically-sized
1912
    /// component is zero-sized. Attempting to use this method on such types
1913
    /// results in a compile-time assertion error; e.g.:
1914
    ///
1915
    /// ```compile_fail,E0080
1916
    /// use zerocopy::*;
1917
    /// # use zerocopy_derive::*;
1918
    ///
1919
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1920
    /// #[repr(C, packed)]
1921
    /// struct ZSTy {
1922
    ///     leading_sized: [u8; 2],
1923
    ///     trailing_dst: [()],
1924
    /// }
1925
    ///
1926
    /// let mut source = [85, 85];
1927
    /// let _ = ZSTy::try_mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
1928
    /// ```
1929
    ///
1930
    /// # Examples
1931
    ///
1932
    /// ```
1933
    /// use zerocopy::TryFromBytes;
1934
    /// # use zerocopy_derive::*;
1935
    ///
1936
    /// // The only valid value of this type is the byte `0xC0`
1937
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1938
    /// #[repr(u8)]
1939
    /// enum C0 { xC0 = 0xC0 }
1940
    ///
1941
    /// // The only valid value of this type is the bytes `0xC0C0`.
1942
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1943
    /// #[repr(C)]
1944
    /// struct C0C0(C0, C0);
1945
    ///
1946
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
1947
    /// #[repr(C, packed)]
1948
    /// struct Packet {
1949
    ///     magic_number: C0C0,
1950
    ///     mug_size: u8,
1951
    ///     temperature: u8,
1952
    ///     marshmallows: [[u8; 2]],
1953
    /// }
1954
    ///
1955
    /// // These are more bytes than are needed to encode a `Packet`.
1956
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1957
    ///
1958
    /// let (packet, suffix) = Packet::try_mut_from_prefix(bytes).unwrap();
1959
    ///
1960
    /// assert_eq!(packet.mug_size, 240);
1961
    /// assert_eq!(packet.temperature, 77);
1962
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
1963
    /// assert_eq!(suffix, &[6u8][..]);
1964
    ///
1965
    /// packet.temperature = 111;
1966
    /// suffix[0] = 222;
1967
    ///
1968
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5, 222]);
1969
    ///
1970
    /// // These bytes are not a valid instance of `Packet`.
1971
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
1972
    /// assert!(Packet::try_mut_from_prefix(bytes).is_err());
1973
    /// ```
1974
    #[must_use = "has no side effects"]
1975
    #[inline]
1976
0
    fn try_mut_from_prefix(
1977
0
        source: &mut [u8],
1978
0
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
1979
0
    where
1980
0
        Self: KnownLayout + IntoBytes,
1981
    {
1982
0
        static_assert_dst_is_not_zst!(Self);
1983
0
        try_mut_from_prefix_suffix(source, CastType::Prefix, None)
1984
0
    }
1985
1986
    /// Attempts to interpret the suffix of the given `source` as a `&mut
1987
    /// Self`.
1988
    ///
1989
    /// This method computes the [largest possible size of `Self`][valid-size]
1990
    /// that can fit in the trailing bytes of `source`. If that suffix is a
1991
    /// valid instance of `Self`, this method returns a reference to those bytes
1992
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
1993
    /// are insufficient bytes, or if the suffix of `source` would not be
1994
    /// appropriately aligned, or if the suffix is not a valid instance of
1995
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
1996
    /// can [infallibly discard the alignment error][ConvertError::from].
1997
    ///
1998
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
1999
    ///
2000
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
2001
    /// [self-unaligned]: Unaligned
2002
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2003
    ///
2004
    /// # Compile-Time Assertions
2005
    ///
2006
    /// This method cannot yet be used on unsized types whose dynamically-sized
2007
    /// component is zero-sized. Attempting to use this method on such types
2008
    /// results in a compile-time assertion error; e.g.:
2009
    ///
2010
    /// ```compile_fail,E0080
2011
    /// use zerocopy::*;
2012
    /// # use zerocopy_derive::*;
2013
    ///
2014
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2015
    /// #[repr(C, packed)]
2016
    /// struct ZSTy {
2017
    ///     leading_sized: u16,
2018
    ///     trailing_dst: [()],
2019
    /// }
2020
    ///
2021
    /// let mut source = [85, 85];
2022
    /// let _ = ZSTy::try_mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
2023
    /// ```
2024
    ///
2025
    /// # Examples
2026
    ///
2027
    /// ```
2028
    /// use zerocopy::TryFromBytes;
2029
    /// # use zerocopy_derive::*;
2030
    ///
2031
    /// // The only valid value of this type is the byte `0xC0`
2032
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2033
    /// #[repr(u8)]
2034
    /// enum C0 { xC0 = 0xC0 }
2035
    ///
2036
    /// // The only valid value of this type is the bytes `0xC0C0`.
2037
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2038
    /// #[repr(C)]
2039
    /// struct C0C0(C0, C0);
2040
    ///
2041
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2042
    /// #[repr(C, packed)]
2043
    /// struct Packet {
2044
    ///     magic_number: C0C0,
2045
    ///     mug_size: u8,
2046
    ///     temperature: u8,
2047
    ///     marshmallows: [[u8; 2]],
2048
    /// }
2049
    ///
2050
    /// // These are more bytes than are needed to encode a `Packet`.
2051
    /// let bytes = &mut [0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2052
    ///
2053
    /// let (prefix, packet) = Packet::try_mut_from_suffix(bytes).unwrap();
2054
    ///
2055
    /// assert_eq!(packet.mug_size, 240);
2056
    /// assert_eq!(packet.temperature, 77);
2057
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2058
    /// assert_eq!(prefix, &[0u8][..]);
2059
    ///
2060
    /// prefix[0] = 111;
2061
    /// packet.temperature = 222;
2062
    ///
2063
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
2064
    ///
2065
    /// // These bytes are not a valid instance of `Packet`.
2066
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
2067
    /// assert!(Packet::try_mut_from_suffix(bytes).is_err());
2068
    /// ```
2069
    #[must_use = "has no side effects"]
2070
    #[inline]
2071
0
    fn try_mut_from_suffix(
2072
0
        source: &mut [u8],
2073
0
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
2074
0
    where
2075
0
        Self: KnownLayout + IntoBytes,
2076
    {
2077
0
        static_assert_dst_is_not_zst!(Self);
2078
0
        try_mut_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
2079
0
    }
2080
2081
    /// Attempts to interpret the given `source` as a `&Self` with a DST length
2082
    /// equal to `count`.
2083
    ///
2084
    /// This method attempts to return a reference to `source` interpreted as a
2085
    /// `Self` with `count` trailing elements. If the length of `source` is not
2086
    /// equal to the size of `Self` with `count` elements, if `source` is not
2087
    /// appropriately aligned, or if `source` does not contain a valid instance
2088
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2089
    /// you can [infallibly discard the alignment error][ConvertError::from].
2090
    ///
2091
    /// [self-unaligned]: Unaligned
2092
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2093
    ///
2094
    /// # Examples
2095
    ///
2096
    /// ```
2097
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2098
    /// use zerocopy::TryFromBytes;
2099
    /// # use zerocopy_derive::*;
2100
    ///
2101
    /// // The only valid value of this type is the byte `0xC0`
2102
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2103
    /// #[repr(u8)]
2104
    /// enum C0 { xC0 = 0xC0 }
2105
    ///
2106
    /// // The only valid value of this type is the bytes `0xC0C0`.
2107
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2108
    /// #[repr(C)]
2109
    /// struct C0C0(C0, C0);
2110
    ///
2111
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2112
    /// #[repr(C)]
2113
    /// struct Packet {
2114
    ///     magic_number: C0C0,
2115
    ///     mug_size: u8,
2116
    ///     temperature: u8,
2117
    ///     marshmallows: [[u8; 2]],
2118
    /// }
2119
    ///
2120
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2121
    ///
2122
    /// let packet = Packet::try_ref_from_bytes_with_elems(bytes, 3).unwrap();
2123
    ///
2124
    /// assert_eq!(packet.mug_size, 240);
2125
    /// assert_eq!(packet.temperature, 77);
2126
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2127
    ///
2128
    /// // These bytes are not a valid instance of `Packet`.
2129
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2130
    /// assert!(Packet::try_ref_from_bytes_with_elems(bytes, 3).is_err());
2131
    /// ```
2132
    ///
2133
    /// Since an explicit `count` is provided, this method supports types with
2134
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_bytes`]
2135
    /// which do not take an explicit count do not support such types.
2136
    ///
2137
    /// ```
2138
    /// use core::num::NonZeroU16;
2139
    /// use zerocopy::*;
2140
    /// # use zerocopy_derive::*;
2141
    ///
2142
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2143
    /// #[repr(C)]
2144
    /// struct ZSTy {
2145
    ///     leading_sized: NonZeroU16,
2146
    ///     trailing_dst: [()],
2147
    /// }
2148
    ///
2149
    /// let src = 0xCAFEu16.as_bytes();
2150
    /// let zsty = ZSTy::try_ref_from_bytes_with_elems(src, 42).unwrap();
2151
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2152
    /// ```
2153
    ///
2154
    /// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
2155
    #[must_use = "has no side effects"]
2156
    #[inline]
2157
0
    fn try_ref_from_bytes_with_elems(
2158
0
        source: &[u8],
2159
0
        count: usize,
2160
0
    ) -> Result<&Self, TryCastError<&[u8], Self>>
2161
0
    where
2162
0
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2163
    {
2164
0
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(Some(count))
2165
        {
2166
0
            Ok(source) => {
2167
                // This call may panic. If that happens, it doesn't cause any soundness
2168
                // issues, as we have not generated any invalid state which we need to
2169
                // fix before returning.
2170
                //
2171
                // Note that one panic or post-monomorphization error condition is
2172
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2173
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
2174
                // condition will not happen.
2175
0
                match source.try_into_valid() {
2176
0
                    Ok(source) => Ok(source.as_ref()),
2177
0
                    Err(e) => {
2178
0
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
2179
                    }
2180
                }
2181
            }
2182
0
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
2183
        }
2184
0
    }
2185
2186
    /// Attempts to interpret the prefix of the given `source` as a `&Self` with
2187
    /// a DST length equal to `count`.
2188
    ///
2189
    /// This method attempts to return a reference to the prefix of `source`
2190
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2191
    /// to the remaining bytes. If the length of `source` is less than the size
2192
    /// of `Self` with `count` elements, if `source` is not appropriately
2193
    /// aligned, or if the prefix of `source` does not contain a valid instance
2194
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2195
    /// you can [infallibly discard the alignment error][ConvertError::from].
2196
    ///
2197
    /// [self-unaligned]: Unaligned
2198
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2199
    ///
2200
    /// # Examples
2201
    ///
2202
    /// ```
2203
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2204
    /// use zerocopy::TryFromBytes;
2205
    /// # use zerocopy_derive::*;
2206
    ///
2207
    /// // The only valid value of this type is the byte `0xC0`
2208
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2209
    /// #[repr(u8)]
2210
    /// enum C0 { xC0 = 0xC0 }
2211
    ///
2212
    /// // The only valid value of this type is the bytes `0xC0C0`.
2213
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2214
    /// #[repr(C)]
2215
    /// struct C0C0(C0, C0);
2216
    ///
2217
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2218
    /// #[repr(C)]
2219
    /// struct Packet {
2220
    ///     magic_number: C0C0,
2221
    ///     mug_size: u8,
2222
    ///     temperature: u8,
2223
    ///     marshmallows: [[u8; 2]],
2224
    /// }
2225
    ///
2226
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2227
    ///
2228
    /// let (packet, suffix) = Packet::try_ref_from_prefix_with_elems(bytes, 3).unwrap();
2229
    ///
2230
    /// assert_eq!(packet.mug_size, 240);
2231
    /// assert_eq!(packet.temperature, 77);
2232
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2233
    /// assert_eq!(suffix, &[8u8][..]);
2234
    ///
2235
    /// // These bytes are not a valid instance of `Packet`.
2236
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2237
    /// assert!(Packet::try_ref_from_prefix_with_elems(bytes, 3).is_err());
2238
    /// ```
2239
    ///
2240
    /// Since an explicit `count` is provided, this method supports types with
2241
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
2242
    /// which do not take an explicit count do not support such types.
2243
    ///
2244
    /// ```
2245
    /// use core::num::NonZeroU16;
2246
    /// use zerocopy::*;
2247
    /// # use zerocopy_derive::*;
2248
    ///
2249
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2250
    /// #[repr(C)]
2251
    /// struct ZSTy {
2252
    ///     leading_sized: NonZeroU16,
2253
    ///     trailing_dst: [()],
2254
    /// }
2255
    ///
2256
    /// let src = 0xCAFEu16.as_bytes();
2257
    /// let (zsty, _) = ZSTy::try_ref_from_prefix_with_elems(src, 42).unwrap();
2258
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2259
    /// ```
2260
    ///
2261
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
2262
    #[must_use = "has no side effects"]
2263
    #[inline]
2264
0
    fn try_ref_from_prefix_with_elems(
2265
0
        source: &[u8],
2266
0
        count: usize,
2267
0
    ) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
2268
0
    where
2269
0
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2270
    {
2271
0
        try_ref_from_prefix_suffix(source, CastType::Prefix, Some(count))
2272
0
    }
2273
2274
    /// Attempts to interpret the suffix of the given `source` as a `&Self` with
2275
    /// a DST length equal to `count`.
2276
    ///
2277
    /// This method attempts to return a reference to the suffix of `source`
2278
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2279
    /// to the preceding bytes. If the length of `source` is less than the size
2280
    /// of `Self` with `count` elements, if the suffix of `source` is not
2281
    /// appropriately aligned, or if the suffix of `source` does not contain a
2282
    /// valid instance of `Self`, this returns `Err`. If [`Self:
2283
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2284
    /// error][ConvertError::from].
2285
    ///
2286
    /// [self-unaligned]: Unaligned
2287
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2288
    ///
2289
    /// # Examples
2290
    ///
2291
    /// ```
2292
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2293
    /// use zerocopy::TryFromBytes;
2294
    /// # use zerocopy_derive::*;
2295
    ///
2296
    /// // The only valid value of this type is the byte `0xC0`
2297
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2298
    /// #[repr(u8)]
2299
    /// enum C0 { xC0 = 0xC0 }
2300
    ///
2301
    /// // The only valid value of this type is the bytes `0xC0C0`.
2302
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2303
    /// #[repr(C)]
2304
    /// struct C0C0(C0, C0);
2305
    ///
2306
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
2307
    /// #[repr(C)]
2308
    /// struct Packet {
2309
    ///     magic_number: C0C0,
2310
    ///     mug_size: u8,
2311
    ///     temperature: u8,
2312
    ///     marshmallows: [[u8; 2]],
2313
    /// }
2314
    ///
2315
    /// let bytes = &[123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2316
    ///
2317
    /// let (prefix, packet) = Packet::try_ref_from_suffix_with_elems(bytes, 3).unwrap();
2318
    ///
2319
    /// assert_eq!(packet.mug_size, 240);
2320
    /// assert_eq!(packet.temperature, 77);
2321
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2322
    /// assert_eq!(prefix, &[123u8][..]);
2323
    ///
2324
    /// // These bytes are not a valid instance of `Packet`.
2325
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2326
    /// assert!(Packet::try_ref_from_suffix_with_elems(bytes, 3).is_err());
2327
    /// ```
2328
    ///
2329
    /// Since an explicit `count` is provided, this method supports types with
2330
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
2331
    /// which do not take an explicit count do not support such types.
2332
    ///
2333
    /// ```
2334
    /// use core::num::NonZeroU16;
2335
    /// use zerocopy::*;
2336
    /// # use zerocopy_derive::*;
2337
    ///
2338
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
2339
    /// #[repr(C)]
2340
    /// struct ZSTy {
2341
    ///     leading_sized: NonZeroU16,
2342
    ///     trailing_dst: [()],
2343
    /// }
2344
    ///
2345
    /// let src = 0xCAFEu16.as_bytes();
2346
    /// let (_, zsty) = ZSTy::try_ref_from_suffix_with_elems(src, 42).unwrap();
2347
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2348
    /// ```
2349
    ///
2350
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
2351
    #[must_use = "has no side effects"]
2352
    #[inline]
2353
0
    fn try_ref_from_suffix_with_elems(
2354
0
        source: &[u8],
2355
0
        count: usize,
2356
0
    ) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
2357
0
    where
2358
0
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
2359
    {
2360
0
        try_ref_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2361
0
    }
2362
2363
    /// Attempts to interpret the given `source` as a `&mut Self` with a DST
2364
    /// length equal to `count`.
2365
    ///
2366
    /// This method attempts to return a reference to `source` interpreted as a
2367
    /// `Self` with `count` trailing elements. If the length of `source` is not
2368
    /// equal to the size of `Self` with `count` elements, if `source` is not
2369
    /// appropriately aligned, or if `source` does not contain a valid instance
2370
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2371
    /// you can [infallibly discard the alignment error][ConvertError::from].
2372
    ///
2373
    /// [self-unaligned]: Unaligned
2374
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2375
    ///
2376
    /// # Examples
2377
    ///
2378
    /// ```
2379
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2380
    /// use zerocopy::TryFromBytes;
2381
    /// # use zerocopy_derive::*;
2382
    ///
2383
    /// // The only valid value of this type is the byte `0xC0`
2384
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2385
    /// #[repr(u8)]
2386
    /// enum C0 { xC0 = 0xC0 }
2387
    ///
2388
    /// // The only valid value of this type is the bytes `0xC0C0`.
2389
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2390
    /// #[repr(C)]
2391
    /// struct C0C0(C0, C0);
2392
    ///
2393
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2394
    /// #[repr(C, packed)]
2395
    /// struct Packet {
2396
    ///     magic_number: C0C0,
2397
    ///     mug_size: u8,
2398
    ///     temperature: u8,
2399
    ///     marshmallows: [[u8; 2]],
2400
    /// }
2401
    ///
2402
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2403
    ///
2404
    /// let packet = Packet::try_mut_from_bytes_with_elems(bytes, 3).unwrap();
2405
    ///
2406
    /// assert_eq!(packet.mug_size, 240);
2407
    /// assert_eq!(packet.temperature, 77);
2408
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2409
    ///
2410
    /// packet.temperature = 111;
2411
    ///
2412
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7]);
2413
    ///
2414
    /// // These bytes are not a valid instance of `Packet`.
2415
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
2416
    /// assert!(Packet::try_mut_from_bytes_with_elems(bytes, 3).is_err());
2417
    /// ```
2418
    ///
2419
    /// Since an explicit `count` is provided, this method supports types with
2420
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_bytes`]
2421
    /// which do not take an explicit count do not support such types.
2422
    ///
2423
    /// ```
2424
    /// use core::num::NonZeroU16;
2425
    /// use zerocopy::*;
2426
    /// # use zerocopy_derive::*;
2427
    ///
2428
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2429
    /// #[repr(C, packed)]
2430
    /// struct ZSTy {
2431
    ///     leading_sized: NonZeroU16,
2432
    ///     trailing_dst: [()],
2433
    /// }
2434
    ///
2435
    /// let mut src = 0xCAFEu16;
2436
    /// let src = src.as_mut_bytes();
2437
    /// let zsty = ZSTy::try_mut_from_bytes_with_elems(src, 42).unwrap();
2438
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2439
    /// ```
2440
    ///
2441
    /// [`try_mut_from_bytes`]: TryFromBytes::try_mut_from_bytes
2442
    #[must_use = "has no side effects"]
2443
    #[inline]
2444
0
    fn try_mut_from_bytes_with_elems(
2445
0
        source: &mut [u8],
2446
0
        count: usize,
2447
0
    ) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
2448
0
    where
2449
0
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2450
    {
2451
0
        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(Some(count))
2452
        {
2453
0
            Ok(source) => {
2454
                // This call may panic. If that happens, it doesn't cause any soundness
2455
                // issues, as we have not generated any invalid state which we need to
2456
                // fix before returning.
2457
                //
2458
                // Note that one panic or post-monomorphization error condition is
2459
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2460
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
2461
                // condition will not happen.
2462
0
                match source.try_into_valid() {
2463
0
                    Ok(source) => Ok(source.as_mut()),
2464
0
                    Err(e) => {
2465
0
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
2466
                    }
2467
                }
2468
            }
2469
0
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
2470
        }
2471
0
    }
2472
2473
    /// Attempts to interpret the prefix of the given `source` as a `&mut Self`
2474
    /// with a DST length equal to `count`.
2475
    ///
2476
    /// This method attempts to return a reference to the prefix of `source`
2477
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2478
    /// to the remaining bytes. If the length of `source` is less than the size
2479
    /// of `Self` with `count` elements, if `source` is not appropriately
2480
    /// aligned, or if the prefix of `source` does not contain a valid instance
2481
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
2482
    /// you can [infallibly discard the alignment error][ConvertError::from].
2483
    ///
2484
    /// [self-unaligned]: Unaligned
2485
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2486
    ///
2487
    /// # Examples
2488
    ///
2489
    /// ```
2490
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2491
    /// use zerocopy::TryFromBytes;
2492
    /// # use zerocopy_derive::*;
2493
    ///
2494
    /// // The only valid value of this type is the byte `0xC0`
2495
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2496
    /// #[repr(u8)]
2497
    /// enum C0 { xC0 = 0xC0 }
2498
    ///
2499
    /// // The only valid value of this type is the bytes `0xC0C0`.
2500
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2501
    /// #[repr(C)]
2502
    /// struct C0C0(C0, C0);
2503
    ///
2504
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2505
    /// #[repr(C, packed)]
2506
    /// struct Packet {
2507
    ///     magic_number: C0C0,
2508
    ///     mug_size: u8,
2509
    ///     temperature: u8,
2510
    ///     marshmallows: [[u8; 2]],
2511
    /// }
2512
    ///
2513
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
2514
    ///
2515
    /// let (packet, suffix) = Packet::try_mut_from_prefix_with_elems(bytes, 3).unwrap();
2516
    ///
2517
    /// assert_eq!(packet.mug_size, 240);
2518
    /// assert_eq!(packet.temperature, 77);
2519
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2520
    /// assert_eq!(suffix, &[8u8][..]);
2521
    ///
2522
    /// packet.temperature = 111;
2523
    /// suffix[0] = 222;
2524
    ///
2525
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7, 222]);
2526
    ///
2527
    /// // These bytes are not a valid instance of `Packet`.
2528
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2529
    /// assert!(Packet::try_mut_from_prefix_with_elems(bytes, 3).is_err());
2530
    /// ```
2531
    ///
2532
    /// Since an explicit `count` is provided, this method supports types with
2533
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
2534
    /// which do not take an explicit count do not support such types.
2535
    ///
2536
    /// ```
2537
    /// use core::num::NonZeroU16;
2538
    /// use zerocopy::*;
2539
    /// # use zerocopy_derive::*;
2540
    ///
2541
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2542
    /// #[repr(C, packed)]
2543
    /// struct ZSTy {
2544
    ///     leading_sized: NonZeroU16,
2545
    ///     trailing_dst: [()],
2546
    /// }
2547
    ///
2548
    /// let mut src = 0xCAFEu16;
2549
    /// let src = src.as_mut_bytes();
2550
    /// let (zsty, _) = ZSTy::try_mut_from_prefix_with_elems(src, 42).unwrap();
2551
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2552
    /// ```
2553
    ///
2554
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
2555
    #[must_use = "has no side effects"]
2556
    #[inline]
2557
0
    fn try_mut_from_prefix_with_elems(
2558
0
        source: &mut [u8],
2559
0
        count: usize,
2560
0
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
2561
0
    where
2562
0
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2563
    {
2564
0
        try_mut_from_prefix_suffix(source, CastType::Prefix, Some(count))
2565
0
    }
2566
2567
    /// Attempts to interpret the suffix of the given `source` as a `&mut Self`
2568
    /// with a DST length equal to `count`.
2569
    ///
2570
    /// This method attempts to return a reference to the suffix of `source`
2571
    /// interpreted as a `Self` with `count` trailing elements, and a reference
2572
    /// to the preceding bytes. If the length of `source` is less than the size
2573
    /// of `Self` with `count` elements, if the suffix of `source` is not
2574
    /// appropriately aligned, or if the suffix of `source` does not contain a
2575
    /// valid instance of `Self`, this returns `Err`. If [`Self:
2576
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
2577
    /// error][ConvertError::from].
2578
    ///
2579
    /// [self-unaligned]: Unaligned
2580
    /// [slice-dst]: KnownLayout#dynamically-sized-types
2581
    ///
2582
    /// # Examples
2583
    ///
2584
    /// ```
2585
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2586
    /// use zerocopy::TryFromBytes;
2587
    /// # use zerocopy_derive::*;
2588
    ///
2589
    /// // The only valid value of this type is the byte `0xC0`
2590
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2591
    /// #[repr(u8)]
2592
    /// enum C0 { xC0 = 0xC0 }
2593
    ///
2594
    /// // The only valid value of this type is the bytes `0xC0C0`.
2595
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2596
    /// #[repr(C)]
2597
    /// struct C0C0(C0, C0);
2598
    ///
2599
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2600
    /// #[repr(C, packed)]
2601
    /// struct Packet {
2602
    ///     magic_number: C0C0,
2603
    ///     mug_size: u8,
2604
    ///     temperature: u8,
2605
    ///     marshmallows: [[u8; 2]],
2606
    /// }
2607
    ///
2608
    /// let bytes = &mut [123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
2609
    ///
2610
    /// let (prefix, packet) = Packet::try_mut_from_suffix_with_elems(bytes, 3).unwrap();
2611
    ///
2612
    /// assert_eq!(packet.mug_size, 240);
2613
    /// assert_eq!(packet.temperature, 77);
2614
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
2615
    /// assert_eq!(prefix, &[123u8][..]);
2616
    ///
2617
    /// prefix[0] = 111;
2618
    /// packet.temperature = 222;
2619
    ///
2620
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
2621
    ///
2622
    /// // These bytes are not a valid instance of `Packet`.
2623
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
2624
    /// assert!(Packet::try_mut_from_suffix_with_elems(bytes, 3).is_err());
2625
    /// ```
2626
    ///
2627
    /// Since an explicit `count` is provided, this method supports types with
2628
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
2629
    /// which do not take an explicit count do not support such types.
2630
    ///
2631
    /// ```
2632
    /// use core::num::NonZeroU16;
2633
    /// use zerocopy::*;
2634
    /// # use zerocopy_derive::*;
2635
    ///
2636
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
2637
    /// #[repr(C, packed)]
2638
    /// struct ZSTy {
2639
    ///     leading_sized: NonZeroU16,
2640
    ///     trailing_dst: [()],
2641
    /// }
2642
    ///
2643
    /// let mut src = 0xCAFEu16;
2644
    /// let src = src.as_mut_bytes();
2645
    /// let (_, zsty) = ZSTy::try_mut_from_suffix_with_elems(src, 42).unwrap();
2646
    /// assert_eq!(zsty.trailing_dst.len(), 42);
2647
    /// ```
2648
    ///
2649
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
2650
    #[must_use = "has no side effects"]
2651
    #[inline]
2652
0
    fn try_mut_from_suffix_with_elems(
2653
0
        source: &mut [u8],
2654
0
        count: usize,
2655
0
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
2656
0
    where
2657
0
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
2658
    {
2659
0
        try_mut_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
2660
0
    }
2661
2662
    /// Attempts to read the given `source` as a `Self`.
2663
    ///
2664
    /// If `source.len() != size_of::<Self>()` or the bytes are not a valid
2665
    /// instance of `Self`, this returns `Err`.
2666
    ///
2667
    /// # Examples
2668
    ///
2669
    /// ```
2670
    /// use zerocopy::TryFromBytes;
2671
    /// # use zerocopy_derive::*;
2672
    ///
2673
    /// // The only valid value of this type is the byte `0xC0`
2674
    /// #[derive(TryFromBytes)]
2675
    /// #[repr(u8)]
2676
    /// enum C0 { xC0 = 0xC0 }
2677
    ///
2678
    /// // The only valid value of this type is the bytes `0xC0C0`.
2679
    /// #[derive(TryFromBytes)]
2680
    /// #[repr(C)]
2681
    /// struct C0C0(C0, C0);
2682
    ///
2683
    /// #[derive(TryFromBytes)]
2684
    /// #[repr(C)]
2685
    /// struct Packet {
2686
    ///     magic_number: C0C0,
2687
    ///     mug_size: u8,
2688
    ///     temperature: u8,
2689
    /// }
2690
    ///
2691
    /// let bytes = &[0xC0, 0xC0, 240, 77][..];
2692
    ///
2693
    /// let packet = Packet::try_read_from_bytes(bytes).unwrap();
2694
    ///
2695
    /// assert_eq!(packet.mug_size, 240);
2696
    /// assert_eq!(packet.temperature, 77);
2697
    ///
2698
    /// // These bytes are not a valid instance of `Packet`.
2699
    /// let bytes = &mut [0x10, 0xC0, 240, 77][..];
2700
    /// assert!(Packet::try_read_from_bytes(bytes).is_err());
2701
    /// ```
2702
    #[must_use = "has no side effects"]
2703
    #[inline]
2704
0
    fn try_read_from_bytes(source: &[u8]) -> Result<Self, TryReadError<&[u8], Self>>
2705
0
    where
2706
0
        Self: Sized,
2707
    {
2708
0
        let candidate = match CoreMaybeUninit::<Self>::read_from_bytes(source) {
2709
0
            Ok(candidate) => candidate,
2710
0
            Err(e) => {
2711
0
                return Err(TryReadError::Size(e.with_dst()));
2712
            }
2713
        };
2714
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2715
        // its bytes are initialized.
2716
0
        unsafe { try_read_from(source, candidate) }
2717
0
    }
2718
2719
    /// Attempts to read a `Self` from the prefix of the given `source`.
2720
    ///
2721
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
2722
    /// of `source`, returning that `Self` and any remaining bytes. If
2723
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
2724
    /// of `Self`, it returns `Err`.
2725
    ///
2726
    /// # Examples
2727
    ///
2728
    /// ```
2729
    /// use zerocopy::TryFromBytes;
2730
    /// # use zerocopy_derive::*;
2731
    ///
2732
    /// // The only valid value of this type is the byte `0xC0`
2733
    /// #[derive(TryFromBytes)]
2734
    /// #[repr(u8)]
2735
    /// enum C0 { xC0 = 0xC0 }
2736
    ///
2737
    /// // The only valid value of this type is the bytes `0xC0C0`.
2738
    /// #[derive(TryFromBytes)]
2739
    /// #[repr(C)]
2740
    /// struct C0C0(C0, C0);
2741
    ///
2742
    /// #[derive(TryFromBytes)]
2743
    /// #[repr(C)]
2744
    /// struct Packet {
2745
    ///     magic_number: C0C0,
2746
    ///     mug_size: u8,
2747
    ///     temperature: u8,
2748
    /// }
2749
    ///
2750
    /// // These are more bytes than are needed to encode a `Packet`.
2751
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
2752
    ///
2753
    /// let (packet, suffix) = Packet::try_read_from_prefix(bytes).unwrap();
2754
    ///
2755
    /// assert_eq!(packet.mug_size, 240);
2756
    /// assert_eq!(packet.temperature, 77);
2757
    /// assert_eq!(suffix, &[0u8, 1, 2, 3, 4, 5, 6][..]);
2758
    ///
2759
    /// // These bytes are not a valid instance of `Packet`.
2760
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
2761
    /// assert!(Packet::try_read_from_prefix(bytes).is_err());
2762
    /// ```
2763
    #[must_use = "has no side effects"]
2764
    #[inline]
2765
0
    fn try_read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), TryReadError<&[u8], Self>>
2766
0
    where
2767
0
        Self: Sized,
2768
    {
2769
0
        let (candidate, suffix) = match CoreMaybeUninit::<Self>::read_from_prefix(source) {
2770
0
            Ok(candidate) => candidate,
2771
0
            Err(e) => {
2772
0
                return Err(TryReadError::Size(e.with_dst()));
2773
            }
2774
        };
2775
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2776
        // its bytes are initialized.
2777
0
        unsafe { try_read_from(source, candidate).map(|slf| (slf, suffix)) }
2778
0
    }
2779
2780
    /// Attempts to read a `Self` from the suffix of the given `source`.
2781
    ///
2782
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
2783
    /// of `source`, returning that `Self` and any preceding bytes. If
2784
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
2785
    /// of `Self`, it returns `Err`.
2786
    ///
2787
    /// # Examples
2788
    ///
2789
    /// ```
2790
    /// # #![allow(non_camel_case_types)] // For C0::xC0
2791
    /// use zerocopy::TryFromBytes;
2792
    /// # use zerocopy_derive::*;
2793
    ///
2794
    /// // The only valid value of this type is the byte `0xC0`
2795
    /// #[derive(TryFromBytes)]
2796
    /// #[repr(u8)]
2797
    /// enum C0 { xC0 = 0xC0 }
2798
    ///
2799
    /// // The only valid value of this type is the bytes `0xC0C0`.
2800
    /// #[derive(TryFromBytes)]
2801
    /// #[repr(C)]
2802
    /// struct C0C0(C0, C0);
2803
    ///
2804
    /// #[derive(TryFromBytes)]
2805
    /// #[repr(C)]
2806
    /// struct Packet {
2807
    ///     magic_number: C0C0,
2808
    ///     mug_size: u8,
2809
    ///     temperature: u8,
2810
    /// }
2811
    ///
2812
    /// // These are more bytes than are needed to encode a `Packet`.
2813
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0xC0, 0xC0, 240, 77][..];
2814
    ///
2815
    /// let (prefix, packet) = Packet::try_read_from_suffix(bytes).unwrap();
2816
    ///
2817
    /// assert_eq!(packet.mug_size, 240);
2818
    /// assert_eq!(packet.temperature, 77);
2819
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
2820
    ///
2821
    /// // These bytes are not a valid instance of `Packet`.
2822
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0x10, 0xC0, 240, 77][..];
2823
    /// assert!(Packet::try_read_from_suffix(bytes).is_err());
2824
    /// ```
2825
    #[must_use = "has no side effects"]
2826
    #[inline]
2827
0
    fn try_read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), TryReadError<&[u8], Self>>
2828
0
    where
2829
0
        Self: Sized,
2830
    {
2831
0
        let (prefix, candidate) = match CoreMaybeUninit::<Self>::read_from_suffix(source) {
2832
0
            Ok(candidate) => candidate,
2833
0
            Err(e) => {
2834
0
                return Err(TryReadError::Size(e.with_dst()));
2835
            }
2836
        };
2837
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
2838
        // its bytes are initialized.
2839
0
        unsafe { try_read_from(source, candidate).map(|slf| (prefix, slf)) }
2840
0
    }
2841
}
2842
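// Attempts to cast the prefix or suffix (per `cast_type`) of `source` to `&T`,
// optionally with an explicit trailing-element count (`meta`), and then checks
// that the target bytes are a bit-valid `T`.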
2843
#[inline(always)]
2844
0
fn try_ref_from_prefix_suffix<T: TryFromBytes + KnownLayout + Immutable + ?Sized>(
2845
0
    source: &[u8],
2846
0
    cast_type: CastType,
2847
0
    meta: Option<T::PointerMetadata>,
2848
0
) -> Result<(&T, &[u8]), TryCastError<&[u8], T>> {
2849
0
    match Ptr::from_ref(source).try_cast_into::<T, BecauseImmutable>(cast_type, meta) {
2850
0
        Ok((source, prefix_suffix)) => {
2851
            // This call may panic. If that happens, it doesn't cause any soundness
2852
            // issues, as we have not generated any invalid state which we need to
2853
            // fix before returning.
2854
            //
2855
            // Note that one panic or post-monomorphization error condition is
2856
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2857
            // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
2858
            // condition will not happen.
2859
0
            match source.try_into_valid() {
2860
0
                Ok(valid) => Ok((valid.as_ref(), prefix_suffix.as_ref())),
2861
0
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into()),
2862
            }
2863
        }
2864
0
        Err(e) => Err(e.map_src(Ptr::as_ref).into()),
2865
    }
2866
0
}
2867
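// Exclusive-reference counterpart of `try_ref_from_prefix_suffix`: attempts to
// cast the prefix or suffix (per `cast_type`) of `candidate` to `&mut T`,
// optionally with an explicit trailing-element count (`meta`), and then checks
// that the target bytes are a bit-valid `T`.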
2868
#[inline(always)]
2869
0
fn try_mut_from_prefix_suffix<T: IntoBytes + TryFromBytes + KnownLayout + ?Sized>(
2870
0
    candidate: &mut [u8],
2871
0
    cast_type: CastType,
2872
0
    meta: Option<T::PointerMetadata>,
2873
0
) -> Result<(&mut T, &mut [u8]), TryCastError<&mut [u8], T>> {
2874
0
    match Ptr::from_mut(candidate).try_cast_into::<T, BecauseExclusive>(cast_type, meta) {
2875
0
        Ok((candidate, prefix_suffix)) => {
2876
            // This call may panic. If that happens, it doesn't cause any soundness
2877
            // issues, as we have not generated any invalid state which we need to
2878
            // fix before returning.
2879
            //
2880
            // Note that one panic or post-monomorphization error condition is
2881
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
2882
            // pointer when `Self: !Immutable`. Since the pointer here has `Exclusive`
2883
            // aliasing, this panic condition will not happen.
2884
0
            match candidate.try_into_valid() {
2885
0
                Ok(valid) => Ok((valid.as_mut(), prefix_suffix.as_mut())),
2886
0
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into()),
2887
            }
2888
        }
2889
0
        Err(e) => Err(e.map_src(Ptr::as_mut).into()),
2890
    }
2891
0
}
2892
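// Swaps the two elements of a tuple. Used to convert the `(Self, remaining)`
// ordering produced by the shared prefix/suffix helpers into the
// `(remaining, Self)` ordering returned by the `*_suffix` methods.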
2893
#[inline(always)]
2894
0
fn swap<T, U>((t, u): (T, U)) -> (U, T) {
2895
0
    (u, t)
2896
0
}
2897
2898
/// # Safety
2899
///
2900
/// All bytes of `candidate` must be initialized.
2901
#[inline(always)]
2902
0
unsafe fn try_read_from<S, T: TryFromBytes>(
2903
0
    source: S,
2904
0
    mut candidate: CoreMaybeUninit<T>,
2905
0
) -> Result<T, TryReadError<S, T>> {
2906
    // We use `from_mut` despite not mutating via `c_ptr` so that we don't need
2907
    // to add a `T: Immutable` bound.
2908
0
    let c_ptr = Ptr::from_mut(&mut candidate);
2909
    // SAFETY: `c_ptr` has no uninitialized sub-ranges because it derived from
2910
    // `candidate`, which the caller promises is entirely initialized. Since
2911
    // `candidate` is a `MaybeUninit`, it has no validity requirements, and so
2912
    // no values written to an `Initialized` `c_ptr` can violate its validity.
2913
    // Since `c_ptr` has `Exclusive` aliasing, no mutations may happen except
2914
    // via `c_ptr` so long as it is live, so we don't need to worry about the
2915
    // fact that `c_ptr` may have more restricted validity than `candidate`.
2916
0
    let c_ptr = unsafe { c_ptr.assume_validity::<invariant::Initialized>() };
2917
0
    let c_ptr = c_ptr.transmute();
2918
2919
    // Since we don't have `T: KnownLayout`, we hack around that by using
2920
    // `Wrapping<T>`, which implements `KnownLayout` even if `T` doesn't.
2921
    //
2922
    // This call may panic. If that happens, it doesn't cause any soundness
2923
    // issues, as we have not generated any invalid state which we need to fix
2924
    // before returning.
2925
    //
2926
    // Note that one panic or post-monomorphization error condition is calling
2927
    // `try_into_valid` (and thus `is_bit_valid`) with a shared pointer when
2928
    // `Self: !Immutable`. Since `c_ptr` has `Exclusive` aliasing, this panic
2929
    // condition will not happen.
2930
0
    if !Wrapping::<T>::is_bit_valid(c_ptr.forget_aligned()) {
2931
0
        return Err(ValidityError::new(source).into());
2932
0
    }
2933
2934
0
    fn _assert_same_size_and_validity<T>()
2935
0
    where
2936
0
        Wrapping<T>: pointer::TransmuteFrom<T, invariant::Valid, invariant::Valid>,
2937
0
        T: pointer::TransmuteFrom<Wrapping<T>, invariant::Valid, invariant::Valid>,
2938
    {
2939
0
    }
2940
2941
0
    _assert_same_size_and_validity::<T>();
2942
2943
    // SAFETY: We just validated that `candidate` contains a valid
2944
    // `Wrapping<T>`, which has the same size and bit validity as `T`, as
2945
    // guaranteed by the preceding type assertion.
2946
0
    Ok(unsafe { candidate.assume_init() })
2947
0
}
2948
2949
/// Types for which a sequence of `0` bytes is a valid instance.
2950
///
2951
/// Any memory region of the appropriate length which is guaranteed to contain
2952
/// only zero bytes can be viewed as any `FromZeros` type with no runtime
2953
/// overhead. This is useful whenever memory is known to be in a zeroed state,
2954
/// such as memory returned from some allocation routines.
2955
///
2956
/// # Warning: Padding bytes
2957
///
2958
/// Note that, when a value is moved or copied, only the non-padding bytes of
2959
/// that value are guaranteed to be preserved. It is unsound to assume that
2960
/// values written to padding bytes are preserved after a move or copy. For more
2961
/// details, see the [`FromBytes` docs][frombytes-warning-padding-bytes].
2962
///
2963
/// [frombytes-warning-padding-bytes]: FromBytes#warning-padding-bytes
2964
///
2965
/// # Implementation
2966
///
2967
/// **Do not implement this trait yourself!** Instead, use
2968
/// [`#[derive(FromZeros)]`][derive]; e.g.:
2969
///
2970
/// ```
2971
/// # use zerocopy_derive::{FromZeros, Immutable};
2972
/// #[derive(FromZeros)]
2973
/// struct MyStruct {
2974
/// # /*
2975
///     ...
2976
/// # */
2977
/// }
2978
///
2979
/// #[derive(FromZeros)]
2980
/// #[repr(u8)]
2981
/// enum MyEnum {
2982
/// #   Variant0,
2983
/// # /*
2984
///     ...
2985
/// # */
2986
/// }
2987
///
2988
/// #[derive(FromZeros, Immutable)]
2989
/// union MyUnion {
2990
/// #   variant: u8,
2991
/// # /*
2992
///     ...
2993
/// # */
2994
/// }
2995
/// ```
2996
///
2997
/// This derive performs a sophisticated, compile-time safety analysis to
2998
/// determine whether a type is `FromZeros`.
2999
///
3000
/// # Safety
3001
///
3002
/// *This section describes what is required in order for `T: FromZeros`, and
3003
/// what unsafe code may assume of such types. If you don't plan on implementing
3004
/// `FromZeros` manually, and you don't plan on writing unsafe code that
3005
/// operates on `FromZeros` types, then you don't need to read this section.*
3006
///
3007
/// If `T: FromZeros`, then unsafe code may assume that it is sound to produce a
3008
/// `T` whose bytes are all initialized to zero. If a type is marked as
3009
/// `FromZeros` which violates this contract, it may cause undefined behavior.
3010
///
3011
/// `#[derive(FromZeros)]` only permits [types which satisfy these
3012
/// requirements][derive-analysis].
3013
///
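/// For example, here is a minimal sketch (not part of the original docs) of the
/// kind of unsafe code this guarantee justifies; `make_zeroed` is a
/// hypothetical helper, not a zerocopy API:
///
/// ```
/// # use zerocopy::FromZeros;
/// fn make_zeroed<T: FromZeros>() -> T {
///     // SAFETY: `T: FromZeros` guarantees that the all-zeros bit pattern is a
///     // valid instance of `T`.
///     unsafe { core::mem::zeroed() }
/// }
/// ```
///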
3014
#[cfg_attr(
3015
    feature = "derive",
3016
    doc = "[derive]: zerocopy_derive::FromZeros",
3017
    doc = "[derive-analysis]: zerocopy_derive::FromZeros#analysis"
3018
)]
3019
#[cfg_attr(
3020
    not(feature = "derive"),
3021
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html"),
3022
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html#analysis"),
3023
)]
3024
#[cfg_attr(
3025
    not(no_zerocopy_diagnostic_on_unimplemented_1_78_0),
3026
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromZeros)]` to `{Self}`")
3027
)]
3028
pub unsafe trait FromZeros: TryFromBytes {
3029
    // The `Self: Sized` bound makes it so that `FromZeros` is still object
3030
    // safe.
3031
    #[doc(hidden)]
3032
    fn only_derive_is_allowed_to_implement_this_trait()
3033
    where
3034
        Self: Sized;
3035
3036
    /// Overwrites `self` with zeros.
3037
    ///
3038
    /// Sets every byte in `self` to 0. While this is similar to doing `*self =
3039
    /// Self::new_zeroed()`, it differs in that `zero` does not semantically
3040
    /// drop the current value and replace it with a new one — it simply
3041
    /// modifies the bytes of the existing value.
3042
    ///
3043
    /// # Examples
3044
    ///
3045
    /// ```
3046
    /// # use zerocopy::FromZeros;
3047
    /// # use zerocopy_derive::*;
3048
    /// #
3049
    /// #[derive(FromZeros)]
3050
    /// #[repr(C)]
3051
    /// struct PacketHeader {
3052
    ///     src_port: [u8; 2],
3053
    ///     dst_port: [u8; 2],
3054
    ///     length: [u8; 2],
3055
    ///     checksum: [u8; 2],
3056
    /// }
3057
    ///
3058
    /// let mut header = PacketHeader {
3059
    ///     src_port: 100u16.to_be_bytes(),
3060
    ///     dst_port: 200u16.to_be_bytes(),
3061
    ///     length: 300u16.to_be_bytes(),
3062
    ///     checksum: 400u16.to_be_bytes(),
3063
    /// };
3064
    ///
3065
    /// header.zero();
3066
    ///
3067
    /// assert_eq!(header.src_port, [0, 0]);
3068
    /// assert_eq!(header.dst_port, [0, 0]);
3069
    /// assert_eq!(header.length, [0, 0]);
3070
    /// assert_eq!(header.checksum, [0, 0]);
3071
    /// ```
3072
    #[inline(always)]
3073
0
    fn zero(&mut self) {
3074
0
        let slf: *mut Self = self;
3075
0
        let len = mem::size_of_val(self);
3076
        // SAFETY:
3077
        // - `self` is guaranteed by the type system to be valid for writes of
3078
        //   size `size_of_val(self)`.
3079
        // - `u8`'s alignment is 1, and thus `self` is guaranteed to be aligned
3080
        //   as required by `u8`.
3081
        // - Since `Self: FromZeros`, the all-zeros instance is a valid instance
3082
        //   of `Self.`
3083
        //
3084
        // FIXME(#429): Add references to docs and quotes.
3085
0
        unsafe { ptr::write_bytes(slf.cast::<u8>(), 0, len) };
3086
0
    }
3087
3088
    /// Creates an instance of `Self` from zeroed bytes.
3089
    ///
3090
    /// # Examples
3091
    ///
3092
    /// ```
3093
    /// # use zerocopy::FromZeros;
3094
    /// # use zerocopy_derive::*;
3095
    /// #
3096
    /// #[derive(FromZeros)]
3097
    /// #[repr(C)]
3098
    /// struct PacketHeader {
3099
    ///     src_port: [u8; 2],
3100
    ///     dst_port: [u8; 2],
3101
    ///     length: [u8; 2],
3102
    ///     checksum: [u8; 2],
3103
    /// }
3104
    ///
3105
    /// let header: PacketHeader = FromZeros::new_zeroed();
3106
    ///
3107
    /// assert_eq!(header.src_port, [0, 0]);
3108
    /// assert_eq!(header.dst_port, [0, 0]);
3109
    /// assert_eq!(header.length, [0, 0]);
3110
    /// assert_eq!(header.checksum, [0, 0]);
3111
    /// ```
3112
    #[must_use = "has no side effects"]
3113
    #[inline(always)]
3114
0
    fn new_zeroed() -> Self
3115
0
    where
3116
0
        Self: Sized,
3117
    {
3118
        // SAFETY: `FromZeros` says that the all-zeros bit pattern is legal.
3119
0
        unsafe { mem::zeroed() }
3120
0
    }
3121
3122
    /// Creates a `Box<Self>` from zeroed bytes.
3123
    ///
3124
    /// This function is useful for allocating large values on the heap and
3125
    /// zero-initializing them, without ever creating a temporary instance of
3126
    /// `Self` on the stack. For example, `<[u8; 1048576]>::new_box_zeroed()`
3127
    /// will allocate `[u8; 1048576]` directly on the heap; it does not require
3128
    /// storing `[u8; 1048576]` in a temporary variable on the stack.
3129
    ///
3130
    /// On systems that use a heap implementation that supports allocating from
3131
    /// pre-zeroed memory, using `new_box_zeroed` (or related functions) may
3132
    /// have performance benefits.
3133
    ///
3134
    /// # Errors
3135
    ///
3136
    /// Returns an error on allocation failure. Allocation failure is guaranteed
3137
    /// never to cause a panic or an abort.
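    ///
    /// # Examples
    ///
    /// A minimal sketch (not part of the original docs), assuming the `alloc`
    /// feature is enabled; the one-mebibyte array size is only illustrative:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let buf: Box<[u8; 1048576]> = <[u8; 1048576]>::new_box_zeroed().unwrap();
    /// assert!(buf.iter().all(|&b| b == 0));
    /// ```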
3138
    #[must_use = "has no side effects (other than allocation)"]
3139
    #[cfg(any(feature = "alloc", test))]
3140
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3141
    #[inline]
3142
    fn new_box_zeroed() -> Result<Box<Self>, AllocError>
3143
    where
3144
        Self: Sized,
3145
    {
3146
        // If `T` is a ZST, then return a proper boxed instance of it. There is
3147
        // no allocation, but `Box` does require a correct dangling pointer.
3148
        let layout = Layout::new::<Self>();
3149
        if layout.size() == 0 {
3150
            // Construct the `Box` from a dangling pointer to avoid calling
3151
            // `Self::new_zeroed`. This ensures that stack space is never
3152
            // allocated for `Self` even on lower opt-levels where this branch
3153
            // might not get optimized out.
3154
3155
            // SAFETY: Per [1], when `T` is a ZST, `Box<T>`'s only validity
3156
            // requirements are that the pointer is non-null and sufficiently
3157
            // aligned. Per [2], `NonNull::dangling` produces a pointer which
3158
            // is sufficiently aligned. Since the produced pointer is a
3159
            // `NonNull`, it is non-null.
3160
            //
3161
            // [1] Per https://doc.rust-lang.org/1.81.0/std/boxed/index.html#memory-layout:
3162
            //
3163
            //   For zero-sized values, the `Box` pointer has to be non-null and sufficiently aligned.
3164
            //
3165
            // [2] Per https://doc.rust-lang.org/std/ptr/struct.NonNull.html#method.dangling:
3166
            //
3167
            //   Creates a new `NonNull` that is dangling, but well-aligned.
3168
            return Ok(unsafe { Box::from_raw(NonNull::dangling().as_ptr()) });
3169
        }
3170
3171
        // FIXME(#429): Add a "SAFETY" comment and remove this `allow`.
3172
        #[allow(clippy::undocumented_unsafe_blocks)]
3173
        let ptr = unsafe { alloc::alloc::alloc_zeroed(layout).cast::<Self>() };
3174
        if ptr.is_null() {
3175
            return Err(AllocError);
3176
        }
3177
        // FIXME(#429): Add a "SAFETY" comment and remove this `allow`.
3178
        #[allow(clippy::undocumented_unsafe_blocks)]
3179
        Ok(unsafe { Box::from_raw(ptr) })
3180
    }
3181
3182
    /// Creates a `Box<Self>` from zeroed bytes, with a trailing slice of `count` elements.
3183
    ///
3184
    /// This function is useful for allocating large slice DSTs on the
3185
    /// heap and zero-initializing them, without ever creating a temporary
3186
    /// instance of `Self` on the stack. For example,
3187
    /// `<[u8]>::new_box_zeroed_with_elems(1048576)` will allocate the slice directly on
3188
    /// the heap; it does not require storing the slice on the stack.
3189
    ///
3190
    /// On systems that use a heap implementation that supports allocating from
3191
    /// pre-zeroed memory, using `new_box_zeroed_with_elems` may have performance
3192
    /// benefits.
3193
    ///
3194
    /// If `Self` is a zero-sized type, then this function will return a
3195
    /// `Box<Self>` whose trailing slice has the correct `len`. Such a box cannot contain any
3196
    /// actual information, but its `len()` property will report the correct
3197
    /// value.
3198
    ///
3199
    /// # Errors
3200
    ///
3201
    /// Returns an error on allocation failure. Allocation failure is
3202
    /// guaranteed never to cause a panic or an abort.
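    ///
    /// # Examples
    ///
    /// A minimal sketch (not part of the original docs), assuming the `alloc`
    /// feature is enabled; the element type and count are only illustrative:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let nums: Box<[u16]> = <[u16]>::new_box_zeroed_with_elems(4).unwrap();
    /// assert_eq!(&*nums, &[0u16, 0, 0, 0][..]);
    /// ```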
3203
    #[must_use = "has no side effects (other than allocation)"]
3204
    #[cfg(feature = "alloc")]
3205
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3206
    #[inline]
3207
    fn new_box_zeroed_with_elems(count: usize) -> Result<Box<Self>, AllocError>
3208
    where
3209
        Self: KnownLayout<PointerMetadata = usize>,
3210
    {
3211
        // SAFETY: `alloc::alloc::alloc_zeroed` is a valid argument of
3212
        // `new_box`. The referent of the pointer returned by `alloc_zeroed`
3213
        // (and, consequently, the `Box` derived from it) is a valid instance of
3214
        // `Self`, because `Self` is `FromZeros`.
3215
        unsafe { crate::util::new_box(count, alloc::alloc::alloc_zeroed) }
3216
    }
3217
3218
    #[deprecated(since = "0.8.0", note = "renamed to `FromZeros::new_box_zeroed_with_elems`")]
3219
    #[doc(hidden)]
3220
    #[cfg(feature = "alloc")]
3221
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3222
    #[must_use = "has no side effects (other than allocation)"]
3223
    #[inline(always)]
3224
    fn new_box_slice_zeroed(len: usize) -> Result<Box<[Self]>, AllocError>
3225
    where
3226
        Self: Sized,
3227
    {
3228
        <[Self]>::new_box_zeroed_with_elems(len)
3229
    }
3230
3231
    /// Creates a `Vec<Self>` from zeroed bytes.
3232
    ///
3233
    /// This function is useful for allocating large `Vec`s and
3234
    /// zero-initializing them, without ever creating a temporary instance of
3235
    /// `[Self; _]` (or many temporary instances of `Self`) on the stack. For
3236
    /// example, `u8::new_vec_zeroed(1048576)` will allocate directly on the
3237
    /// heap; it does not require storing intermediate values on the stack.
3238
    ///
3239
    /// On systems that use a heap implementation that supports allocating from
3240
    /// pre-zeroed memory, using `new_vec_zeroed` may have performance benefits.
3241
    ///
3242
    /// If `Self` is a zero-sized type, then this function will return a
3243
    /// `Vec<Self>` that has the correct `len`. Such a `Vec` cannot contain any
3244
    /// actual information, but its `len()` property will report the correct
3245
    /// value.
3246
    ///
3247
    /// # Errors
3248
    ///
3249
    /// Returns an error on allocation failure. Allocation failure is
3250
    /// guaranteed never to cause a panic or an abort.
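    ///
    /// # Examples
    ///
    /// A minimal sketch (not part of the original docs), assuming the `alloc`
    /// feature is enabled:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let v: Vec<u64> = u64::new_vec_zeroed(8).unwrap();
    /// assert_eq!(v, vec![0u64; 8]);
    /// ```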
3251
    #[must_use = "has no side effects (other than allocation)"]
3252
    #[cfg(feature = "alloc")]
3253
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
3254
    #[inline(always)]
3255
    fn new_vec_zeroed(len: usize) -> Result<Vec<Self>, AllocError>
3256
    where
3257
        Self: Sized,
3258
    {
3259
        <[Self]>::new_box_zeroed_with_elems(len).map(Into::into)
3260
    }
3261
3262
    /// Extends a `Vec<Self>` by pushing `additional` new items onto the end of
3263
    /// the vector. The new items are initialized with zeros.
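    ///
    /// # Examples
    ///
    /// A minimal sketch (not part of the original docs), assuming the `alloc`
    /// feature is enabled:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let mut v = vec![1u8, 2, 3];
    /// u8::extend_vec_zeroed(&mut v, 2).unwrap();
    /// assert_eq!(v, [1, 2, 3, 0, 0]);
    /// ```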
3264
    #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
3265
    #[cfg(feature = "alloc")]
3266
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
3267
    #[inline(always)]
3268
    fn extend_vec_zeroed(v: &mut Vec<Self>, additional: usize) -> Result<(), AllocError>
3269
    where
3270
        Self: Sized,
3271
    {
3272
        // PANICS: We pass `v.len()` for `position`, so the `position > v.len()`
3273
        // panic condition is not satisfied.
3274
        <Self as FromZeros>::insert_vec_zeroed(v, v.len(), additional)
3275
    }
3276
3277
    /// Inserts `additional` new items into `Vec<Self>` at `position`. The new
3278
    /// items are initialized with zeros.
3279
    ///
3280
    /// # Panics
3281
    ///
3282
    /// Panics if `position > v.len()`.
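    ///
    /// # Examples
    ///
    /// A minimal sketch (not part of the original docs), assuming the `alloc`
    /// feature is enabled:
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// let mut v = vec![1u8, 2, 3];
    /// u8::insert_vec_zeroed(&mut v, 1, 2).unwrap();
    /// assert_eq!(v, [1, 0, 0, 2, 3]);
    /// ```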
3283
    #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
3284
    #[cfg(feature = "alloc")]
3285
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
3286
    #[inline]
3287
    fn insert_vec_zeroed(
3288
        v: &mut Vec<Self>,
3289
        position: usize,
3290
        additional: usize,
3291
    ) -> Result<(), AllocError>
3292
    where
3293
        Self: Sized,
3294
    {
3295
        assert!(position <= v.len());
3296
        // We only conditionally compile on versions on which `try_reserve` is
3297
        // stable; the Clippy lint is a false positive.
3298
        v.try_reserve(additional).map_err(|_| AllocError)?;
3299
        // SAFETY: The `try_reserve` call guarantees that these cannot overflow:
3300
        // * `ptr.add(position)`
3301
        // * `position + additional`
3302
        // * `v.len() + additional`
3303
        //
3304
        // `v.len() - position` cannot overflow because we asserted that
3305
        // `position <= v.len()`.
3306
        #[allow(clippy::multiple_unsafe_ops_per_block)]
3307
        unsafe {
3308
            // This is a potentially overlapping copy.
3309
            let ptr = v.as_mut_ptr();
3310
            #[allow(clippy::arithmetic_side_effects)]
3311
            ptr.add(position).copy_to(ptr.add(position + additional), v.len() - position);
3312
            ptr.add(position).write_bytes(0, additional);
3313
            #[allow(clippy::arithmetic_side_effects)]
3314
            v.set_len(v.len() + additional);
3315
        }
3316
3317
        Ok(())
3318
    }
3319
}
3320
3321
/// Analyzes whether a type is [`FromBytes`].
3322
///
3323
/// This derive analyzes, at compile time, whether the annotated type satisfies
3324
/// the [safety conditions] of `FromBytes` and implements `FromBytes` and its
3325
/// supertraits if it is sound to do so. This derive can be applied to structs,
3326
/// enums, and unions;
3327
/// e.g.:
3328
///
3329
/// ```
3330
/// # use zerocopy_derive::{FromBytes, FromZeros, Immutable};
3331
/// #[derive(FromBytes)]
3332
/// struct MyStruct {
3333
/// # /*
3334
///     ...
3335
/// # */
3336
/// }
3337
///
3338
/// #[derive(FromBytes)]
3339
/// #[repr(u8)]
3340
/// enum MyEnum {
3341
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
3342
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
3343
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
3344
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
3345
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
3346
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
3347
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
3348
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
3349
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
3350
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
3351
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
3352
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
3353
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
3354
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
3355
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
3356
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
3357
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
3358
/// #   VFF,
3359
/// # /*
3360
///     ...
3361
/// # */
3362
/// }
3363
///
3364
/// #[derive(FromBytes, Immutable)]
3365
/// union MyUnion {
3366
/// #   variant: u8,
3367
/// # /*
3368
///     ...
3369
/// # */
3370
/// }
3371
/// ```
3372
///
3373
/// [safety conditions]: trait@FromBytes#safety
3374
///
3375
/// # Analysis
3376
///
3377
/// *This section describes, roughly, the analysis performed by this derive to
3378
/// determine whether it is sound to implement `FromBytes` for a given type.
3379
/// Unless you are modifying the implementation of this derive, or attempting to
3380
/// manually implement `FromBytes` for a type yourself, you don't need to read
3381
/// this section.*
3382
///
3383
/// If a type has the following properties, then this derive can implement
3384
/// `FromBytes` for that type:
3385
///
3386
/// - If the type is a struct, all of its fields must be `FromBytes`.
3387
/// - If the type is an enum:
3388
///   - It must have a defined representation which is one of `u8`, `u16`, `i8`,
3389
///     or `i16`.
3390
///   - The maximum number of discriminants must be used (so that every possible
3391
///     bit pattern is a valid one).
3392
///   - Its fields must be `FromBytes`.
3393
///
3394
/// This analysis is subject to change. Unsafe code may *only* rely on the
3395
/// documented [safety conditions] of `FromBytes`, and must *not* rely on the
3396
/// implementation details of this derive.
3397
///
3398
/// ## Why isn't an explicit representation required for structs?
3399
///
3400
/// Neither this derive, nor the [safety conditions] of `FromBytes`, requires
3401
/// that structs are marked with `#[repr(C)]`.
3402
///
3403
/// Per the [Rust reference][reference],
3404
///
3405
/// > The representation of a type can change the padding between fields, but
3406
/// > does not change the layout of the fields themselves.
3407
///
3408
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
3409
///
3410
/// Since the layout of structs only consists of padding bytes and field bytes,
3411
/// a struct is soundly `FromBytes` if:
3412
/// 1. its padding is soundly `FromBytes`, and
3413
/// 2. its fields are soundly `FromBytes`.
3414
///
3415
/// The first condition is always satisfied: padding bytes do not have
3416
/// any validity constraints. A [discussion] of this question in the Unsafe Code
3417
/// Guidelines Working Group concluded that it would be virtually unimaginable
3418
/// for future versions of rustc to add validity constraints to padding bytes.
3419
///
3420
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
3421
///
3422
/// Whether a struct is soundly `FromBytes` therefore solely depends on whether
3423
/// its fields are `FromBytes`.
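///
/// For example, a minimal sketch (not part of the original docs) of a struct
/// that derives `FromBytes` with no explicit `repr`; the type and field names
/// are only illustrative:
///
/// ```
/// # use zerocopy_derive::FromBytes;
/// // Both fields are `FromBytes`, and padding bytes (if any) have no validity
/// // constraints, so no `#[repr(...)]` attribute is needed for soundness.
/// #[derive(FromBytes)]
/// struct Counters {
///     hits: u32,
///     misses: u64,
/// }
/// ```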
3424
#[cfg(any(feature = "derive", test))]
3425
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
3426
pub use zerocopy_derive::FromBytes;
3427
3428
/// Types for which any bit pattern is valid.
3429
///
3430
/// Any memory region of the appropriate length which contains initialized bytes
3431
/// can be viewed as any `FromBytes` type with no runtime overhead. This is
3432
/// useful for efficiently parsing bytes as structured data.
3433
///
3434
/// # Warning: Padding bytes
3435
///
3436
/// Note that, when a value is moved or copied, only the non-padding bytes of
3437
/// that value are guaranteed to be preserved. It is unsound to assume that
3438
/// values written to padding bytes are preserved after a move or copy. For
3439
/// example, the following is unsound:
3440
///
3441
/// ```rust,no_run
3442
/// use core::mem::{size_of, transmute};
3443
/// use zerocopy::FromZeros;
3444
/// # use zerocopy_derive::*;
3445
///
3446
/// // Assume `Foo` is a type with padding bytes.
3447
/// #[derive(FromZeros, Default)]
3448
/// struct Foo {
3449
/// # /*
3450
///     ...
3451
/// # */
3452
/// }
3453
///
3454
/// let mut foo: Foo = Foo::default();
3455
/// FromZeros::zero(&mut foo);
3456
/// // UNSOUND: Although `FromZeros::zero` writes zeros to all bytes of `foo`,
3457
/// // those writes are not guaranteed to be preserved in padding bytes when
3458
/// // `foo` is moved, so this may expose padding bytes as `u8`s.
3459
/// let foo_bytes: [u8; size_of::<Foo>()] = unsafe { transmute(foo) };
3460
/// ```
3461
///
3462
/// # Implementation
3463
///
3464
/// **Do not implement this trait yourself!** Instead, use
3465
/// [`#[derive(FromBytes)]`][derive]; e.g.:
3466
///
3467
/// ```
3468
/// # use zerocopy_derive::{FromBytes, Immutable};
3469
/// #[derive(FromBytes)]
3470
/// struct MyStruct {
3471
/// # /*
3472
///     ...
3473
/// # */
3474
/// }
3475
///
3476
/// #[derive(FromBytes)]
3477
/// #[repr(u8)]
3478
/// enum MyEnum {
3479
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
3480
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
3481
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
3482
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
3483
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
3484
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
3485
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
3486
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
3487
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
3488
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
3489
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
3490
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
3491
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
3492
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
3493
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
3494
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
3495
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
3496
/// #   VFF,
3497
/// # /*
3498
///     ...
3499
/// # */
3500
/// }
3501
///
3502
/// #[derive(FromBytes, Immutable)]
3503
/// union MyUnion {
3504
/// #   variant: u8,
3505
/// # /*
3506
///     ...
3507
/// # */
3508
/// }
3509
/// ```
3510
///
3511
/// This derive performs a sophisticated, compile-time safety analysis to
3512
/// determine whether a type is `FromBytes`.
3513
///
3514
/// # Safety
3515
///
3516
/// *This section describes what is required in order for `T: FromBytes`, and
3517
/// what unsafe code may assume of such types. If you don't plan on implementing
3518
/// `FromBytes` manually, and you don't plan on writing unsafe code that
3519
/// operates on `FromBytes` types, then you don't need to read this section.*
3520
///
3521
/// If `T: FromBytes`, then unsafe code may assume that it is sound to produce a
3522
/// `T` whose bytes are initialized to any sequence of valid `u8`s (in other
3523
/// words, any byte value which is not uninitialized). If a type is marked as
3524
/// `FromBytes` which violates this contract, it may cause undefined behavior.
3525
///
3526
/// `#[derive(FromBytes)]` only permits [types which satisfy these
3527
/// requirements][derive-analysis].
3528
///
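/// For example, here is a minimal sketch (not part of the original docs) of
/// generic parsing code that relies on this guarantee; `parse_copy` is a
/// hypothetical helper, not a zerocopy API:
///
/// ```
/// # use zerocopy::FromBytes;
/// fn parse_copy<T: FromBytes>(bytes: &[u8]) -> Option<T> {
///     // Sound for any `T: FromBytes`: every initialized byte sequence of the
///     // right length is a valid `T`.
///     T::read_from_bytes(bytes).ok()
/// }
///
/// assert_eq!(parse_copy::<[u8; 4]>(&[1, 2, 3, 4]), Some([1, 2, 3, 4]));
/// ```
///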
3529
#[cfg_attr(
3530
    feature = "derive",
3531
    doc = "[derive]: zerocopy_derive::FromBytes",
3532
    doc = "[derive-analysis]: zerocopy_derive::FromBytes#analysis"
3533
)]
3534
#[cfg_attr(
3535
    not(feature = "derive"),
3536
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html"),
3537
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html#analysis"),
3538
)]
3539
#[cfg_attr(
3540
    not(no_zerocopy_diagnostic_on_unimplemented_1_78_0),
3541
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromBytes)]` to `{Self}`")
3542
)]
3543
pub unsafe trait FromBytes: FromZeros {
3544
    // The `Self: Sized` bound makes it so that `FromBytes` is still object
3545
    // safe.
3546
    #[doc(hidden)]
3547
    fn only_derive_is_allowed_to_implement_this_trait()
3548
    where
3549
        Self: Sized;
3550
3551
    /// Interprets the given `source` as a `&Self`.
3552
    ///
3553
    /// This method attempts to return a reference to `source` interpreted as a
3554
    /// `Self`. If the length of `source` is not a [valid size of
3555
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3556
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3557
    /// [infallibly discard the alignment error][size-error-from].
3558
    ///
3559
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3560
    ///
3561
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3562
    /// [self-unaligned]: Unaligned
3563
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3564
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3565
    ///
3566
    /// # Compile-Time Assertions
3567
    ///
3568
    /// This method cannot yet be used on unsized types whose dynamically-sized
3569
    /// component is zero-sized. Attempting to use this method on such types
3570
    /// results in a compile-time assertion error; e.g.:
3571
    ///
3572
    /// ```compile_fail,E0080
3573
    /// use zerocopy::*;
3574
    /// # use zerocopy_derive::*;
3575
    ///
3576
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3577
    /// #[repr(C)]
3578
    /// struct ZSTy {
3579
    ///     leading_sized: u16,
3580
    ///     trailing_dst: [()],
3581
    /// }
3582
    ///
3583
    /// let _ = ZSTy::ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
3584
    /// ```
3585
    ///
3586
    /// # Examples
3587
    ///
3588
    /// ```
3589
    /// use zerocopy::FromBytes;
3590
    /// # use zerocopy_derive::*;
3591
    ///
3592
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3593
    /// #[repr(C)]
3594
    /// struct PacketHeader {
3595
    ///     src_port: [u8; 2],
3596
    ///     dst_port: [u8; 2],
3597
    ///     length: [u8; 2],
3598
    ///     checksum: [u8; 2],
3599
    /// }
3600
    ///
3601
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3602
    /// #[repr(C)]
3603
    /// struct Packet {
3604
    ///     header: PacketHeader,
3605
    ///     body: [u8],
3606
    /// }
3607
    ///
3608
    /// // These bytes encode a `Packet`.
3609
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11][..];
3610
    ///
3611
    /// let packet = Packet::ref_from_bytes(bytes).unwrap();
3612
    ///
3613
    /// assert_eq!(packet.header.src_port, [0, 1]);
3614
    /// assert_eq!(packet.header.dst_port, [2, 3]);
3615
    /// assert_eq!(packet.header.length, [4, 5]);
3616
    /// assert_eq!(packet.header.checksum, [6, 7]);
3617
    /// assert_eq!(packet.body, [8, 9, 10, 11]);
3618
    /// ```
3619
    #[must_use = "has no side effects"]
3620
    #[inline]
3621
0
    fn ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>>
3622
0
    where
3623
0
        Self: KnownLayout + Immutable,
3624
    {
3625
0
        static_assert_dst_is_not_zst!(Self);
3626
0
        match Ptr::from_ref(source).try_cast_into_no_leftover::<_, BecauseImmutable>(None) {
3627
0
            Ok(ptr) => Ok(ptr.recall_validity().as_ref()),
3628
0
            Err(err) => Err(err.map_src(|src| src.as_ref())),
3629
        }
3630
0
    }
3631
3632
    /// Interprets the prefix of the given `source` as a `&Self` without
3633
    /// copying.
3634
    ///
3635
    /// This method computes the [largest possible size of `Self`][valid-size]
3636
    /// that can fit in the leading bytes of `source`, then attempts to return
3637
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3638
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
3639
    /// is not appropriately aligned, this returns `Err`. If [`Self:
3640
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3641
    /// error][size-error-from].
3642
    ///
3643
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3644
    ///
3645
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3646
    /// [self-unaligned]: Unaligned
3647
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3648
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3649
    ///
3650
    /// # Compile-Time Assertions
3651
    ///
3652
    /// This method cannot yet be used on unsized types whose dynamically-sized
3653
    /// component is zero-sized. See [`ref_from_prefix_with_elems`], which does
3654
    /// support such types. Attempting to use this method on such types results
3655
    /// in a compile-time assertion error; e.g.:
3656
    ///
3657
    /// ```compile_fail,E0080
3658
    /// use zerocopy::*;
3659
    /// # use zerocopy_derive::*;
3660
    ///
3661
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3662
    /// #[repr(C)]
3663
    /// struct ZSTy {
3664
    ///     leading_sized: u16,
3665
    ///     trailing_dst: [()],
3666
    /// }
3667
    ///
3668
    /// let _ = ZSTy::ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
3669
    /// ```
3670
    ///
3671
    /// [`ref_from_prefix_with_elems`]: FromBytes::ref_from_prefix_with_elems
3672
    ///
3673
    /// # Examples
3674
    ///
3675
    /// ```
3676
    /// use zerocopy::FromBytes;
3677
    /// # use zerocopy_derive::*;
3678
    ///
3679
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3680
    /// #[repr(C)]
3681
    /// struct PacketHeader {
3682
    ///     src_port: [u8; 2],
3683
    ///     dst_port: [u8; 2],
3684
    ///     length: [u8; 2],
3685
    ///     checksum: [u8; 2],
3686
    /// }
3687
    ///
3688
    /// #[derive(FromBytes, KnownLayout, Immutable)]
3689
    /// #[repr(C)]
3690
    /// struct Packet {
3691
    ///     header: PacketHeader,
3692
    ///     body: [[u8; 2]],
3693
    /// }
3694
    ///
3695
    /// // These are more bytes than are needed to encode a `Packet`.
3696
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14][..];
3697
    ///
3698
    /// let (packet, suffix) = Packet::ref_from_prefix(bytes).unwrap();
3699
    ///
3700
    /// assert_eq!(packet.header.src_port, [0, 1]);
3701
    /// assert_eq!(packet.header.dst_port, [2, 3]);
3702
    /// assert_eq!(packet.header.length, [4, 5]);
3703
    /// assert_eq!(packet.header.checksum, [6, 7]);
3704
    /// assert_eq!(packet.body, [[8, 9], [10, 11], [12, 13]]);
3705
    /// assert_eq!(suffix, &[14u8][..]);
3706
    /// ```
3707
    #[must_use = "has no side effects"]
3708
    #[inline]
3709
0
    fn ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
3710
0
    where
3711
0
        Self: KnownLayout + Immutable,
3712
    {
3713
0
        static_assert_dst_is_not_zst!(Self);
3714
0
        ref_from_prefix_suffix(source, None, CastType::Prefix)
3715
0
    }
3716
3717
    /// Interprets the suffix of the given bytes as a `&Self`.
3718
    ///
3719
    /// This method computes the [largest possible size of `Self`][valid-size]
3720
    /// that can fit in the trailing bytes of `source`, then attempts to return
3721
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3722
    /// to the preceding bytes. If there are insufficient bytes, or if that
3723
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
3724
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
3725
    /// alignment error][size-error-from].
3726
    ///
3727
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3728
    ///
3729
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3730
    /// [self-unaligned]: Unaligned
3731
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3732
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3733
    ///
3734
    /// # Compile-Time Assertions
3735
    ///
3736
    /// This method cannot yet be used on unsized types whose dynamically-sized
3737
    /// component is zero-sized. See [`ref_from_suffix_with_elems`], which does
3738
    /// support such types. Attempting to use this method on such types results
3739
    /// in a compile-time assertion error; e.g.:
3740
    ///
3741
    /// ```compile_fail,E0080
3742
    /// use zerocopy::*;
3743
    /// # use zerocopy_derive::*;
3744
    ///
3745
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3746
    /// #[repr(C)]
3747
    /// struct ZSTy {
3748
    ///     leading_sized: u16,
3749
    ///     trailing_dst: [()],
3750
    /// }
3751
    ///
3752
    /// let _ = ZSTy::ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
3753
    /// ```
3754
    ///
3755
    /// [`ref_from_suffix_with_elems`]: FromBytes::ref_from_suffix_with_elems
3756
    ///
3757
    /// # Examples
3758
    ///
3759
    /// ```
3760
    /// use zerocopy::FromBytes;
3761
    /// # use zerocopy_derive::*;
3762
    ///
3763
    /// #[derive(FromBytes, Immutable, KnownLayout)]
3764
    /// #[repr(C)]
3765
    /// struct PacketTrailer {
3766
    ///     frame_check_sequence: [u8; 4],
3767
    /// }
3768
    ///
3769
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
3770
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3771
    ///
3772
    /// let (prefix, trailer) = PacketTrailer::ref_from_suffix(bytes).unwrap();
3773
    ///
3774
    /// assert_eq!(prefix, &[0, 1, 2, 3, 4, 5][..]);
3775
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
3776
    /// ```
3777
    #[must_use = "has no side effects"]
3778
    #[inline]
3779
0
    fn ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
3780
0
    where
3781
0
        Self: Immutable + KnownLayout,
3782
    {
3783
0
        static_assert_dst_is_not_zst!(Self);
3784
0
        ref_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
3785
0
    }
3786
3787
    /// Interprets the given `source` as a `&mut Self`.
3788
    ///
3789
    /// This method attempts to return a reference to `source` interpreted as a
3790
    /// `Self`. If the length of `source` is not a [valid size of
3791
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
3792
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
3793
    /// [infallibly discard the alignment error][size-error-from].
3794
    ///
3795
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3796
    ///
3797
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3798
    /// [self-unaligned]: Unaligned
3799
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3800
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3801
    ///
3802
    /// # Compile-Time Assertions
3803
    ///
3804
    /// This method cannot yet be used on unsized types whose dynamically-sized
3805
    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
3806
    /// support such types. Attempting to use this method on such types results
3807
    /// in a compile-time assertion error; e.g.:
3808
    ///
3809
    /// ```compile_fail,E0080
3810
    /// use zerocopy::*;
3811
    /// # use zerocopy_derive::*;
3812
    ///
3813
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3814
    /// #[repr(C, packed)]
3815
    /// struct ZSTy {
3816
    ///     leading_sized: [u8; 2],
3817
    ///     trailing_dst: [()],
3818
    /// }
3819
    ///
3820
    /// let mut source = [85, 85];
3821
    /// let _ = ZSTy::mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
3822
    /// ```
3823
    ///
3824
    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
3825
    ///
3826
    /// # Examples
3827
    ///
3828
    /// ```
3829
    /// use zerocopy::FromBytes;
3830
    /// # use zerocopy_derive::*;
3831
    ///
3832
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3833
    /// #[repr(C)]
3834
    /// struct PacketHeader {
3835
    ///     src_port: [u8; 2],
3836
    ///     dst_port: [u8; 2],
3837
    ///     length: [u8; 2],
3838
    ///     checksum: [u8; 2],
3839
    /// }
3840
    ///
3841
    /// // These bytes encode a `PacketHeader`.
3842
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
3843
    ///
3844
    /// let header = PacketHeader::mut_from_bytes(bytes).unwrap();
3845
    ///
3846
    /// assert_eq!(header.src_port, [0, 1]);
3847
    /// assert_eq!(header.dst_port, [2, 3]);
3848
    /// assert_eq!(header.length, [4, 5]);
3849
    /// assert_eq!(header.checksum, [6, 7]);
3850
    ///
3851
    /// header.checksum = [0, 0];
3852
    ///
3853
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0]);
3854
    /// ```
3855
    #[must_use = "has no side effects"]
3856
    #[inline]
3857
0
    fn mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, CastError<&mut [u8], Self>>
3858
0
    where
3859
0
        Self: IntoBytes + KnownLayout,
3860
    {
3861
0
        static_assert_dst_is_not_zst!(Self);
3862
0
        match Ptr::from_mut(source).try_cast_into_no_leftover::<_, BecauseExclusive>(None) {
3863
0
            Ok(ptr) => Ok(ptr.recall_validity::<_, (_, (_, _))>().as_mut()),
3864
0
            Err(err) => Err(err.map_src(|src| src.as_mut())),
3865
        }
3866
0
    }
3867
3868
    /// Interprets the prefix of the given `source` as a `&mut Self` without
3869
    /// copying.
3870
    ///
3871
    /// This method computes the [largest possible size of `Self`][valid-size]
3872
    /// that can fit in the leading bytes of `source`, then attempts to return
3873
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3874
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
3875
    /// is not appropriately aligned, this returns `Err`. If [`Self:
3876
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
3877
    /// error][size-error-from].
3878
    ///
3879
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3880
    ///
3881
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3882
    /// [self-unaligned]: Unaligned
3883
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3884
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3885
    ///
3886
    /// # Compile-Time Assertions
3887
    ///
3888
    /// This method cannot yet be used on unsized types whose dynamically-sized
3889
    /// component is zero-sized. See [`mut_from_suffix_with_elems`], which does
3890
    /// support such types. Attempting to use this method on such types results
3891
    /// in a compile-time assertion error; e.g.:
3892
    ///
3893
    /// ```compile_fail,E0080
3894
    /// use zerocopy::*;
3895
    /// # use zerocopy_derive::*;
3896
    ///
3897
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3898
    /// #[repr(C, packed)]
3899
    /// struct ZSTy {
3900
    ///     leading_sized: [u8; 2],
3901
    ///     trailing_dst: [()],
3902
    /// }
3903
    ///
3904
    /// let mut source = [85, 85];
3905
    /// let _ = ZSTy::mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
3906
    /// ```
3907
    ///
3908
    /// [`mut_from_suffix_with_elems`]: FromBytes::mut_from_suffix_with_elems
3909
    ///
3910
    /// # Examples
3911
    ///
3912
    /// ```
3913
    /// use zerocopy::FromBytes;
3914
    /// # use zerocopy_derive::*;
3915
    ///
3916
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3917
    /// #[repr(C)]
3918
    /// struct PacketHeader {
3919
    ///     src_port: [u8; 2],
3920
    ///     dst_port: [u8; 2],
3921
    ///     length: [u8; 2],
3922
    ///     checksum: [u8; 2],
3923
    /// }
3924
    ///
3925
    /// // These are more bytes than are needed to encode a `PacketHeader`.
3926
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
3927
    ///
3928
    /// let (header, body) = PacketHeader::mut_from_prefix(bytes).unwrap();
3929
    ///
3930
    /// assert_eq!(header.src_port, [0, 1]);
3931
    /// assert_eq!(header.dst_port, [2, 3]);
3932
    /// assert_eq!(header.length, [4, 5]);
3933
    /// assert_eq!(header.checksum, [6, 7]);
3934
    /// assert_eq!(body, &[8, 9][..]);
3935
    ///
3936
    /// header.checksum = [0, 0];
3937
    /// body.fill(1);
3938
    ///
3939
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0, 1, 1]);
3940
    /// ```
3941
    #[must_use = "has no side effects"]
3942
    #[inline]
3943
0
    fn mut_from_prefix(
3944
0
        source: &mut [u8],
3945
0
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
3946
0
    where
3947
0
        Self: IntoBytes + KnownLayout,
3948
    {
3949
0
        static_assert_dst_is_not_zst!(Self);
3950
0
        mut_from_prefix_suffix(source, None, CastType::Prefix)
3951
0
    }
3952
3953
    /// Interprets the suffix of the given `source` as a `&mut Self` without
3954
    /// copying.
3955
    ///
3956
    /// This method computes the [largest possible size of `Self`][valid-size]
3957
    /// that can fit in the trailing bytes of `source`, then attempts to return
3958
    /// both a reference to those bytes interpreted as a `Self`, and a reference
3959
    /// to the preceding bytes. If there are insufficient bytes, or if that
3960
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
3961
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
3962
    /// alignment error][size-error-from].
3963
    ///
3964
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
3965
    ///
3966
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
3967
    /// [self-unaligned]: Unaligned
3968
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
3969
    /// [slice-dst]: KnownLayout#dynamically-sized-types
3970
    ///
3971
    /// # Compile-Time Assertions
3972
    ///
3973
    /// This method cannot yet be used on unsized types whose dynamically-sized
3974
    /// component is zero-sized. Attempting to use this method on such types
3975
    /// results in a compile-time assertion error; e.g.:
3976
    ///
3977
    /// ```compile_fail,E0080
3978
    /// use zerocopy::*;
3979
    /// # use zerocopy_derive::*;
3980
    ///
3981
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
3982
    /// #[repr(C, packed)]
3983
    /// struct ZSTy {
3984
    ///     leading_sized: [u8; 2],
3985
    ///     trailing_dst: [()],
3986
    /// }
3987
    ///
3988
    /// let mut source = [85, 85];
3989
    /// let _ = ZSTy::mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
3990
    /// ```
3991
    ///
3992
    /// # Examples
3993
    ///
3994
    /// ```
3995
    /// use zerocopy::FromBytes;
3996
    /// # use zerocopy_derive::*;
3997
    ///
3998
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
3999
    /// #[repr(C)]
4000
    /// struct PacketTrailer {
4001
    ///     frame_check_sequence: [u8; 4],
4002
    /// }
4003
    ///
4004
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
4005
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4006
    ///
4007
    /// let (prefix, trailer) = PacketTrailer::mut_from_suffix(bytes).unwrap();
4008
    ///
4009
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
4010
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
4011
    ///
4012
    /// prefix.fill(0);
4013
    /// trailer.frame_check_sequence.fill(1);
4014
    ///
4015
    /// assert_eq!(bytes, [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]);
4016
    /// ```
4017
    #[must_use = "has no side effects"]
4018
    #[inline]
4019
0
    fn mut_from_suffix(
4020
0
        source: &mut [u8],
4021
0
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
4022
0
    where
4023
0
        Self: IntoBytes + KnownLayout,
4024
    {
4025
0
        static_assert_dst_is_not_zst!(Self);
4026
0
        mut_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
4027
0
    }
4028
4029
    /// Interprets the given `source` as a `&Self` with a DST length equal to
4030
    /// `count`.
4031
    ///
4032
    /// This method attempts to return a reference to `source` interpreted as a
4033
    /// `Self` with `count` trailing elements. If the length of `source` is not
4034
    /// equal to the size of `Self` with `count` elements, or if `source` is not
4035
    /// appropriately aligned, this returns `Err`. If [`Self:
4036
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4037
    /// error][size-error-from].
4038
    ///
4039
    /// [self-unaligned]: Unaligned
4040
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4041
    ///
4042
    /// # Examples
4043
    ///
4044
    /// ```
4045
    /// use zerocopy::FromBytes;
4046
    /// # use zerocopy_derive::*;
4047
    ///
4048
    /// # #[derive(Debug, PartialEq, Eq)]
4049
    /// #[derive(FromBytes, Immutable)]
4050
    /// #[repr(C)]
4051
    /// struct Pixel {
4052
    ///     r: u8,
4053
    ///     g: u8,
4054
    ///     b: u8,
4055
    ///     a: u8,
4056
    /// }
4057
    ///
4058
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
4059
    ///
4060
    /// let pixels = <[Pixel]>::ref_from_bytes_with_elems(bytes, 2).unwrap();
4061
    ///
4062
    /// assert_eq!(pixels, &[
4063
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4064
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4065
    /// ]);
4066
    ///
4067
    /// ```
4068
    ///
4069
    /// Since an explicit `count` is provided, this method supports types with
4070
    /// zero-sized trailing slice elements. Methods such as [`ref_from_bytes`]
4071
    /// which do not take an explicit count do not support such types.
4072
    ///
4073
    /// ```
4074
    /// use zerocopy::*;
4075
    /// # use zerocopy_derive::*;
4076
    ///
4077
    /// #[derive(FromBytes, Immutable, KnownLayout)]
4078
    /// #[repr(C)]
4079
    /// struct ZSTy {
4080
    ///     leading_sized: [u8; 2],
4081
    ///     trailing_dst: [()],
4082
    /// }
4083
    ///
4084
    /// let src = &[85, 85][..];
4085
    /// let zsty = ZSTy::ref_from_bytes_with_elems(src, 42).unwrap();
4086
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4087
    /// ```
4088
    ///
4089
    /// [`ref_from_bytes`]: FromBytes::ref_from_bytes
4090
    #[must_use = "has no side effects"]
4091
    #[inline]
4092
0
    fn ref_from_bytes_with_elems(
4093
0
        source: &[u8],
4094
0
        count: usize,
4095
0
    ) -> Result<&Self, CastError<&[u8], Self>>
4096
0
    where
4097
0
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4098
    {
4099
0
        let source = Ptr::from_ref(source);
4100
0
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
4101
0
        match maybe_slf {
4102
0
            Ok(slf) => Ok(slf.recall_validity().as_ref()),
4103
0
            Err(err) => Err(err.map_src(|s| s.as_ref())),
4104
        }
4105
0
    }
4106
4107
    /// Interprets the prefix of the given `source` as a DST `&Self` with length
4108
    /// equal to `count`.
4109
    ///
4110
    /// This method attempts to return a reference to the prefix of `source`
4111
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4112
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
4113
    /// is not appropriately aligned, this returns `Err`. If [`Self:
4114
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4115
    /// error][size-error-from].
4116
    ///
4117
    /// [self-unaligned]: Unaligned
4118
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4119
    ///
4120
    /// # Examples
4121
    ///
4122
    /// ```
4123
    /// use zerocopy::FromBytes;
4124
    /// # use zerocopy_derive::*;
4125
    ///
4126
    /// # #[derive(Debug, PartialEq, Eq)]
4127
    /// #[derive(FromBytes, Immutable)]
4128
    /// #[repr(C)]
4129
    /// struct Pixel {
4130
    ///     r: u8,
4131
    ///     g: u8,
4132
    ///     b: u8,
4133
    ///     a: u8,
4134
    /// }
4135
    ///
4136
    /// // These are more bytes than are needed to encode two `Pixel`s.
4137
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4138
    ///
4139
    /// let (pixels, suffix) = <[Pixel]>::ref_from_prefix_with_elems(bytes, 2).unwrap();
4140
    ///
4141
    /// assert_eq!(pixels, &[
4142
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4143
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4144
    /// ]);
4145
    ///
4146
    /// assert_eq!(suffix, &[8, 9]);
4147
    /// ```
4148
    ///
4149
    /// Since an explicit `count` is provided, this method supports types with
4150
    /// zero-sized trailing slice elements. Methods such as [`ref_from_prefix`]
4151
    /// which do not take an explicit count do not support such types.
4152
    ///
4153
    /// ```
4154
    /// use zerocopy::*;
4155
    /// # use zerocopy_derive::*;
4156
    ///
4157
    /// #[derive(FromBytes, Immutable, KnownLayout)]
4158
    /// #[repr(C)]
4159
    /// struct ZSTy {
4160
    ///     leading_sized: [u8; 2],
4161
    ///     trailing_dst: [()],
4162
    /// }
4163
    ///
4164
    /// let src = &[85, 85][..];
4165
    /// let (zsty, _) = ZSTy::ref_from_prefix_with_elems(src, 42).unwrap();
4166
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4167
    /// ```
4168
    ///
4169
    /// [`ref_from_prefix`]: FromBytes::ref_from_prefix
4170
    #[must_use = "has no side effects"]
4171
    #[inline]
4172
0
    fn ref_from_prefix_with_elems(
4173
0
        source: &[u8],
4174
0
        count: usize,
4175
0
    ) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
4176
0
    where
4177
0
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4178
    {
4179
0
        ref_from_prefix_suffix(source, Some(count), CastType::Prefix)
4180
0
    }
4181
4182
    /// Interprets the suffix of the given `source` as a DST `&Self` with length
4183
    /// equal to `count`.
4184
    ///
4185
    /// This method attempts to return a reference to the suffix of `source`
4186
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4187
    /// to the preceding bytes. If there are insufficient bytes, or if that
4188
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
4189
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
4190
    /// alignment error][size-error-from].
4191
    ///
4192
    /// [self-unaligned]: Unaligned
4193
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4194
    ///
4195
    /// # Examples
4196
    ///
4197
    /// ```
4198
    /// use zerocopy::FromBytes;
4199
    /// # use zerocopy_derive::*;
4200
    ///
4201
    /// # #[derive(Debug, PartialEq, Eq)]
4202
    /// #[derive(FromBytes, Immutable)]
4203
    /// #[repr(C)]
4204
    /// struct Pixel {
4205
    ///     r: u8,
4206
    ///     g: u8,
4207
    ///     b: u8,
4208
    ///     a: u8,
4209
    /// }
4210
    ///
4211
    /// // These are more bytes than are needed to encode two `Pixel`s.
4212
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4213
    ///
4214
    /// let (prefix, pixels) = <[Pixel]>::ref_from_suffix_with_elems(bytes, 2).unwrap();
4215
    ///
4216
    /// assert_eq!(prefix, &[0, 1]);
4217
    ///
4218
    /// assert_eq!(pixels, &[
4219
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
4220
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
4221
    /// ]);
4222
    /// ```
4223
    ///
4224
    /// Since an explicit `count` is provided, this method supports types with
4225
    /// zero-sized trailing slice elements. Methods such as [`ref_from_suffix`]
4226
    /// which do not take an explicit count do not support such types.
4227
    ///
4228
    /// ```
4229
    /// use zerocopy::*;
4230
    /// # use zerocopy_derive::*;
4231
    ///
4232
    /// #[derive(FromBytes, Immutable, KnownLayout)]
4233
    /// #[repr(C)]
4234
    /// struct ZSTy {
4235
    ///     leading_sized: [u8; 2],
4236
    ///     trailing_dst: [()],
4237
    /// }
4238
    ///
4239
    /// let src = &[85, 85][..];
4240
    /// let (_, zsty) = ZSTy::ref_from_suffix_with_elems(src, 42).unwrap();
4241
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4242
    /// ```
4243
    ///
4244
    /// [`ref_from_suffix`]: FromBytes::ref_from_suffix
4245
    #[must_use = "has no side effects"]
4246
    #[inline]
4247
0
    fn ref_from_suffix_with_elems(
4248
0
        source: &[u8],
4249
0
        count: usize,
4250
0
    ) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
4251
0
    where
4252
0
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
4253
    {
4254
0
        ref_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
4255
0
    }
4256
4257
    /// Interprets the given `source` as a `&mut Self` with a DST length equal
4258
    /// to `count`.
4259
    ///
4260
    /// This method attempts to return a reference to `source` interpreted as a
4261
    /// `Self` with `count` trailing elements. If the length of `source` is not
4262
    /// equal to the size of `Self` with `count` elements, or if `source` is not
4263
    /// appropriately aligned, this returns `Err`. If [`Self:
4264
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4265
    /// error][size-error-from].
4266
    ///
4267
    /// [self-unaligned]: Unaligned
4268
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4269
    ///
4270
    /// # Examples
4271
    ///
4272
    /// ```
4273
    /// use zerocopy::FromBytes;
4274
    /// # use zerocopy_derive::*;
4275
    ///
4276
    /// # #[derive(Debug, PartialEq, Eq)]
4277
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
4278
    /// #[repr(C)]
4279
    /// struct Pixel {
4280
    ///     r: u8,
4281
    ///     g: u8,
4282
    ///     b: u8,
4283
    ///     a: u8,
4284
    /// }
4285
    ///
4286
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
4287
    ///
4288
    /// let pixels = <[Pixel]>::mut_from_bytes_with_elems(bytes, 2).unwrap();
4289
    ///
4290
    /// assert_eq!(pixels, &[
4291
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4292
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4293
    /// ]);
4294
    ///
4295
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4296
    ///
4297
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0]);
4298
    /// ```
4299
    ///
4300
    /// Since an explicit `count` is provided, this method supports types with
4301
    /// zero-sized trailing slice elements. Methods such as [`mut_from`] which
4302
    /// do not take an explicit count do not support such types.
4303
    ///
4304
    /// ```
4305
    /// use zerocopy::*;
4306
    /// # use zerocopy_derive::*;
4307
    ///
4308
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4309
    /// #[repr(C, packed)]
4310
    /// struct ZSTy {
4311
    ///     leading_sized: [u8; 2],
4312
    ///     trailing_dst: [()],
4313
    /// }
4314
    ///
4315
    /// let src = &mut [85, 85][..];
4316
    /// let zsty = ZSTy::mut_from_bytes_with_elems(src, 42).unwrap();
4317
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4318
    /// ```
4319
    ///
4320
    /// [`mut_from`]: FromBytes::mut_from
4321
    #[must_use = "has no side effects"]
4322
    #[inline]
4323
0
    fn mut_from_bytes_with_elems(
4324
0
        source: &mut [u8],
4325
0
        count: usize,
4326
0
    ) -> Result<&mut Self, CastError<&mut [u8], Self>>
4327
0
    where
4328
0
        Self: IntoBytes + KnownLayout<PointerMetadata = usize> + Immutable,
4329
    {
4330
0
        let source = Ptr::from_mut(source);
4331
0
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
4332
0
        match maybe_slf {
4333
0
            Ok(slf) => Ok(slf
4334
0
                .recall_validity::<_, (_, (_, (BecauseExclusive, BecauseExclusive)))>()
4335
0
                .as_mut()),
4336
0
            Err(err) => Err(err.map_src(|s| s.as_mut())),
4337
        }
4338
0
    }
4339
4340
    /// Interprets the prefix of the given `source` as a `&mut Self` with DST
4341
    /// length equal to `count`.
4342
    ///
4343
    /// This method attempts to return a reference to the prefix of `source`
4344
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4345
    /// to the preceding bytes. If there are insufficient bytes, or if `source`
4346
    /// is not appropriately aligned, this returns `Err`. If [`Self:
4347
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
4348
    /// error][size-error-from].
4349
    ///
4350
    /// [self-unaligned]: Unaligned
4351
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4352
    ///
4353
    /// # Examples
4354
    ///
4355
    /// ```
4356
    /// use zerocopy::FromBytes;
4357
    /// # use zerocopy_derive::*;
4358
    ///
4359
    /// # #[derive(Debug, PartialEq, Eq)]
4360
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
4361
    /// #[repr(C)]
4362
    /// struct Pixel {
4363
    ///     r: u8,
4364
    ///     g: u8,
4365
    ///     b: u8,
4366
    ///     a: u8,
4367
    /// }
4368
    ///
4369
    /// // These are more bytes than are needed to encode two `Pixel`s.
4370
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4371
    ///
4372
    /// let (pixels, suffix) = <[Pixel]>::mut_from_prefix_with_elems(bytes, 2).unwrap();
4373
    ///
4374
    /// assert_eq!(pixels, &[
4375
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
4376
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
4377
    /// ]);
4378
    ///
4379
    /// assert_eq!(suffix, &[8, 9]);
4380
    ///
4381
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4382
    /// suffix.fill(1);
4383
    ///
4384
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0, 1, 1]);
4385
    /// ```
4386
    ///
4387
    /// Since an explicit `count` is provided, this method supports types with
4388
    /// zero-sized trailing slice elements. Methods such as [`mut_from_prefix`]
4389
    /// which do not take an explicit count do not support such types.
4390
    ///
4391
    /// ```
4392
    /// use zerocopy::*;
4393
    /// # use zerocopy_derive::*;
4394
    ///
4395
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4396
    /// #[repr(C, packed)]
4397
    /// struct ZSTy {
4398
    ///     leading_sized: [u8; 2],
4399
    ///     trailing_dst: [()],
4400
    /// }
4401
    ///
4402
    /// let src = &mut [85, 85][..];
4403
    /// let (zsty, _) = ZSTy::mut_from_prefix_with_elems(src, 42).unwrap();
4404
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4405
    /// ```
4406
    ///
4407
    /// [`mut_from_prefix`]: FromBytes::mut_from_prefix
4408
    #[must_use = "has no side effects"]
4409
    #[inline]
4410
0
    fn mut_from_prefix_with_elems(
4411
0
        source: &mut [u8],
4412
0
        count: usize,
4413
0
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
4414
0
    where
4415
0
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4416
    {
4417
0
        mut_from_prefix_suffix(source, Some(count), CastType::Prefix)
4418
0
    }
4419
4420
    /// Interprets the suffix of the given `source` as a `&mut Self` with DST
4421
    /// length equal to `count`.
4422
    ///
4423
    /// This method attempts to return a reference to the suffix of `source`
4424
    /// interpreted as a `Self` with `count` trailing elements, and a reference
4425
    /// to the remaining bytes. If there are insufficient bytes, or if that
4426
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
4427
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
4428
    /// alignment error][size-error-from].
4429
    ///
4430
    /// [self-unaligned]: Unaligned
4431
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
4432
    ///
4433
    /// # Examples
4434
    ///
4435
    /// ```
4436
    /// use zerocopy::FromBytes;
4437
    /// # use zerocopy_derive::*;
4438
    ///
4439
    /// # #[derive(Debug, PartialEq, Eq)]
4440
    /// #[derive(FromBytes, IntoBytes, Immutable)]
4441
    /// #[repr(C)]
4442
    /// struct Pixel {
4443
    ///     r: u8,
4444
    ///     g: u8,
4445
    ///     b: u8,
4446
    ///     a: u8,
4447
    /// }
4448
    ///
4449
    /// // These are more bytes than are needed to encode two `Pixel`s.
4450
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4451
    ///
4452
    /// let (prefix, pixels) = <[Pixel]>::mut_from_suffix_with_elems(bytes, 2).unwrap();
4453
    ///
4454
    /// assert_eq!(prefix, &[0, 1]);
4455
    ///
4456
    /// assert_eq!(pixels, &[
4457
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
4458
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
4459
    /// ]);
4460
    ///
4461
    /// prefix.fill(9);
4462
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
4463
    ///
4464
    /// assert_eq!(bytes, [9, 9, 2, 3, 4, 5, 0, 0, 0, 0]);
4465
    /// ```
4466
    ///
4467
    /// Since an explicit `count` is provided, this method supports types with
4468
    /// zero-sized trailing slice elements. Methods such as [`mut_from_suffix`]
4469
    /// which do not take an explicit count do not support such types.
4470
    ///
4471
    /// ```
4472
    /// use zerocopy::*;
4473
    /// # use zerocopy_derive::*;
4474
    ///
4475
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
4476
    /// #[repr(C, packed)]
4477
    /// struct ZSTy {
4478
    ///     leading_sized: [u8; 2],
4479
    ///     trailing_dst: [()],
4480
    /// }
4481
    ///
4482
    /// let src = &mut [85, 85][..];
4483
    /// let (_, zsty) = ZSTy::mut_from_suffix_with_elems(src, 42).unwrap();
4484
    /// assert_eq!(zsty.trailing_dst.len(), 42);
4485
    /// ```
4486
    ///
4487
    /// [`mut_from_suffix`]: FromBytes::mut_from_suffix
4488
    #[must_use = "has no side effects"]
4489
    #[inline]
4490
0
    fn mut_from_suffix_with_elems(
4491
0
        source: &mut [u8],
4492
0
        count: usize,
4493
0
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
4494
0
    where
4495
0
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
4496
    {
4497
0
        mut_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
4498
0
    }
4499
4500
    /// Reads a copy of `Self` from the given `source`.
4501
    ///
4502
    /// If `source.len() != size_of::<Self>()`, `read_from_bytes` returns `Err`.
4503
    ///
4504
    /// # Examples
4505
    ///
4506
    /// ```
4507
    /// use zerocopy::FromBytes;
4508
    /// # use zerocopy_derive::*;
4509
    ///
4510
    /// #[derive(FromBytes)]
4511
    /// #[repr(C)]
4512
    /// struct PacketHeader {
4513
    ///     src_port: [u8; 2],
4514
    ///     dst_port: [u8; 2],
4515
    ///     length: [u8; 2],
4516
    ///     checksum: [u8; 2],
4517
    /// }
4518
    ///
4519
    /// // These bytes encode a `PacketHeader`.
4520
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
4521
    ///
4522
    /// let header = PacketHeader::read_from_bytes(bytes).unwrap();
4523
    ///
4524
    /// assert_eq!(header.src_port, [0, 1]);
4525
    /// assert_eq!(header.dst_port, [2, 3]);
4526
    /// assert_eq!(header.length, [4, 5]);
4527
    /// assert_eq!(header.checksum, [6, 7]);
4528
    /// ```
4529
    #[must_use = "has no side effects"]
4530
    #[inline]
4531
0
    fn read_from_bytes(source: &[u8]) -> Result<Self, SizeError<&[u8], Self>>
4532
0
    where
4533
0
        Self: Sized,
4534
    {
4535
0
        match Ref::<_, Unalign<Self>>::sized_from(source) {
4536
0
            Ok(r) => Ok(Ref::read(&r).into_inner()),
4537
0
            Err(CastError::Size(e)) => Err(e.with_dst()),
4538
0
            Err(CastError::Alignment(_)) => {
4539
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4540
                // `Ref::sized_from` cannot fail due to unmet alignment
4541
                // requirements.
4542
0
                unsafe { core::hint::unreachable_unchecked() }
4543
            }
4544
            Err(CastError::Validity(i)) => match i {},
4545
        }
4546
0
    }
4547
4548
    /// Reads a copy of `Self` from the prefix of the given `source`.
4549
    ///
4550
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
4551
    /// of `source`, returning that `Self` and any remaining bytes. If
4552
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4553
    ///
4554
    /// # Examples
4555
    ///
4556
    /// ```
4557
    /// use zerocopy::FromBytes;
4558
    /// # use zerocopy_derive::*;
4559
    ///
4560
    /// #[derive(FromBytes)]
4561
    /// #[repr(C)]
4562
    /// struct PacketHeader {
4563
    ///     src_port: [u8; 2],
4564
    ///     dst_port: [u8; 2],
4565
    ///     length: [u8; 2],
4566
    ///     checksum: [u8; 2],
4567
    /// }
4568
    ///
4569
    /// // These are more bytes than are needed to encode a `PacketHeader`.
4570
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4571
    ///
4572
    /// let (header, body) = PacketHeader::read_from_prefix(bytes).unwrap();
4573
    ///
4574
    /// assert_eq!(header.src_port, [0, 1]);
4575
    /// assert_eq!(header.dst_port, [2, 3]);
4576
    /// assert_eq!(header.length, [4, 5]);
4577
    /// assert_eq!(header.checksum, [6, 7]);
4578
    /// assert_eq!(body, [8, 9]);
4579
    /// ```
4580
    #[must_use = "has no side effects"]
4581
    #[inline]
4582
0
    fn read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>>
4583
0
    where
4584
0
        Self: Sized,
4585
    {
4586
0
        match Ref::<_, Unalign<Self>>::sized_from_prefix(source) {
4587
0
            Ok((r, suffix)) => Ok((Ref::read(&r).into_inner(), suffix)),
4588
0
            Err(CastError::Size(e)) => Err(e.with_dst()),
4589
0
            Err(CastError::Alignment(_)) => {
4590
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4591
                // `Ref::sized_from_prefix` cannot fail due to unmet alignment
4592
                // requirements.
4593
0
                unsafe { core::hint::unreachable_unchecked() }
4594
            }
4595
            Err(CastError::Validity(i)) => match i {},
4596
        }
4597
0
    }
4598
4599
    /// Reads a copy of `Self` from the suffix of the given `source`.
4600
    ///
4601
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
4602
    /// of `source`, returning that `Self` and any preceding bytes. If
4603
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4604
    ///
4605
    /// # Examples
4606
    ///
4607
    /// ```
4608
    /// use zerocopy::FromBytes;
4609
    /// # use zerocopy_derive::*;
4610
    ///
4611
    /// #[derive(FromBytes)]
4612
    /// #[repr(C)]
4613
    /// struct PacketTrailer {
4614
    ///     frame_check_sequence: [u8; 4],
4615
    /// }
4616
    ///
4617
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
4618
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4619
    ///
4620
    /// let (prefix, trailer) = PacketTrailer::read_from_suffix(bytes).unwrap();
4621
    ///
4622
    /// assert_eq!(prefix, [0, 1, 2, 3, 4, 5]);
4623
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
4624
    /// ```
4625
    #[must_use = "has no side effects"]
4626
    #[inline]
4627
0
    fn read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>>
4628
0
    where
4629
0
        Self: Sized,
4630
    {
4631
0
        match Ref::<_, Unalign<Self>>::sized_from_suffix(source) {
4632
0
            Ok((prefix, r)) => Ok((prefix, Ref::read(&r).into_inner())),
4633
0
            Err(CastError::Size(e)) => Err(e.with_dst()),
4634
0
            Err(CastError::Alignment(_)) => {
4635
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4636
                // `Ref::sized_from_suffix` cannot fail due to unmet alignment
4637
                // requirements.
4638
0
                unsafe { core::hint::unreachable_unchecked() }
4639
            }
4640
            Err(CastError::Validity(i)) => match i {},
4641
        }
4642
0
    }
4643
4644
    /// Reads a copy of `Self` from an `io::Read`.
4645
    ///
4646
    /// This is useful for interfacing with operating system byte sources (files,
4647
    /// sockets, etc.).
4648
    ///
4649
    /// # Examples
4650
    ///
4651
    /// ```no_run
4652
    /// use zerocopy::{byteorder::big_endian::*, FromBytes};
4653
    /// use std::fs::File;
4654
    /// # use zerocopy_derive::*;
4655
    ///
4656
    /// #[derive(FromBytes)]
4657
    /// #[repr(C)]
4658
    /// struct BitmapFileHeader {
4659
    ///     signature: [u8; 2],
4660
    ///     size: U32,
4661
    ///     reserved: U64,
4662
    ///     offset: U64,
4663
    /// }
4664
    ///
4665
    /// let mut file = File::open("image.bin").unwrap();
4666
    /// let header = BitmapFileHeader::read_from_io(&mut file).unwrap();
4667
    /// ```
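    ///
    /// Since any `io::Read` implementor works, an in-memory byte slice can
    /// stand in for a file or socket. A minimal runnable sketch (the `Pair`
    /// type and its fields are hypothetical, chosen only for illustration):
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct Pair {
    ///     a: [u8; 2],
    ///     b: [u8; 2],
    /// }
    ///
    /// // `&[u8]` implements `io::Read`, reading from the front of the slice.
    /// let bytes: &[u8] = &[0, 1, 2, 3];
    /// let pair = Pair::read_from_io(&bytes[..]).unwrap();
    /// assert_eq!(pair.a, [0, 1]);
    /// assert_eq!(pair.b, [2, 3]);
    /// ```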
4668
    #[cfg(feature = "std")]
4669
    #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))]
4670
    #[inline(always)]
4671
    fn read_from_io<R>(mut src: R) -> io::Result<Self>
4672
    where
4673
        Self: Sized,
4674
        R: io::Read,
4675
    {
4676
        // NOTE(#2319, #2320): We do `buf.zero()` separately rather than
4677
        // constructing `let buf = CoreMaybeUninit::zeroed()` because, if `Self`
4678
        // contains padding bytes, then a typed copy of `CoreMaybeUninit<Self>`
4679
        // will not necessarily preserve zeros written to those padding byte
4680
        // locations, and so `buf` could contain uninitialized bytes.
4681
        let mut buf = CoreMaybeUninit::<Self>::uninit();
4682
        buf.zero();
4683
4684
        let ptr = Ptr::from_mut(&mut buf);
4685
        // SAFETY: After `buf.zero()`, `buf` consists entirely of initialized,
4686
        // zeroed bytes. Since `MaybeUninit` has no validity requirements, `ptr`
4687
        // cannot be used to write values which will violate `buf`'s bit
4688
        // validity. Since `ptr` has `Exclusive` aliasing, nothing other than
4689
        // `ptr` may be used to mutate `ptr`'s referent, and so its bit validity
4690
        // cannot be violated even though `buf` may have more permissive bit
4691
        // validity than `ptr`.
4692
        let ptr = unsafe { ptr.assume_validity::<invariant::Initialized>() };
4693
        let ptr = ptr.as_bytes::<BecauseExclusive>();
4694
        src.read_exact(ptr.as_mut())?;
4695
        // SAFETY: `buf` entirely consists of initialized bytes, and `Self` is
4696
        // `FromBytes`.
4697
        Ok(unsafe { buf.assume_init() })
4698
    }
4699
4700
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_bytes`")]
4701
    #[doc(hidden)]
4702
    #[must_use = "has no side effects"]
4703
    #[inline(always)]
4704
0
    fn ref_from(source: &[u8]) -> Option<&Self>
4705
0
    where
4706
0
        Self: KnownLayout + Immutable,
4707
    {
4708
0
        Self::ref_from_bytes(source).ok()
4709
0
    }
4710
4711
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_bytes`")]
4712
    #[doc(hidden)]
4713
    #[must_use = "has no side effects"]
4714
    #[inline(always)]
4715
0
    fn mut_from(source: &mut [u8]) -> Option<&mut Self>
4716
0
    where
4717
0
        Self: KnownLayout + IntoBytes,
4718
    {
4719
0
        Self::mut_from_bytes(source).ok()
4720
0
    }
4721
4722
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_prefix_with_elems`")]
4723
    #[doc(hidden)]
4724
    #[must_use = "has no side effects"]
4725
    #[inline(always)]
4726
0
    fn slice_from_prefix(source: &[u8], count: usize) -> Option<(&[Self], &[u8])>
4727
0
    where
4728
0
        Self: Sized + Immutable,
4729
    {
4730
0
        <[Self]>::ref_from_prefix_with_elems(source, count).ok()
4731
0
    }
4732
4733
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_suffix_with_elems`")]
4734
    #[doc(hidden)]
4735
    #[must_use = "has no side effects"]
4736
    #[inline(always)]
4737
0
    fn slice_from_suffix(source: &[u8], count: usize) -> Option<(&[u8], &[Self])>
4738
0
    where
4739
0
        Self: Sized + Immutable,
4740
    {
4741
0
        <[Self]>::ref_from_suffix_with_elems(source, count).ok()
4742
0
    }
4743
4744
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_prefix_with_elems`")]
4745
    #[doc(hidden)]
4746
    #[must_use = "has no side effects"]
4747
    #[inline(always)]
4748
0
    fn mut_slice_from_prefix(source: &mut [u8], count: usize) -> Option<(&mut [Self], &mut [u8])>
4749
0
    where
4750
0
        Self: Sized + IntoBytes,
4751
    {
4752
0
        <[Self]>::mut_from_prefix_with_elems(source, count).ok()
4753
0
    }
4754
4755
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_suffix_with_elems`")]
4756
    #[doc(hidden)]
4757
    #[must_use = "has no side effects"]
4758
    #[inline(always)]
4759
0
    fn mut_slice_from_suffix(source: &mut [u8], count: usize) -> Option<(&mut [u8], &mut [Self])>
4760
0
    where
4761
0
        Self: Sized + IntoBytes,
4762
    {
4763
0
        <[Self]>::mut_from_suffix_with_elems(source, count).ok()
4764
0
    }
4765
4766
    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::read_from_bytes`")]
4767
    #[doc(hidden)]
4768
    #[must_use = "has no side effects"]
4769
    #[inline(always)]
4770
0
    fn read_from(source: &[u8]) -> Option<Self>
4771
0
    where
4772
0
        Self: Sized,
4773
    {
4774
0
        Self::read_from_bytes(source).ok()
4775
0
    }
4776
}
4777
4778
/// Interprets the given affix of the given bytes as a `&T`.
4779
///
4780
/// This function computes the largest possible size of `T` that can fit in the
4781
/// prefix or suffix bytes of `source`, then attempts to return both a reference
4782
/// to those bytes interpreted as a `T`, and a reference to the excess bytes.
4783
/// If there are insufficient bytes, or if that affix of `source` is not
4784
/// appropriately aligned, this returns `Err`.
4785
#[inline(always)]
4786
0
fn ref_from_prefix_suffix<T: FromBytes + KnownLayout + Immutable + ?Sized>(
4787
0
    source: &[u8],
4788
0
    meta: Option<T::PointerMetadata>,
4789
0
    cast_type: CastType,
4790
0
) -> Result<(&T, &[u8]), CastError<&[u8], T>> {
4791
0
    let (slf, prefix_suffix) = Ptr::from_ref(source)
4792
0
        .try_cast_into::<_, BecauseImmutable>(cast_type, meta)
4793
0
        .map_err(|err| err.map_src(|s| s.as_ref()))?;
4794
0
    Ok((slf.recall_validity().as_ref(), prefix_suffix.as_ref()))
4795
0
}
4796
4797
/// Interprets the given affix of the given bytes as a `&mut T` without
4798
/// copying.
4799
///
4800
/// This function computes the largest possible size of `T` that can fit in the
4801
/// prefix or suffix bytes of `source`, then attempts to return both a reference
4802
/// to those bytes interpreted as a `T`, and a reference to the excess bytes.
4803
/// If there are insufficient bytes, or if that affix of `source` is not
4804
/// appropriately aligned, this returns `Err`.
4805
#[inline(always)]
4806
0
fn mut_from_prefix_suffix<T: FromBytes + IntoBytes + KnownLayout + ?Sized>(
4807
0
    source: &mut [u8],
4808
0
    meta: Option<T::PointerMetadata>,
4809
0
    cast_type: CastType,
4810
0
) -> Result<(&mut T, &mut [u8]), CastError<&mut [u8], T>> {
4811
0
    let (slf, prefix_suffix) = Ptr::from_mut(source)
4812
0
        .try_cast_into::<_, BecauseExclusive>(cast_type, meta)
4813
0
        .map_err(|err| err.map_src(|s| s.as_mut()))?;
4814
0
    Ok((slf.recall_validity::<_, (_, (_, _))>().as_mut(), prefix_suffix.as_mut()))
4815
0
}
4816
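// A brief sketch, using only the public API built on these two helpers (the
// byte values are arbitrary): a prefix cast yields `(value, remaining)`,
// while a suffix cast yields the swapped `(remaining, value)` pair, which is
// why the suffix callers above apply `.map(swap)`.
//
//     use zerocopy::FromBytes;
//
//     let bytes = &[1u8, 2, 3, 4, 5, 6][..];
//     let (head, rest) = <[u8; 4]>::ref_from_prefix(bytes).unwrap();
//     assert_eq!((head, rest), (&[1u8, 2, 3, 4], &[5u8, 6][..]));
//     let (rest, tail) = <[u8; 4]>::ref_from_suffix(bytes).unwrap();
//     assert_eq!((rest, tail), (&[1u8, 2][..], &[3u8, 4, 5, 6]));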
4817
/// Analyzes whether a type is [`IntoBytes`].
4818
///
4819
/// This derive analyzes, at compile time, whether the annotated type satisfies
4820
/// the [safety conditions] of `IntoBytes` and implements `IntoBytes` if it is
4821
/// sound to do so. This derive can be applied to structs and enums (see below
4822
/// for union support); e.g.:
4823
///
4824
/// ```
4825
/// # use zerocopy_derive::{IntoBytes};
4826
/// #[derive(IntoBytes)]
4827
/// #[repr(C)]
4828
/// struct MyStruct {
4829
/// # /*
4830
///     ...
4831
/// # */
4832
/// }
4833
///
4834
/// #[derive(IntoBytes)]
4835
/// #[repr(u8)]
4836
/// enum MyEnum {
4837
/// #   Variant,
4838
/// # /*
4839
///     ...
4840
/// # */
4841
/// }
4842
/// ```
4843
///
4844
/// [safety conditions]: trait@IntoBytes#safety
4845
///
4846
/// # Error Messages
4847
///
4848
/// On Rust toolchains prior to 1.78.0, due to the way that the custom derive
4849
/// for `IntoBytes` is implemented, you may get an error like this:
4850
///
4851
/// ```text
4852
/// error[E0277]: the trait bound `(): PaddingFree<Foo, true>` is not satisfied
4853
///   --> lib.rs:23:10
4854
///    |
4855
///  1 | #[derive(IntoBytes)]
4856
///    |          ^^^^^^^^^ the trait `PaddingFree<Foo, true>` is not implemented for `()`
4857
///    |
4858
///    = help: the following implementations were found:
4859
///                   <() as PaddingFree<T, false>>
4860
/// ```
4861
///
4862
/// This error indicates that the type being annotated has padding bytes, which
4863
/// is illegal for `IntoBytes` types. Consider reducing the alignment of some
4864
/// fields by using types in the [`byteorder`] module, wrapping field types in
4865
/// [`Unalign`], adding explicit struct fields where those padding bytes would
4866
/// be, or using `#[repr(packed)]`. See the Rust Reference's page on [type
4867
/// layout] for more information about type layout and padding.
4868
///
4869
/// [type layout]: https://doc.rust-lang.org/reference/type-layout.html
4870
///
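/// As a minimal sketch of one such fix (the `Record` struct and its field
/// names are hypothetical), replacing a naturally aligned `u32` field with the
/// alignment-1 byteorder type `U32` removes the padding that would otherwise
/// follow the `u8` field:
///
/// ```
/// # use zerocopy_derive::IntoBytes;
/// use zerocopy::byteorder::big_endian::U32;
///
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct Record {
///     tag: u8,
///     // A plain `u32` here would require three padding bytes after `tag`;
///     // `U32` has alignment 1, so the struct has no padding at all.
///     value: U32,
/// }
/// ```
///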
4871
/// # Unions
4872
///
4873
/// Currently, union bit validity is [up in the air][union-validity], and so
4874
/// zerocopy does not support `#[derive(IntoBytes)]` on unions by default.
4875
/// However, implementing `IntoBytes` on a union type is likely sound on all
4876
/// existing Rust toolchains - it's just that it may become unsound in the
4877
/// future. You can opt-in to `#[derive(IntoBytes)]` support on unions by
4878
/// passing the unstable `zerocopy_derive_union_into_bytes` cfg:
4879
///
4880
/// ```shell
4881
/// $ RUSTFLAGS='--cfg zerocopy_derive_union_into_bytes' cargo build
4882
/// ```
4883
///
4884
/// However, it is your responsibility to ensure that this derive is sound on
4885
/// the specific versions of the Rust toolchain you are using! We make no
4886
/// stability or soundness guarantees regarding this cfg, and may remove it at
4887
/// any point.
4888
///
4889
/// We are actively working with Rust to stabilize the necessary language
4890
/// guarantees to support this in a forwards-compatible way, which will enable
4891
/// us to remove the cfg gate. As part of this effort, we need to know how much
4892
/// demand there is for this feature. If you would like to use `IntoBytes` on
4893
/// unions, [please let us know][discussion].
4894
///
4895
/// [union-validity]: https://github.com/rust-lang/unsafe-code-guidelines/issues/438
4896
/// [discussion]: https://github.com/google/zerocopy/discussions/1802
4897
///
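/// With the cfg enabled, the derive can be applied to unions such as the
/// following, whose fields are all `IntoBytes` and exactly fill the union, so
/// that it has no padding. A minimal sketch (the `Word` union is hypothetical,
/// and the snippet is left uncompiled here because it needs the cfg above):
///
/// ```ignore
/// # use zerocopy_derive::IntoBytes;
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// union Word {
///     bytes: [u8; 4],
///     value: u32,
/// }
/// ```
///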
4898
/// # Analysis
4899
///
4900
/// *This section describes, roughly, the analysis performed by this derive to
4901
/// determine whether it is sound to implement `IntoBytes` for a given type.
4902
/// Unless you are modifying the implementation of this derive, or attempting to
4903
/// manually implement `IntoBytes` for a type yourself, you don't need to read
4904
/// this section.*
4905
///
4906
/// If a type has the following properties, then this derive can implement
4907
/// `IntoBytes` for that type:
4908
///
4909
/// - If the type is a struct, its fields must be [`IntoBytes`]. Additionally:
4910
///     - if the type is `repr(transparent)` or `repr(packed)`, it is
4911
///       [`IntoBytes`] if its fields are [`IntoBytes`]; else,
4912
///     - if the type is `repr(C)` with at most one field, it is [`IntoBytes`]
4913
///       if its field is [`IntoBytes`]; else,
4914
///     - if the type has no generic parameters, it is [`IntoBytes`] if the type
4915
///       is sized and has no padding bytes; else,
4916
///     - if the type is `repr(C)`, its fields must be [`Unaligned`].
4917
/// - If the type is an enum:
4918
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
4919
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
4920
///   - It must have no padding bytes.
4921
///   - Its fields must be [`IntoBytes`].
4922
///
4923
/// This analysis is subject to change. Unsafe code may *only* rely on the
4924
/// documented [safety conditions] of `IntoBytes`, and must *not* rely on the
4925
/// implementation details of this derive.
4926
///
4927
/// [Rust Reference]: https://doc.rust-lang.org/reference/type-layout.html
4928
#[cfg(any(feature = "derive", test))]
4929
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
4930
pub use zerocopy_derive::IntoBytes;
4931
4932
/// Types that can be converted to an immutable slice of initialized bytes.
4933
///
4934
/// Any `IntoBytes` type can be converted to a slice of initialized bytes of the
4935
/// same size. This is useful for efficiently serializing structured data as raw
4936
/// bytes.
4937
///
4938
/// # Implementation
4939
///
4940
/// **Do not implement this trait yourself!** Instead, use
4941
/// [`#[derive(IntoBytes)]`][derive]; e.g.:
4942
///
4943
/// ```
4944
/// # use zerocopy_derive::IntoBytes;
4945
/// #[derive(IntoBytes)]
4946
/// #[repr(C)]
4947
/// struct MyStruct {
4948
/// # /*
4949
///     ...
4950
/// # */
4951
/// }
4952
///
4953
/// #[derive(IntoBytes)]
4954
/// #[repr(u8)]
4955
/// enum MyEnum {
4956
/// #   Variant0,
4957
/// # /*
4958
///     ...
4959
/// # */
4960
/// }
4961
/// ```
4962
///
4963
/// This derive performs a sophisticated, compile-time safety analysis to
4964
/// determine whether a type is `IntoBytes`. See the [derive
4965
/// documentation][derive] for guidance on how to interpret error messages
4966
/// produced by the derive's analysis.
4967
///
4968
/// # Safety
4969
///
4970
/// *This section describes what is required in order for `T: IntoBytes`, and
4971
/// what unsafe code may assume of such types. If you don't plan on implementing
4972
/// `IntoBytes` manually, and you don't plan on writing unsafe code that
4973
/// operates on `IntoBytes` types, then you don't need to read this section.*
4974
///
4975
/// If `T: IntoBytes`, then unsafe code may assume that it is sound to treat any
4976
/// `t: T` as an immutable `[u8]` of length `size_of_val(t)`. If a type is
4977
/// marked as `IntoBytes` which violates this contract, it may cause undefined
4978
/// behavior.
4979
///
4980
/// `#[derive(IntoBytes)]` only permits [types which satisfy these
4981
/// requirements][derive-analysis].
4982
///
4983
#[cfg_attr(
4984
    feature = "derive",
4985
    doc = "[derive]: zerocopy_derive::IntoBytes",
4986
    doc = "[derive-analysis]: zerocopy_derive::IntoBytes#analysis"
4987
)]
4988
#[cfg_attr(
4989
    not(feature = "derive"),
4990
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html"),
4991
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html#analysis"),
4992
)]
4993
#[cfg_attr(
4994
    not(no_zerocopy_diagnostic_on_unimplemented_1_78_0),
4995
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(IntoBytes)]` to `{Self}`")
4996
)]
4997
pub unsafe trait IntoBytes {
4998
    // The `Self: Sized` bound makes it so that this function doesn't prevent
4999
    // `IntoBytes` from being object safe. Note that other `IntoBytes` methods
5000
    // prevent object safety, but those provide a benefit in exchange for object
5001
    // safety. If at some point we remove those methods, change their type
5002
    // signatures, or move them out of this trait so that `IntoBytes` is object
5003
    // safe again, it's important that this function not prevent object safety.
5004
    #[doc(hidden)]
5005
    fn only_derive_is_allowed_to_implement_this_trait()
5006
    where
5007
        Self: Sized;
5008
5009
    /// Gets the bytes of this value.
5010
    ///
5011
    /// # Examples
5012
    ///
5013
    /// ```
5014
    /// use zerocopy::IntoBytes;
5015
    /// # use zerocopy_derive::*;
5016
    ///
5017
    /// #[derive(IntoBytes, Immutable)]
5018
    /// #[repr(C)]
5019
    /// struct PacketHeader {
5020
    ///     src_port: [u8; 2],
5021
    ///     dst_port: [u8; 2],
5022
    ///     length: [u8; 2],
5023
    ///     checksum: [u8; 2],
5024
    /// }
5025
    ///
5026
    /// let header = PacketHeader {
5027
    ///     src_port: [0, 1],
5028
    ///     dst_port: [2, 3],
5029
    ///     length: [4, 5],
5030
    ///     checksum: [6, 7],
5031
    /// };
5032
    ///
5033
    /// let bytes = header.as_bytes();
5034
    ///
5035
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5036
    /// ```
5037
    #[must_use = "has no side effects"]
5038
    #[inline(always)]
5039
0
    fn as_bytes(&self) -> &[u8]
5040
0
    where
5041
0
        Self: Immutable,
5042
    {
5043
        // Note that this method does not have a `Self: Sized` bound;
5044
        // `size_of_val` works for unsized values too.
5045
0
        let len = mem::size_of_val(self);
5046
0
        let slf: *const Self = self;
5047
5048
        // SAFETY:
5049
        // - `slf.cast::<u8>()` is valid for reads for `len * size_of::<u8>()`
5050
        //   many bytes because...
5051
        //   - `slf` is the same pointer as `self`, and `self` is a reference
5052
        //     which points to an object whose size is `len`. Thus...
5053
        //     - The entire region of `len` bytes starting at `slf` is contained
5054
        //       within a single allocation.
5055
        //     - `slf` is non-null.
5056
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5057
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5058
        //   initialized.
5059
        // - Since `slf` is derived from `self`, and `self` is an immutable
5060
        //   reference, the only other references to this memory region that
5061
        //   could exist are other immutable references, and those don't allow
5062
        //   mutation. `Self: Immutable` prohibits types which contain
5063
        //   `UnsafeCell`s, which are the only types for which this rule
5064
        //   wouldn't be sufficient.
5065
        // - The total size of the resulting slice is no larger than
5066
        //   `isize::MAX` because no allocation produced by safe code can be
5067
        //   larger than `isize::MAX`.
5068
        //
5069
        // FIXME(#429): Add references to docs and quotes.
5070
0
        unsafe { slice::from_raw_parts(slf.cast::<u8>(), len) }
5071
0
    }
5072
5073
    /// Gets the bytes of this value mutably.
5074
    ///
5075
    /// # Examples
5076
    ///
5077
    /// ```
5078
    /// use zerocopy::IntoBytes;
5079
    /// # use zerocopy_derive::*;
5080
    ///
5081
    /// # #[derive(Eq, PartialEq, Debug)]
5082
    /// #[derive(FromBytes, IntoBytes, Immutable)]
5083
    /// #[repr(C)]
5084
    /// struct PacketHeader {
5085
    ///     src_port: [u8; 2],
5086
    ///     dst_port: [u8; 2],
5087
    ///     length: [u8; 2],
5088
    ///     checksum: [u8; 2],
5089
    /// }
5090
    ///
5091
    /// let mut header = PacketHeader {
5092
    ///     src_port: [0, 1],
5093
    ///     dst_port: [2, 3],
5094
    ///     length: [4, 5],
5095
    ///     checksum: [6, 7],
5096
    /// };
5097
    ///
5098
    /// let bytes = header.as_mut_bytes();
5099
    ///
5100
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5101
    ///
5102
    /// bytes.reverse();
5103
    ///
5104
    /// assert_eq!(header, PacketHeader {
5105
    ///     src_port: [7, 6],
5106
    ///     dst_port: [5, 4],
5107
    ///     length: [3, 2],
5108
    ///     checksum: [1, 0],
5109
    /// });
5110
    /// ```
5111
    #[must_use = "has no side effects"]
5112
    #[inline(always)]
5113
0
    fn as_mut_bytes(&mut self) -> &mut [u8]
5114
0
    where
5115
0
        Self: FromBytes,
5116
    {
5117
        // Note that this method does not have a `Self: Sized` bound;
5118
        // `size_of_val` works for unsized values too.
5119
0
        let len = mem::size_of_val(self);
5120
0
        let slf: *mut Self = self;
5121
5122
        // SAFETY:
5123
        // - `slf.cast::<u8>()` is valid for reads and writes for `len *
5124
        //   size_of::<u8>()` many bytes because...
5125
        //   - `slf` is the same pointer as `self`, and `self` is a reference
5126
        //     which points to an object whose size is `len`. Thus...
5127
        //     - The entire region of `len` bytes starting at `slf` is contained
5128
        //       within a single allocation.
5129
        //     - `slf` is non-null.
5130
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
5131
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
5132
        //   initialized.
5133
        // - `Self: FromBytes` ensures that no write to this memory region
5134
        //   could result in it containing an invalid `Self`.
5135
        // - Since `slf` is derived from `self`, and `self` is a mutable
5136
        //   reference, no other references to this memory region can exist.
5137
        // - The total size of the resulting slice is no larger than
5138
        //   `isize::MAX` because no allocation produced by safe code can be
5139
        //   larger than `isize::MAX`.
5140
        //
5141
        // FIXME(#429): Add references to docs and quotes.
5142
0
        unsafe { slice::from_raw_parts_mut(slf.cast::<u8>(), len) }
5143
0
    }
5144
5145
    /// Writes a copy of `self` to `dst`.
5146
    ///
5147
    /// If `dst.len() != size_of_val(self)`, `write_to` returns `Err`.
5148
    ///
5149
    /// # Examples
5150
    ///
5151
    /// ```
5152
    /// use zerocopy::IntoBytes;
5153
    /// # use zerocopy_derive::*;
5154
    ///
5155
    /// #[derive(IntoBytes, Immutable)]
5156
    /// #[repr(C)]
5157
    /// struct PacketHeader {
5158
    ///     src_port: [u8; 2],
5159
    ///     dst_port: [u8; 2],
5160
    ///     length: [u8; 2],
5161
    ///     checksum: [u8; 2],
5162
    /// }
5163
    ///
5164
    /// let header = PacketHeader {
5165
    ///     src_port: [0, 1],
5166
    ///     dst_port: [2, 3],
5167
    ///     length: [4, 5],
5168
    ///     checksum: [6, 7],
5169
    /// };
5170
    ///
5171
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0];
5172
    ///
5173
    /// header.write_to(&mut bytes[..]);
5174
    ///
5175
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
5176
    /// ```
5177
    ///
5178
    /// If too many or too few target bytes are provided, `write_to` returns
5179
    /// `Err` and leaves the target bytes unmodified:
5180
    ///
5181
    /// ```
5182
    /// # use zerocopy::IntoBytes;
5183
    /// # let header = u128::MAX;
5184
    /// let mut excessive_bytes = &mut [0u8; 128][..];
5185
    ///
5186
    /// let write_result = header.write_to(excessive_bytes);
5187
    ///
5188
    /// assert!(write_result.is_err());
5189
    /// assert_eq!(excessive_bytes, [0u8; 128]);
5190
    /// ```
5191
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5192
    #[inline]
5193
    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5194
0
    fn write_to(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5195
0
    where
5196
0
        Self: Immutable,
5197
    {
5198
0
        let src = self.as_bytes();
5199
0
        if dst.len() == src.len() {
5200
            // SAFETY: Within this branch of the conditional, we have ensured
5201
            // that `dst.len()` is equal to `src.len()`. Neither the size of the
5202
            // source nor the size of the destination change between the above
5203
            // size check and the invocation of `copy_unchecked`.
5204
0
            unsafe { util::copy_unchecked(src, dst) }
5205
0
            Ok(())
5206
        } else {
5207
0
            Err(SizeError::new(self))
5208
        }
5209
0
    }
5210
5211
    /// Writes a copy of `self` to the prefix of `dst`.
5212
    ///
5213
    /// `write_to_prefix` writes `self` to the first `size_of_val(self)` bytes
5214
    /// of `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5215
    ///
5216
    /// # Examples
5217
    ///
5218
    /// ```
5219
    /// use zerocopy::IntoBytes;
5220
    /// # use zerocopy_derive::*;
5221
    ///
5222
    /// #[derive(IntoBytes, Immutable)]
5223
    /// #[repr(C)]
5224
    /// struct PacketHeader {
5225
    ///     src_port: [u8; 2],
5226
    ///     dst_port: [u8; 2],
5227
    ///     length: [u8; 2],
5228
    ///     checksum: [u8; 2],
5229
    /// }
5230
    ///
5231
    /// let header = PacketHeader {
5232
    ///     src_port: [0, 1],
5233
    ///     dst_port: [2, 3],
5234
    ///     length: [4, 5],
5235
    ///     checksum: [6, 7],
5236
    /// };
5237
    ///
5238
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5239
    ///
5240
    /// header.write_to_prefix(&mut bytes[..]);
5241
    ///
5242
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7, 0, 0]);
5243
    /// ```
5244
    ///
5245
    /// If insufficient target bytes are provided, `write_to_prefix` returns
5246
    /// `Err` and leaves the target bytes unmodified:
5247
    ///
5248
    /// ```
5249
    /// # use zerocopy::IntoBytes;
5250
    /// # let header = u128::MAX;
5251
    /// let mut insufficient_bytes = &mut [0, 0][..];
5252
    ///
5253
    /// let write_result = header.write_to_prefix(insufficient_bytes);
5254
    ///
5255
    /// assert!(write_result.is_err());
5256
    /// assert_eq!(insufficient_bytes, [0, 0]);
5257
    /// ```
5258
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5259
    #[inline]
5260
    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5261
0
    fn write_to_prefix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5262
0
    where
5263
0
        Self: Immutable,
5264
    {
5265
0
        let src = self.as_bytes();
5266
0
        match dst.get_mut(..src.len()) {
5267
0
            Some(dst) => {
5268
                // SAFETY: Within this branch of the `match`, we have ensured
5269
                // through fallible subslicing that `dst.len()` is equal to
5270
                // `src.len()`. Neither the size of the source nor the size of
5271
                // the destination change between the above subslicing operation
5272
                // and the invocation of `copy_unchecked`.
5273
0
                unsafe { util::copy_unchecked(src, dst) }
5274
0
                Ok(())
5275
            }
5276
0
            None => Err(SizeError::new(self)),
5277
        }
5278
0
    }
5279
5280
    /// Writes a copy of `self` to the suffix of `dst`.
5281
    ///
5282
    /// `write_to_suffix` writes `self` to the last `size_of_val(self)` bytes of
5283
    /// `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5284
    ///
5285
    /// # Examples
5286
    ///
5287
    /// ```
5288
    /// use zerocopy::IntoBytes;
5289
    /// # use zerocopy_derive::*;
5290
    ///
5291
    /// #[derive(IntoBytes, Immutable)]
5292
    /// #[repr(C)]
5293
    /// struct PacketHeader {
5294
    ///     src_port: [u8; 2],
5295
    ///     dst_port: [u8; 2],
5296
    ///     length: [u8; 2],
5297
    ///     checksum: [u8; 2],
5298
    /// }
5299
    ///
5300
    /// let header = PacketHeader {
5301
    ///     src_port: [0, 1],
5302
    ///     dst_port: [2, 3],
5303
    ///     length: [4, 5],
5304
    ///     checksum: [6, 7],
5305
    /// };
5306
    ///
5307
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5308
    ///
5309
    /// header.write_to_suffix(&mut bytes[..]);
5310
    ///
5311
    /// assert_eq!(bytes, [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]);
5319
    /// ```
5320
    ///
5321
    /// If insufficient target bytes are provided, `write_to_suffix` returns
5322
    /// `Err` and leaves the target bytes unmodified:
5323
    ///
5324
    /// ```
5325
    /// # use zerocopy::IntoBytes;
5326
    /// # let header = u128::MAX;
5327
    /// let mut insufficient_bytes = &mut [0, 0][..];
5328
    ///
5329
    /// let write_result = header.write_to_suffix(insufficient_bytes);
5330
    ///
5331
    /// assert!(write_result.is_err());
5332
    /// assert_eq!(insufficient_bytes, [0, 0]);
5333
    /// ```
5334
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5335
    #[inline]
5336
    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5337
0
    fn write_to_suffix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5338
0
    where
5339
0
        Self: Immutable,
5340
    {
5341
0
        let src = self.as_bytes();
5342
0
        let start = if let Some(start) = dst.len().checked_sub(src.len()) {
5343
0
            start
5344
        } else {
5345
0
            return Err(SizeError::new(self));
5346
        };
5347
0
        let dst = if let Some(dst) = dst.get_mut(start..) {
5348
0
            dst
5349
        } else {
5350
            // get_mut() should never return None here. We return a `SizeError`
5351
            // rather than .unwrap() because in the event the branch is not
5352
            // optimized away, returning a value is generally lighter-weight
5353
            // than panicking.
5354
0
            return Err(SizeError::new(self));
5355
        };
5356
        // SAFETY: Through fallible subslicing of `dst`, we have ensured that
5357
        // `dst.len()` is equal to `src.len()`. Neither the size of the source
5358
        // nor the size of the destination change between the above subslicing
5359
        // operation and the invocation of `copy_unchecked`.
5360
0
        unsafe {
5361
0
            util::copy_unchecked(src, dst);
5362
0
        }
5363
0
        Ok(())
5364
0
    }
5365
5366
    /// Writes a copy of `self` to an `io::Write`.
5367
    ///
5368
    /// This is a shorthand for `dst.write_all(self.as_bytes())`, and is useful
5369
    /// for interfacing with operating system byte sinks (files, sockets, etc.).
5370
    ///
5371
    /// # Examples
5372
    ///
5373
    /// ```no_run
5374
    /// use zerocopy::{byteorder::big_endian::U16, FromBytes, IntoBytes};
5375
    /// use std::fs::File;
5376
    /// # use zerocopy_derive::*;
5377
    ///
5378
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
5379
    /// #[repr(C, packed)]
5380
    /// struct GrayscaleImage {
5381
    ///     height: U16,
5382
    ///     width: U16,
5383
    ///     pixels: [U16],
5384
    /// }
5385
    ///
5386
    /// let image = GrayscaleImage::ref_from_bytes(&[0, 0, 0, 0][..]).unwrap();
5387
    /// let mut file = File::create("image.bin").unwrap();
5388
    /// image.write_to_io(&mut file).unwrap();
5389
    /// ```
5390
    ///
5391
    /// If the write fails, `write_to_io` returns `Err` and a partial write may
5392
    /// have occurred; e.g.:
5393
    ///
5394
    /// ```
5395
    /// # use zerocopy::IntoBytes;
5396
    ///
5397
    /// let src = u128::MAX;
5398
    /// let mut dst = [0u8; 2];
5399
    ///
5400
    /// let write_result = src.write_to_io(&mut dst[..]);
5401
    ///
5402
    /// assert!(write_result.is_err());
5403
    /// assert_eq!(dst, [255, 255]);
5404
    /// ```
5405
    #[cfg(feature = "std")]
5406
    #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))]
5407
    #[inline(always)]
5408
    fn write_to_io<W>(&self, mut dst: W) -> io::Result<()>
5409
    where
5410
        Self: Immutable,
5411
        W: io::Write,
5412
    {
5413
        dst.write_all(self.as_bytes())
5414
    }
5415
5416
    #[deprecated(since = "0.8.0", note = "`IntoBytes::as_bytes_mut` was renamed to `as_mut_bytes`")]
5417
    #[doc(hidden)]
5418
    #[inline]
5419
0
    fn as_bytes_mut(&mut self) -> &mut [u8]
5420
0
    where
5421
0
        Self: FromBytes,
5422
    {
5423
0
        self.as_mut_bytes()
5424
0
    }
5425
}
5426
5427
/// Analyzes whether a type is [`Unaligned`].
5428
///
5429
/// This derive analyzes, at compile time, whether the annotated type satisfies
5430
/// the [safety conditions] of `Unaligned` and implements `Unaligned` if it is
5431
/// sound to do so. This derive can be applied to structs, enums, and unions;
5432
/// e.g.:
5433
///
5434
/// ```
5435
/// # use zerocopy_derive::Unaligned;
5436
/// #[derive(Unaligned)]
5437
/// #[repr(C)]
5438
/// struct MyStruct {
5439
/// # /*
5440
///     ...
5441
/// # */
5442
/// }
5443
///
5444
/// #[derive(Unaligned)]
5445
/// #[repr(u8)]
5446
/// enum MyEnum {
5447
/// #   Variant0,
5448
/// # /*
5449
///     ...
5450
/// # */
5451
/// }
5452
///
5453
/// #[derive(Unaligned)]
5454
/// #[repr(packed)]
5455
/// union MyUnion {
5456
/// #   variant: u8,
5457
/// # /*
5458
///     ...
5459
/// # */
5460
/// }
5461
/// ```
5462
///
5463
/// # Analysis
5464
///
5465
/// *This section describes, roughly, the analysis performed by this derive to
5466
/// determine whether it is sound to implement `Unaligned` for a given type.
5467
/// Unless you are modifying the implementation of this derive, or attempting to
5468
/// manually implement `Unaligned` for a type yourself, you don't need to read
5469
/// this section.*
5470
///
5471
/// If a type has the following properties, then this derive can implement
5472
/// `Unaligned` for that type:
5473
///
5474
/// - If the type is a struct or union:
5475
///   - If `repr(align(N))` is provided, `N` must equal 1.
5476
///   - If the type is `repr(C)` or `repr(transparent)`, all fields must be
5477
///     [`Unaligned`].
5478
///   - If the type is not `repr(C)` or `repr(transparent)`, it must be
5479
///     `repr(packed)` or `repr(packed(1))`.
5480
/// - If the type is an enum:
5481
///   - If `repr(align(N))` is provided, `N` must equal 1.
5482
///   - It must be a field-less enum (meaning that all variants have no fields).
5483
///   - It must be `repr(i8)` or `repr(u8)`.
5484
///
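/// For example, under these rules, a `repr(C)` struct whose fields all have
/// alignment 1 can derive `Unaligned` (a minimal sketch; the type and field
/// names are illustrative):
///
/// ```
/// # use zerocopy_derive::Unaligned;
/// #[derive(Unaligned)]
/// #[repr(C)]
/// struct UdpPorts {
///     src_port: [u8; 2], // `u8` and `[u8; N]` both have alignment 1
///     dst_port: [u8; 2],
/// }
/// ```
///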
5485
/// [safety conditions]: trait@Unaligned#safety
5486
#[cfg(any(feature = "derive", test))]
5487
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5488
pub use zerocopy_derive::Unaligned;
5489
5490
/// Types with no alignment requirement.
5491
///
5492
/// If `T: Unaligned`, then `align_of::<T>() == 1`.
5493
///
5494
/// # Implementation
5495
///
5496
/// **Do not implement this trait yourself!** Instead, use
5497
/// [`#[derive(Unaligned)]`][derive]; e.g.:
5498
///
5499
/// ```
5500
/// # use zerocopy_derive::Unaligned;
5501
/// #[derive(Unaligned)]
5502
/// #[repr(C)]
5503
/// struct MyStruct {
5504
/// # /*
5505
///     ...
5506
/// # */
5507
/// }
5508
///
5509
/// #[derive(Unaligned)]
5510
/// #[repr(u8)]
5511
/// enum MyEnum {
5512
/// #   Variant0,
5513
/// # /*
5514
///     ...
5515
/// # */
5516
/// }
5517
///
5518
/// #[derive(Unaligned)]
5519
/// #[repr(packed)]
5520
/// union MyUnion {
5521
/// #   variant: u8,
5522
/// # /*
5523
///     ...
5524
/// # */
5525
/// }
5526
/// ```
5527
///
5528
/// This derive performs a sophisticated, compile-time safety analysis to
5529
/// determine whether a type is `Unaligned`.
5530
///
5531
/// # Safety
5532
///
5533
/// *This section describes what is required in order for `T: Unaligned`, and
5534
/// what unsafe code may assume of such types. If you don't plan on implementing
5535
/// `Unaligned` manually, and you don't plan on writing unsafe code that
5536
/// operates on `Unaligned` types, then you don't need to read this section.*
5537
///
5538
/// If `T: Unaligned`, then unsafe code may assume that it is sound to produce a
5539
/// reference to `T` at any memory location regardless of alignment. If a type
5540
/// is marked as `Unaligned` in violation of this contract, it may cause
5541
/// undefined behavior.
5542
///
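/// For example, because an `Unaligned` type can be referenced at any address,
/// conversions such as [`FromBytes::ref_from_bytes`] never fail due to
/// misalignment for such types. A minimal sketch (the type name is
/// illustrative):
///
/// ```
/// # use zerocopy::FromBytes;
/// # use zerocopy_derive::*;
/// #[derive(FromBytes, KnownLayout, Immutable, Unaligned)]
/// #[repr(C)]
/// struct BigEndianU16([u8; 2]);
///
/// let bytes = [0u8, 1, 2, 3];
/// // `&bytes[1..3]` may start at an odd address, but the conversion still
/// // succeeds because `BigEndianU16` has no alignment requirement.
/// let value = BigEndianU16::ref_from_bytes(&bytes[1..3]).unwrap();
/// assert_eq!(value.0, [1, 2]);
/// ```
///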
5543
/// `#[derive(Unaligned)]` only permits [types which satisfy these
5544
/// requirements][derive-analysis].
5545
///
5546
#[cfg_attr(
5547
    feature = "derive",
5548
    doc = "[derive]: zerocopy_derive::Unaligned",
5549
    doc = "[derive-analysis]: zerocopy_derive::Unaligned#analysis"
5550
)]
5551
#[cfg_attr(
5552
    not(feature = "derive"),
5553
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html"),
5554
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html#analysis"),
5555
)]
5556
#[cfg_attr(
5557
    not(no_zerocopy_diagnostic_on_unimplemented_1_78_0),
5558
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Unaligned)]` to `{Self}`")
5559
)]
5560
pub unsafe trait Unaligned {
5561
    // The `Self: Sized` bound makes it so that `Unaligned` is still object
5562
    // safe.
5563
    #[doc(hidden)]
5564
    fn only_derive_is_allowed_to_implement_this_trait()
5565
    where
5566
        Self: Sized;
5567
}
5568
5569
/// Derives optimized [`PartialEq`] and [`Eq`] implementations.
5570
///
5571
/// This derive can be applied to structs and enums implementing both
5572
/// [`Immutable`] and [`IntoBytes`]; e.g.:
5573
///
5574
/// ```
5575
/// # use zerocopy_derive::{ByteEq, Immutable, IntoBytes};
5576
/// #[derive(ByteEq, Immutable, IntoBytes)]
5577
/// #[repr(C)]
5578
/// struct MyStruct {
5579
/// # /*
5580
///     ...
5581
/// # */
5582
/// }
5583
///
5584
/// #[derive(ByteEq, Immutable, IntoBytes)]
5585
/// #[repr(u8)]
5586
/// enum MyEnum {
5587
/// #   Variant,
5588
/// # /*
5589
///     ...
5590
/// # */
5591
/// }
5592
/// ```
5593
///
5594
/// The standard library's [`derive(Eq, PartialEq)`][derive@PartialEq] computes
5595
/// equality by individually comparing each field. Instead, the implementation
5596
/// of [`PartialEq::eq`] emitted by `derive(ByteEq)` converts the entirety of
5597
/// `self` and `other` to byte slices and compares those slices for equality.
5598
/// This may have performance advantages.
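///
/// The generated implementation is roughly equivalent to the following sketch
/// (illustrative only; the actual macro expansion may differ in its details):
///
/// ```
/// # use zerocopy::IntoBytes;
/// # use zerocopy_derive::*;
/// # #[derive(Immutable, IntoBytes)]
/// # #[repr(C)]
/// # struct MyStruct { a: u32, b: u32 }
/// impl PartialEq for MyStruct {
///     fn eq(&self, other: &Self) -> bool {
///         // Compare the raw bytes of both values in one shot.
///         self.as_bytes() == other.as_bytes()
///     }
/// }
/// impl Eq for MyStruct {}
/// ```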
5599
#[cfg(any(feature = "derive", test))]
5600
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5601
pub use zerocopy_derive::ByteEq;
5602
/// Derives an optimized [`Hash`] implementation.
5603
///
5604
/// This derive can be applied to structs and enums implementing both
5605
/// [`Immutable`] and [`IntoBytes`]; e.g.:
5606
///
5607
/// ```
5608
/// # use zerocopy_derive::{ByteHash, Immutable, IntoBytes};
5609
/// #[derive(ByteHash, Immutable, IntoBytes)]
5610
/// #[repr(C)]
5611
/// struct MyStruct {
5612
/// # /*
5613
///     ...
5614
/// # */
5615
/// }
5616
///
5617
/// #[derive(ByteHash, Immutable, IntoBytes)]
5618
/// #[repr(u8)]
5619
/// enum MyEnum {
5620
/// #   Variant,
5621
/// # /*
5622
///     ...
5623
/// # */
5624
/// }
5625
/// ```
5626
///
5627
/// The standard library's [`derive(Hash)`][derive@Hash] produces hashes by
5628
/// individually hashing each field and combining the results. Instead, the
5629
/// implementations of [`Hash::hash()`] and [`Hash::hash_slice()`] generated by
5630
/// `derive(ByteHash)` convert the entirety of `self` to a byte slice and hash
5631
/// it in a single call to [`Hasher::write()`]. This may have performance
5632
/// advantages.
5633
///
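/// The generated implementation is roughly equivalent to the following sketch
/// (illustrative only; the actual macro expansion may differ in its details):
///
/// ```
/// # use core::hash::{Hash, Hasher};
/// # use zerocopy::IntoBytes;
/// # use zerocopy_derive::*;
/// # #[derive(Immutable, IntoBytes)]
/// # #[repr(C)]
/// # struct MyStruct { a: u32, b: u32 }
/// impl Hash for MyStruct {
///     fn hash<H: Hasher>(&self, state: &mut H) {
///         // Hash all of the bytes of `self` in a single `Hasher::write` call.
///         state.write(self.as_bytes());
///     }
/// }
/// ```
///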
5634
/// [`Hash`]: core::hash::Hash
5635
/// [`Hash::hash()`]: core::hash::Hash::hash()
5636
/// [`Hash::hash_slice()`]: core::hash::Hash::hash_slice()
5637
#[cfg(any(feature = "derive", test))]
5638
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5639
pub use zerocopy_derive::ByteHash;
5640
/// Implements [`SplitAt`].
5641
///
5642
/// This derive can be applied to structs; e.g.:
5643
///
5644
/// ```
5645
/// # use zerocopy_derive::{KnownLayout, SplitAt};
5646
/// #[derive(SplitAt, KnownLayout)]
5647
/// #[repr(C)]
5648
/// struct MyStruct {
5649
/// # /*
5650
///     ...
5651
/// # */
5652
///     // `SplitAt` requires the trailing field to be a slice.
///     tail: [u8],
/// }
5653
/// ```
5654
#[cfg(any(feature = "derive", test))]
5655
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5656
pub use zerocopy_derive::SplitAt;
5657
5658
#[cfg(feature = "alloc")]
5659
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
5660
#[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
5661
mod alloc_support {
5662
    use super::*;
5663
5664
    /// Extends a `Vec<T>` by pushing `additional` new items onto the end of the
5665
    /// vector. The new items are initialized with zeros.
5666
    #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
5667
    #[doc(hidden)]
5668
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5669
    #[inline(always)]
5670
    pub fn extend_vec_zeroed<T: FromZeros>(
5671
        v: &mut Vec<T>,
5672
        additional: usize,
5673
    ) -> Result<(), AllocError> {
5674
        <T as FromZeros>::extend_vec_zeroed(v, additional)
5675
    }
5676
5677
    /// Inserts `additional` new items into `Vec<T>` at `position`. The new
5678
    /// items are initialized with zeros.
5679
    ///
5680
    /// # Panics
5681
    ///
5682
    /// Panics if `position > v.len()`.
5683
    #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
5684
    #[doc(hidden)]
5685
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5686
    #[inline(always)]
5687
    pub fn insert_vec_zeroed<T: FromZeros>(
5688
        v: &mut Vec<T>,
5689
        position: usize,
5690
        additional: usize,
5691
    ) -> Result<(), AllocError> {
5692
        <T as FromZeros>::insert_vec_zeroed(v, position, additional)
5693
    }
5694
}
5695
5696
#[cfg(feature = "alloc")]
5697
#[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
5698
#[doc(hidden)]
5699
pub use alloc_support::*;
5700
5701
#[cfg(test)]
5702
#[allow(clippy::assertions_on_result_states, clippy::unreadable_literal)]
5703
mod tests {
5704
    use static_assertions::assert_impl_all;
5705
5706
    use super::*;
5707
    use crate::util::testutil::*;
5708
5709
    // An unsized type.
5710
    //
5711
    // This is used to test the custom derives of our traits. The `[u8]` type
5712
    // gets a hand-rolled impl, so it doesn't exercise our custom derives.
5713
    #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Unaligned, Immutable)]
5714
    #[repr(transparent)]
5715
    struct Unsized([u8]);
5716
5717
    impl Unsized {
5718
        fn from_mut_slice(slc: &mut [u8]) -> &mut Unsized {
5719
            // SAFETY: This is *probably* sound - since the layouts of `[u8]` and
5720
            // `Unsized` are the same, so are the layouts of `&mut [u8]` and
5721
            // `&mut Unsized`. [1] Even if it turns out that this isn't actually
5722
            // guaranteed by the language spec, we can just change this since
5723
            // it's in test code.
5724
            //
5725
            // [1] https://github.com/rust-lang/unsafe-code-guidelines/issues/375
5726
            unsafe { mem::transmute(slc) }
5727
        }
5728
    }
5729
5730
    #[test]
5731
    fn test_known_layout() {
5732
        // Test that `$ty` and `ManuallyDrop<$ty>` have the expected layout.
5733
        // Test that `PhantomData<$ty>` has the same layout as `()` regardless
5734
        // of `$ty`.
5735
        macro_rules! test {
5736
            ($ty:ty, $expect:expr) => {
5737
                let expect = $expect;
5738
                assert_eq!(<$ty as KnownLayout>::LAYOUT, expect);
5739
                assert_eq!(<ManuallyDrop<$ty> as KnownLayout>::LAYOUT, expect);
5740
                assert_eq!(<PhantomData<$ty> as KnownLayout>::LAYOUT, <() as KnownLayout>::LAYOUT);
5741
            };
5742
        }
5743
5744
        let layout =
5745
            |offset, align, trailing_slice_elem_size, statically_shallow_unpadded| DstLayout {
5746
                align: NonZeroUsize::new(align).unwrap(),
5747
                size_info: match trailing_slice_elem_size {
5748
                    None => SizeInfo::Sized { size: offset },
5749
                    Some(elem_size) => {
5750
                        SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size })
5751
                    }
5752
                },
5753
                statically_shallow_unpadded,
5754
            };
5755
5756
        test!((), layout(0, 1, None, false));
5757
        test!(u8, layout(1, 1, None, false));
5758
        // Use `align_of` because `u64` alignment may be smaller than 8 on some
5759
        // platforms.
5760
        test!(u64, layout(8, mem::align_of::<u64>(), None, false));
5761
        test!(AU64, layout(8, 8, None, false));
5762
5763
        test!(Option<&'static ()>, usize::LAYOUT);
5764
5765
        test!([()], layout(0, 1, Some(0), true));
5766
        test!([u8], layout(0, 1, Some(1), true));
5767
        test!(str, layout(0, 1, Some(1), true));
5768
    }
5769
5770
    #[cfg(feature = "derive")]
5771
    #[test]
5772
    fn test_known_layout_derive() {
5773
        // In this and other files (`late_compile_pass.rs`,
5774
        // `mid_compile_pass.rs`, and `struct.rs`), we test success and failure
5775
        // modes of `derive(KnownLayout)` for the following combinations of
5776
        // properties:
5777
        //
5778
        // +------------+--------------------------------------+-----------+
5779
        // |            |      trailing field properties       |           |
5780
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5781
        // |------------+----------+----------------+----------+-----------|
5782
        // |          N |        N |              N |        N |      KL00 |
5783
        // |          N |        N |              N |        Y |      KL01 |
5784
        // |          N |        N |              Y |        N |      KL02 |
5785
        // |          N |        N |              Y |        Y |      KL03 |
5786
        // |          N |        Y |              N |        N |      KL04 |
5787
        // |          N |        Y |              N |        Y |      KL05 |
5788
        // |          N |        Y |              Y |        N |      KL06 |
5789
        // |          N |        Y |              Y |        Y |      KL07 |
5790
        // |          Y |        N |              N |        N |      KL08 |
5791
        // |          Y |        N |              N |        Y |      KL09 |
5792
        // |          Y |        N |              Y |        N |      KL10 |
5793
        // |          Y |        N |              Y |        Y |      KL11 |
5794
        // |          Y |        Y |              N |        N |      KL12 |
5795
        // |          Y |        Y |              N |        Y |      KL13 |
5796
        // |          Y |        Y |              Y |        N |      KL14 |
5797
        // |          Y |        Y |              Y |        Y |      KL15 |
5798
        // +------------+----------+----------------+----------+-----------+
5799
5800
        struct NotKnownLayout<T = ()> {
5801
            _t: T,
5802
        }
5803
5804
        #[derive(KnownLayout)]
5805
        #[repr(C)]
5806
        struct AlignSize<const ALIGN: usize, const SIZE: usize>
5807
        where
5808
            elain::Align<ALIGN>: elain::Alignment,
5809
        {
5810
            _align: elain::Align<ALIGN>,
5811
            size: [u8; SIZE],
5812
        }
5813
5814
        type AU16 = AlignSize<2, 2>;
5815
        type AU32 = AlignSize<4, 4>;
5816
5817
        fn _assert_kl<T: ?Sized + KnownLayout>(_: &T) {}
5818
5819
        let sized_layout = |align, size| DstLayout {
5820
            align: NonZeroUsize::new(align).unwrap(),
5821
            size_info: SizeInfo::Sized { size },
5822
            statically_shallow_unpadded: false,
5823
        };
5824
5825
        let unsized_layout = |align, elem_size, offset, statically_shallow_unpadded| DstLayout {
5826
            align: NonZeroUsize::new(align).unwrap(),
5827
            size_info: SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5828
            statically_shallow_unpadded,
5829
        };
5830
5831
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5832
        // |          N |        N |              N |        Y |      KL01 |
5833
        #[allow(dead_code)]
5834
        #[derive(KnownLayout)]
5835
        struct KL01(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5836
5837
        let expected = DstLayout::for_type::<KL01>();
5838
5839
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, expected);
5840
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, sized_layout(4, 8));
5841
5842
        // ...with `align(N)`:
5843
        #[allow(dead_code)]
5844
        #[derive(KnownLayout)]
5845
        #[repr(align(64))]
5846
        struct KL01Align(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5847
5848
        let expected = DstLayout::for_type::<KL01Align>();
5849
5850
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, expected);
5851
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5852
5853
        // ...with `packed`:
5854
        #[allow(dead_code)]
5855
        #[derive(KnownLayout)]
5856
        #[repr(packed)]
5857
        struct KL01Packed(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5858
5859
        let expected = DstLayout::for_type::<KL01Packed>();
5860
5861
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, expected);
5862
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, sized_layout(1, 6));
5863
5864
        // ...with `packed(N)`:
5865
        #[allow(dead_code)]
5866
        #[derive(KnownLayout)]
5867
        #[repr(packed(2))]
5868
        struct KL01PackedN(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5869
5870
        assert_impl_all!(KL01PackedN: KnownLayout);
5871
5872
        let expected = DstLayout::for_type::<KL01PackedN>();
5873
5874
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, expected);
5875
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5876
5877
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5878
        // |          N |        N |              Y |        Y |      KL03 |
5879
        #[allow(dead_code)]
5880
        #[derive(KnownLayout)]
5881
        struct KL03(NotKnownLayout, u8);
5882
5883
        let expected = DstLayout::for_type::<KL03>();
5884
5885
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, expected);
5886
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, sized_layout(1, 1));
5887
5888
        // ... with `align(N)`
5889
        #[allow(dead_code)]
5890
        #[derive(KnownLayout)]
5891
        #[repr(align(64))]
5892
        struct KL03Align(NotKnownLayout<AU32>, u8);
5893
5894
        let expected = DstLayout::for_type::<KL03Align>();
5895
5896
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, expected);
5897
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5898
5899
        // ... with `packed`:
5900
        #[allow(dead_code)]
5901
        #[derive(KnownLayout)]
5902
        #[repr(packed)]
5903
        struct KL03Packed(NotKnownLayout<AU32>, u8);
5904
5905
        let expected = DstLayout::for_type::<KL03Packed>();
5906
5907
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, expected);
5908
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, sized_layout(1, 5));
5909
5910
        // ... with `packed(N)`
5911
        #[allow(dead_code)]
5912
        #[derive(KnownLayout)]
5913
        #[repr(packed(2))]
5914
        struct KL03PackedN(NotKnownLayout<AU32>, u8);
5915
5916
        assert_impl_all!(KL03PackedN: KnownLayout);
5917
5918
        let expected = DstLayout::for_type::<KL03PackedN>();
5919
5920
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, expected);
5921
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5922
5923
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5924
        // |          N |        Y |              N |        Y |      KL05 |
5925
        #[allow(dead_code)]
5926
        #[derive(KnownLayout)]
5927
        struct KL05<T>(u8, T);
5928
5929
        fn _test_kl05<T>(t: T) -> impl KnownLayout {
5930
            KL05(0u8, t)
5931
        }
5932
5933
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5934
        // |          N |        Y |              Y |        Y |      KL07 |
5935
        #[allow(dead_code)]
5936
        #[derive(KnownLayout)]
5937
        struct KL07<T: KnownLayout>(u8, T);
5938
5939
        fn _test_kl07<T: KnownLayout>(t: T) -> impl KnownLayout {
5940
            let _ = KL07(0u8, t);
5941
        }
5942
5943
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5944
        // |          Y |        N |              Y |        N |      KL10 |
5945
        #[allow(dead_code)]
5946
        #[derive(KnownLayout)]
5947
        #[repr(C)]
5948
        struct KL10(NotKnownLayout<AU32>, [u8]);
5949
5950
        let expected = DstLayout::new_zst(None)
5951
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5952
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5953
            .pad_to_align();
5954
5955
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, expected);
5956
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 4, false));
5957
5958
        // ...with `align(N)`:
5959
        #[allow(dead_code)]
5960
        #[derive(KnownLayout)]
5961
        #[repr(C, align(64))]
5962
        struct KL10Align(NotKnownLayout<AU32>, [u8]);
5963
5964
        let repr_align = NonZeroUsize::new(64);
5965
5966
        let expected = DstLayout::new_zst(repr_align)
5967
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5968
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5969
            .pad_to_align();
5970
5971
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, expected);
5972
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, unsized_layout(64, 1, 4, false));
5973
5974
        // ...with `packed`:
5975
        #[allow(dead_code)]
5976
        #[derive(KnownLayout)]
5977
        #[repr(C, packed)]
5978
        struct KL10Packed(NotKnownLayout<AU32>, [u8]);
5979
5980
        let repr_packed = NonZeroUsize::new(1);
5981
5982
        let expected = DstLayout::new_zst(None)
5983
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5984
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5985
            .pad_to_align();
5986
5987
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, expected);
5988
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, unsized_layout(1, 1, 4, false));
5989
5990
        // ...with `packed(N)`:
5991
        #[allow(dead_code)]
5992
        #[derive(KnownLayout)]
5993
        #[repr(C, packed(2))]
5994
        struct KL10PackedN(NotKnownLayout<AU32>, [u8]);
5995
5996
        let repr_packed = NonZeroUsize::new(2);
5997
5998
        let expected = DstLayout::new_zst(None)
5999
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
6000
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
6001
            .pad_to_align();
6002
6003
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, expected);
6004
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4, false));
6005
6006
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6007
        // |          Y |        N |              Y |        Y |      KL11 |
6008
        #[allow(dead_code)]
6009
        #[derive(KnownLayout)]
6010
        #[repr(C)]
6011
        struct KL11(NotKnownLayout<AU64>, u8);
6012
6013
        let expected = DstLayout::new_zst(None)
6014
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
6015
            .extend(<u8 as KnownLayout>::LAYOUT, None)
6016
            .pad_to_align();
6017
6018
        assert_eq!(<KL11 as KnownLayout>::LAYOUT, expected);
6019
        assert_eq!(<KL11 as KnownLayout>::LAYOUT, sized_layout(8, 16));
6020
6021
        // ...with `align(N)`:
6022
        #[allow(dead_code)]
6023
        #[derive(KnownLayout)]
6024
        #[repr(C, align(64))]
6025
        struct KL11Align(NotKnownLayout<AU64>, u8);
6026
6027
        let repr_align = NonZeroUsize::new(64);
6028
6029
        let expected = DstLayout::new_zst(repr_align)
6030
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
6031
            .extend(<u8 as KnownLayout>::LAYOUT, None)
6032
            .pad_to_align();
6033
6034
        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, expected);
6035
        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
6036
6037
        // ...with `packed`:
6038
        #[allow(dead_code)]
6039
        #[derive(KnownLayout)]
6040
        #[repr(C, packed)]
6041
        struct KL11Packed(NotKnownLayout<AU64>, u8);
6042
6043
        let repr_packed = NonZeroUsize::new(1);
6044
6045
        let expected = DstLayout::new_zst(None)
6046
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
6047
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
6048
            .pad_to_align();
6049
6050
        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, expected);
6051
        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, sized_layout(1, 9));
6052
6053
        // ...with `packed(N)`:
6054
        #[allow(dead_code)]
6055
        #[derive(KnownLayout)]
6056
        #[repr(C, packed(2))]
6057
        struct KL11PackedN(NotKnownLayout<AU64>, u8);
6058
6059
        let repr_packed = NonZeroUsize::new(2);
6060
6061
        let expected = DstLayout::new_zst(None)
6062
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
6063
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
6064
            .pad_to_align();
6065
6066
        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, expected);
6067
        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, sized_layout(2, 10));
6068
6069
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6070
        // |          Y |        Y |              Y |        N |      KL14 |
6071
        #[allow(dead_code)]
6072
        #[derive(KnownLayout)]
6073
        #[repr(C)]
6074
        struct KL14<T: ?Sized + KnownLayout>(u8, T);
6075
6076
        fn _test_kl14<T: ?Sized + KnownLayout>(kl: &KL14<T>) {
6077
            _assert_kl(kl)
6078
        }
6079
6080
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
6081
        // |          Y |        Y |              Y |        Y |      KL15 |
6082
        #[allow(dead_code)]
6083
        #[derive(KnownLayout)]
6084
        #[repr(C)]
6085
        struct KL15<T: KnownLayout>(u8, T);
6086
6087
        fn _test_kl15<T: KnownLayout>(t: T) -> impl KnownLayout {
6088
            let _ = KL15(0u8, t);
6089
        }
6090
6091
        // Test a variety of combinations of field types:
6092
        //  - ()
6093
        //  - u8
6094
        //  - AU16
6095
        //  - [()]
6096
        //  - [u8]
6097
        //  - [AU16]
6098
6099
        #[allow(clippy::upper_case_acronyms, dead_code)]
6100
        #[derive(KnownLayout)]
6101
        #[repr(C)]
6102
        struct KLTU<T, U: ?Sized>(T, U);
6103
6104
        assert_eq!(<KLTU<(), ()> as KnownLayout>::LAYOUT, sized_layout(1, 0));
6105
6106
        assert_eq!(<KLTU<(), u8> as KnownLayout>::LAYOUT, sized_layout(1, 1));
6107
6108
        assert_eq!(<KLTU<(), AU16> as KnownLayout>::LAYOUT, sized_layout(2, 2));
6109
6110
        assert_eq!(<KLTU<(), [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 0, false));
6111
6112
        assert_eq!(<KLTU<(), [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0, false));
6113
6114
        assert_eq!(<KLTU<(), [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 0, false));
6115
6116
        assert_eq!(<KLTU<u8, ()> as KnownLayout>::LAYOUT, sized_layout(1, 1));
6117
6118
        assert_eq!(<KLTU<u8, u8> as KnownLayout>::LAYOUT, sized_layout(1, 2));
6119
6120
        assert_eq!(<KLTU<u8, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6121
6122
        assert_eq!(<KLTU<u8, [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 1, false));
6123
6124
        assert_eq!(<KLTU<u8, [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1, false));
6125
6126
        assert_eq!(<KLTU<u8, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2, false));
6127
6128
        assert_eq!(<KLTU<AU16, ()> as KnownLayout>::LAYOUT, sized_layout(2, 2));
6129
6130
        assert_eq!(<KLTU<AU16, u8> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6131
6132
        assert_eq!(<KLTU<AU16, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));
6133
6134
        assert_eq!(<KLTU<AU16, [()]> as KnownLayout>::LAYOUT, unsized_layout(2, 0, 2, false));
6135
6136
        assert_eq!(<KLTU<AU16, [u8]> as KnownLayout>::LAYOUT, unsized_layout(2, 1, 2, false));
6137
6138
        assert_eq!(<KLTU<AU16, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2, false));
6139
6140
        // Test a variety of field counts.
6141
6142
        #[derive(KnownLayout)]
6143
        #[repr(C)]
6144
        struct KLF0;
6145
6146
        assert_eq!(<KLF0 as KnownLayout>::LAYOUT, sized_layout(1, 0));
6147
6148
        #[derive(KnownLayout)]
6149
        #[repr(C)]
6150
        struct KLF1([u8]);
6151
6152
        assert_eq!(<KLF1 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0, true));
6153
6154
        #[derive(KnownLayout)]
6155
        #[repr(C)]
6156
        struct KLF2(NotKnownLayout<u8>, [u8]);
6157
6158
        assert_eq!(<KLF2 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1, false));
6159
6160
        #[derive(KnownLayout)]
6161
        #[repr(C)]
6162
        struct KLF3(NotKnownLayout<u8>, NotKnownLayout<AU16>, [u8]);
6163
6164
        assert_eq!(<KLF3 as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4, false));
6165
6166
        #[derive(KnownLayout)]
6167
        #[repr(C)]
6168
        struct KLF4(NotKnownLayout<u8>, NotKnownLayout<AU16>, NotKnownLayout<AU32>, [u8]);
6169
6170
        assert_eq!(<KLF4 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 8, false));
6171
    }
6172
6173
    #[test]
6174
    fn test_object_safety() {
6175
        fn _takes_no_cell(_: &dyn Immutable) {}
6176
        fn _takes_unaligned(_: &dyn Unaligned) {}
6177
    }
6178
6179
    #[test]
6180
    fn test_from_zeros_only() {
6181
        // Test types that implement `FromZeros` but not `FromBytes`.
6182
6183
        assert!(!bool::new_zeroed());
6184
        assert_eq!(char::new_zeroed(), '\0');
6185
6186
        #[cfg(feature = "alloc")]
6187
        {
6188
            assert_eq!(bool::new_box_zeroed(), Ok(Box::new(false)));
6189
            assert_eq!(char::new_box_zeroed(), Ok(Box::new('\0')));
6190
6191
            assert_eq!(
6192
                <[bool]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6193
                [false, false, false]
6194
            );
6195
            assert_eq!(
6196
                <[char]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
6197
                ['\0', '\0', '\0']
6198
            );
6199
6200
            assert_eq!(bool::new_vec_zeroed(3).unwrap().as_ref(), [false, false, false]);
6201
            assert_eq!(char::new_vec_zeroed(3).unwrap().as_ref(), ['\0', '\0', '\0']);
6202
        }
6203
6204
        let mut string = "hello".to_string();
6205
        let s: &mut str = string.as_mut();
6206
        assert_eq!(s, "hello");
6207
        s.zero();
6208
        assert_eq!(s, "\0\0\0\0\0");
6209
    }
6210
6211
    #[test]
6212
    fn test_zst_count_preserved() {
6213
        // Test that, when an explicit count is provided for a type with a
6214
        // ZST trailing slice element, that count is preserved. This is
6215
        // important since, for such types, all element counts result in objects
6216
        // of the same size, and so the correct behavior is ambiguous. However,
6217
        // preserving the count as requested by the user is the behavior that we
6218
        // document publicly.
6219
6220
        // FromZeros methods
6221
        #[cfg(feature = "alloc")]
6222
        assert_eq!(<[()]>::new_box_zeroed_with_elems(3).unwrap().len(), 3);
6223
        #[cfg(feature = "alloc")]
6224
        assert_eq!(<()>::new_vec_zeroed(3).unwrap().len(), 3);
6225
6226
        // FromBytes methods
6227
        assert_eq!(<[()]>::ref_from_bytes_with_elems(&[][..], 3).unwrap().len(), 3);
6228
        assert_eq!(<[()]>::ref_from_prefix_with_elems(&[][..], 3).unwrap().0.len(), 3);
6229
        assert_eq!(<[()]>::ref_from_suffix_with_elems(&[][..], 3).unwrap().1.len(), 3);
6230
        assert_eq!(<[()]>::mut_from_bytes_with_elems(&mut [][..], 3).unwrap().len(), 3);
6231
        assert_eq!(<[()]>::mut_from_prefix_with_elems(&mut [][..], 3).unwrap().0.len(), 3);
6232
        assert_eq!(<[()]>::mut_from_suffix_with_elems(&mut [][..], 3).unwrap().1.len(), 3);
6233
    }
6234
6235
    #[test]
6236
    fn test_read_write() {
6237
        const VAL: u64 = 0x12345678;
6238
        #[cfg(target_endian = "big")]
6239
        const VAL_BYTES: [u8; 8] = VAL.to_be_bytes();
6240
        #[cfg(target_endian = "little")]
6241
        const VAL_BYTES: [u8; 8] = VAL.to_le_bytes();
6242
        const ZEROS: [u8; 8] = [0u8; 8];
6243
6244
        // Test `FromBytes::{read_from_bytes, read_from_prefix, read_from_suffix}`.
6245
6246
        assert_eq!(u64::read_from_bytes(&VAL_BYTES[..]), Ok(VAL));
6247
        // The first 8 bytes are from `VAL_BYTES` and the second 8 bytes are all
6248
        // zeros.
6249
        let bytes_with_prefix: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6250
        assert_eq!(u64::read_from_prefix(&bytes_with_prefix[..]), Ok((VAL, &ZEROS[..])));
6251
        assert_eq!(u64::read_from_suffix(&bytes_with_prefix[..]), Ok((&VAL_BYTES[..], 0)));
6252
        // The first 8 bytes are all zeros and the second 8 bytes are from
6253
        // `VAL_BYTES`
6254
        let bytes_with_suffix: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6255
        assert_eq!(u64::read_from_prefix(&bytes_with_suffix[..]), Ok((0, &VAL_BYTES[..])));
6256
        assert_eq!(u64::read_from_suffix(&bytes_with_suffix[..]), Ok((&ZEROS[..], VAL)));
6257
6258
        // Test `IntoBytes::{write_to, write_to_prefix, write_to_suffix}`.
6259
6260
        let mut bytes = [0u8; 8];
6261
        assert_eq!(VAL.write_to(&mut bytes[..]), Ok(()));
6262
        assert_eq!(bytes, VAL_BYTES);
6263
        let mut bytes = [0u8; 16];
6264
        assert_eq!(VAL.write_to_prefix(&mut bytes[..]), Ok(()));
6265
        let want: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
6266
        assert_eq!(bytes, want);
6267
        let mut bytes = [0u8; 16];
6268
        assert_eq!(VAL.write_to_suffix(&mut bytes[..]), Ok(()));
6269
        let want: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
6270
        assert_eq!(bytes, want);
6271
    }
6272
6273
    #[test]
6274
    #[cfg(feature = "std")]
6275
    fn test_read_io_with_padding_soundness() {
6276
        // This test is designed to exhibit potential UB in
6277
        // `FromBytes::read_from_io`. (see #2319, #2320).
6278
6279
        // On most platforms (where `align_of::<u16>() == 2`), `WithPadding`
6280
        // will have inter-field padding between `x` and `y`.
6281
        #[derive(FromBytes)]
6282
        #[repr(C)]
6283
        struct WithPadding {
6284
            x: u8,
6285
            y: u16,
6286
        }
6287
        struct ReadsInRead;
6288
        impl std::io::Read for ReadsInRead {
6289
            fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
6290
                // This body branches on every byte of `buf`, ensuring that it
6291
                // exhibits UB if any byte of `buf` is uninitialized.
6292
                if buf.iter().all(|&x| x == 0) {
6293
                    Ok(buf.len())
6294
                } else {
6295
                    buf.iter_mut().for_each(|x| *x = 0);
6296
                    Ok(buf.len())
6297
                }
6298
            }
6299
        }
6300
        assert!(matches!(WithPadding::read_from_io(ReadsInRead), Ok(WithPadding { x: 0, y: 0 })));
6301
    }
6302
6303
    #[test]
6304
    #[cfg(feature = "std")]
6305
    fn test_read_write_io() {
6306
        let mut long_buffer = [0, 0, 0, 0];
6307
        assert!(matches!(u16::MAX.write_to_io(&mut long_buffer[..]), Ok(())));
6308
        assert_eq!(long_buffer, [255, 255, 0, 0]);
6309
        assert!(matches!(u16::read_from_io(&long_buffer[..]), Ok(u16::MAX)));
6310
6311
        let mut short_buffer = [0, 0];
6312
        assert!(u32::MAX.write_to_io(&mut short_buffer[..]).is_err());
6313
        assert_eq!(short_buffer, [255, 255]);
6314
        assert!(u32::read_from_io(&short_buffer[..]).is_err());
6315
    }
6316
6317
    #[test]
6318
    fn test_try_from_bytes_try_read_from() {
6319
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[0]), Ok(false));
6320
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[1]), Ok(true));
6321
6322
        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[0, 2]), Ok((false, &[2][..])));
6323
        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[1, 2]), Ok((true, &[2][..])));
6324
6325
        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 0]), Ok((&[2][..], false)));
6326
        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 1]), Ok((&[2][..], true)));
6327
6328
        // If we don't pass enough bytes, it fails.
6329
        assert!(matches!(
6330
            <u8 as TryFromBytes>::try_read_from_bytes(&[]),
6331
            Err(TryReadError::Size(_))
6332
        ));
6333
        assert!(matches!(
6334
            <u8 as TryFromBytes>::try_read_from_prefix(&[]),
6335
            Err(TryReadError::Size(_))
6336
        ));
6337
        assert!(matches!(
6338
            <u8 as TryFromBytes>::try_read_from_suffix(&[]),
6339
            Err(TryReadError::Size(_))
6340
        ));
6341
6342
        // If we pass too many bytes, it fails.
6343
        assert!(matches!(
6344
            <u8 as TryFromBytes>::try_read_from_bytes(&[0, 0]),
6345
            Err(TryReadError::Size(_))
6346
        ));
6347
6348
        // If we pass an invalid value, it fails.
6349
        assert!(matches!(
6350
            <bool as TryFromBytes>::try_read_from_bytes(&[2]),
6351
            Err(TryReadError::Validity(_))
6352
        ));
6353
        assert!(matches!(
6354
            <bool as TryFromBytes>::try_read_from_prefix(&[2, 0]),
6355
            Err(TryReadError::Validity(_))
6356
        ));
6357
        assert!(matches!(
6358
            <bool as TryFromBytes>::try_read_from_suffix(&[0, 2]),
6359
            Err(TryReadError::Validity(_))
6360
        ));
6361
6362
        // Reading from a misaligned buffer should still succeed. Since `AU64`'s
6363
        // alignment is 8, and since we read from two adjacent addresses one
6364
        // byte apart, it is guaranteed that at least one of them (though
6365
        // possibly both) will be misaligned.
6366
        let bytes: [u8; 9] = [0, 0, 0, 0, 0, 0, 0, 0, 0];
6367
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[..8]), Ok(AU64(0)));
6368
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[1..9]), Ok(AU64(0)));
6369
6370
        assert_eq!(
6371
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[..8]),
6372
            Ok((AU64(0), &[][..]))
6373
        );
6374
        assert_eq!(
6375
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[1..9]),
6376
            Ok((AU64(0), &[][..]))
6377
        );
6378
6379
        assert_eq!(
6380
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[..8]),
6381
            Ok((&[][..], AU64(0)))
6382
        );
6383
        assert_eq!(
6384
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[1..9]),
6385
            Ok((&[][..], AU64(0)))
6386
        );
6387
    }
6388
6389
    #[test]
6390
    fn test_ref_from_mut_from() {
6391
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` success cases
6392
        // Exhaustive coverage for these methods is covered by the `Ref` tests above,
6393
        // which these helper methods defer to.
6394
6395
        let mut buf =
6396
            Align::<[u8; 16], AU64>::new([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);
6397
6398
        assert_eq!(
6399
            AU64::ref_from_bytes(&buf.t[8..]).unwrap().0.to_ne_bytes(),
6400
            [8, 9, 10, 11, 12, 13, 14, 15]
6401
        );
6402
        let suffix = AU64::mut_from_bytes(&mut buf.t[8..]).unwrap();
6403
        suffix.0 = 0x0101010101010101;
6404
        // The `[u8; 9]` is a non-half size of the full buffer, which would catch
6405
        // `from_prefix` having the same implementation as `from_suffix` (issues #506, #511).
6406
        assert_eq!(
6407
            <[u8; 9]>::ref_from_suffix(&buf.t[..]).unwrap(),
6408
            (&[0, 1, 2, 3, 4, 5, 6][..], &[7u8, 1, 1, 1, 1, 1, 1, 1, 1])
6409
        );
6410
        let (prefix, suffix) = AU64::mut_from_suffix(&mut buf.t[1..]).unwrap();
6411
        assert_eq!(prefix, &mut [1u8, 2, 3, 4, 5, 6, 7][..]);
6412
        suffix.0 = 0x0202020202020202;
6413
        let (prefix, suffix) = <[u8; 10]>::mut_from_suffix(&mut buf.t[..]).unwrap();
6414
        assert_eq!(prefix, &mut [0u8, 1, 2, 3, 4, 5][..]);
6415
        suffix[0] = 42;
6416
        assert_eq!(
6417
            <[u8; 9]>::ref_from_prefix(&buf.t[..]).unwrap(),
6418
            (&[0u8, 1, 2, 3, 4, 5, 42, 7, 2], &[2u8, 2, 2, 2, 2, 2, 2][..])
6419
        );
6420
        <[u8; 2]>::mut_from_prefix(&mut buf.t[..]).unwrap().0[1] = 30;
6421
        assert_eq!(buf.t, [0, 30, 2, 3, 4, 5, 42, 7, 2, 2, 2, 2, 2, 2, 2, 2]);
6422
    }
6423
6424
    #[test]
6425
    fn test_ref_from_mut_from_error() {
6426
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` error cases.
6427
6428
        // Fail because the buffer is too large.
6429
        let mut buf = Align::<[u8; 16], AU64>::default();
6430
        // `buf.t` should be aligned to 8, so only the length check should fail.
6431
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6432
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6433
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6434
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6435
6436
        // Fail because the buffer is too small.
6437
        let mut buf = Align::<[u8; 4], AU64>::default();
6438
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
6439
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
6440
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
6441
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
6442
        assert!(AU64::ref_from_prefix(&buf.t[..]).is_err());
6443
        assert!(AU64::mut_from_prefix(&mut buf.t[..]).is_err());
6444
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6445
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6446
        assert!(<[u8; 8]>::ref_from_prefix(&buf.t[..]).is_err());
6447
        assert!(<[u8; 8]>::mut_from_prefix(&mut buf.t[..]).is_err());
6448
        assert!(<[u8; 8]>::ref_from_suffix(&buf.t[..]).is_err());
6449
        assert!(<[u8; 8]>::mut_from_suffix(&mut buf.t[..]).is_err());
6450
6451
        // Fail because the alignment is insufficient.
6452
        let mut buf = Align::<[u8; 13], AU64>::default();
6453
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
6454
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
6455
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
6456
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
6457
        assert!(AU64::ref_from_prefix(&buf.t[1..]).is_err());
6458
        assert!(AU64::mut_from_prefix(&mut buf.t[1..]).is_err());
6459
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
6460
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
6461
    }
6462
6463
    #[test]
6464
    fn test_to_methods() {
6465
        /// Run a series of tests by calling `IntoBytes` methods on `t`.
6466
        ///
6467
        /// `bytes` is the expected byte sequence returned from `t.as_bytes()`
6468
        /// before `t` has been modified. `post_mutation` is the expected
6469
        /// sequence returned from `t.as_bytes()` after `t.as_mut_bytes()[0]`
6470
        /// has had its bits flipped (by applying `^= 0xFF`).
6471
        ///
6472
        /// `N` is the size of `t` in bytes.
6473
        fn test<T: FromBytes + IntoBytes + Immutable + Debug + Eq + ?Sized, const N: usize>(
6474
            t: &mut T,
6475
            bytes: &[u8],
6476
            post_mutation: &T,
6477
        ) {
6478
            // Test that we can access the underlying bytes, and that we get the
6479
            // right bytes and the right number of bytes.
6480
            assert_eq!(t.as_bytes(), bytes);
6481
6482
            // Test that changes to the underlying byte slices are reflected in
6483
            // the original object.
6484
            t.as_mut_bytes()[0] ^= 0xFF;
6485
            assert_eq!(t, post_mutation);
6486
            t.as_mut_bytes()[0] ^= 0xFF;
6487
6488
            // `write_to` rejects slices that are too small or too large.
6489
            assert!(t.write_to(&mut vec![0; N - 1][..]).is_err());
6490
            assert!(t.write_to(&mut vec![0; N + 1][..]).is_err());
6491
6492
            // `write_to` works as expected.
6493
            let mut bytes = [0; N];
6494
            assert_eq!(t.write_to(&mut bytes[..]), Ok(()));
6495
            assert_eq!(bytes, t.as_bytes());
6496
6497
            // `write_to_prefix` rejects slices that are too small.
6498
            assert!(t.write_to_prefix(&mut vec![0; N - 1][..]).is_err());
6499
6500
            // `write_to_prefix` works with exact-sized slices.
6501
            let mut bytes = [0; N];
6502
            assert_eq!(t.write_to_prefix(&mut bytes[..]), Ok(()));
6503
            assert_eq!(bytes, t.as_bytes());
6504
6505
            // `write_to_prefix` works with too-large slices, and any bytes past
6506
            // the prefix aren't modified.
6507
            let mut too_many_bytes = vec![0; N + 1];
6508
            too_many_bytes[N] = 123;
6509
            assert_eq!(t.write_to_prefix(&mut too_many_bytes[..]), Ok(()));
6510
            assert_eq!(&too_many_bytes[..N], t.as_bytes());
6511
            assert_eq!(too_many_bytes[N], 123);
6512
6513
            // `write_to_suffix` rejects slices that are too small.
6514
            assert!(t.write_to_suffix(&mut vec![0; N - 1][..]).is_err());
6515
6516
            // `write_to_suffix` works with exact-sized slices.
6517
            let mut bytes = [0; N];
6518
            assert_eq!(t.write_to_suffix(&mut bytes[..]), Ok(()));
6519
            assert_eq!(bytes, t.as_bytes());
6520
6521
            // `write_to_suffix` works with too-large slices, and any bytes
6522
            // before the suffix aren't modified.
6523
            let mut too_many_bytes = vec![0; N + 1];
6524
            too_many_bytes[0] = 123;
6525
            assert_eq!(t.write_to_suffix(&mut too_many_bytes[..]), Ok(()));
6526
            assert_eq!(&too_many_bytes[1..], t.as_bytes());
6527
            assert_eq!(too_many_bytes[0], 123);
6528
        }
6529
6530
        #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Immutable)]
6531
        #[repr(C)]
6532
        struct Foo {
6533
            a: u32,
6534
            b: Wrapping<u32>,
6535
            c: Option<NonZeroU32>,
6536
        }
6537
6538
        let expected_bytes: Vec<u8> = if cfg!(target_endian = "little") {
6539
            vec![1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
6540
        } else {
6541
            vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0]
6542
        };
6543
        let post_mutation_expected_a =
6544
            if cfg!(target_endian = "little") { 0x00_00_00_FE } else { 0xFF_00_00_01 };
6545
        test::<_, 12>(
6546
            &mut Foo { a: 1, b: Wrapping(2), c: None },
6547
            expected_bytes.as_bytes(),
6548
            &Foo { a: post_mutation_expected_a, b: Wrapping(2), c: None },
6549
        );
6550
        test::<_, 3>(
6551
            Unsized::from_mut_slice(&mut [1, 2, 3]),
6552
            &[1, 2, 3],
6553
            Unsized::from_mut_slice(&mut [0xFE, 2, 3]),
6554
        );
6555
    }
6556
6557
    #[test]
6558
    fn test_array() {
6559
        #[derive(FromBytes, IntoBytes, Immutable)]
6560
        #[repr(C)]
6561
        struct Foo {
6562
            a: [u16; 33],
6563
        }
6564
6565
        let foo = Foo { a: [0xFFFF; 33] };
6566
        let expected = [0xFFu8; 66];
6567
        assert_eq!(foo.as_bytes(), &expected[..]);
6568
    }
6569
6570
    #[test]
6571
    fn test_new_zeroed() {
6572
        assert!(!bool::new_zeroed());
6573
        assert_eq!(u64::new_zeroed(), 0);
6574
        // This test exists in order to exercise unsafe code, especially when
6575
        // running under Miri.
6576
        #[allow(clippy::unit_cmp)]
6577
        {
6578
            assert_eq!(<()>::new_zeroed(), ());
6579
        }
6580
    }
6581
6582
    #[test]
6583
    fn test_transparent_packed_generic_struct() {
6584
        #[derive(IntoBytes, FromBytes, Unaligned)]
6585
        #[repr(transparent)]
6586
        #[allow(dead_code)] // We never construct this type
6587
        struct Foo<T> {
6588
            _t: T,
6589
            _phantom: PhantomData<()>,
6590
        }
6591
6592
        assert_impl_all!(Foo<u32>: FromZeros, FromBytes, IntoBytes);
6593
        assert_impl_all!(Foo<u8>: Unaligned);
6594
6595
        #[derive(IntoBytes, FromBytes, Unaligned)]
6596
        #[repr(C, packed)]
6597
        #[allow(dead_code)] // We never construct this type
6598
        struct Bar<T, U> {
6599
            _t: T,
6600
            _u: U,
6601
        }
6602
6603
        assert_impl_all!(Bar<u8, AU64>: FromZeros, FromBytes, IntoBytes, Unaligned);
6604
    }
6605
6606
    #[cfg(feature = "alloc")]
6607
    mod alloc {
6608
        use super::*;
6609
6610
        #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
6611
        #[test]
6612
        fn test_extend_vec_zeroed() {
6613
            // Test extending when there is an existing allocation.
6614
            let mut v = vec![100u16, 200, 300];
6615
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6616
            assert_eq!(v.len(), 6);
6617
            assert_eq!(&*v, &[100, 200, 300, 0, 0, 0]);
6618
            drop(v);
6619
6620
            // Test extending when there is no existing allocation.
6621
            let mut v: Vec<u64> = Vec::new();
6622
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
6623
            assert_eq!(v.len(), 3);
6624
            assert_eq!(&*v, &[0, 0, 0]);
6625
            drop(v);
6626
        }
6627
6628
        #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
6629
        #[test]
6630
        fn test_extend_vec_zeroed_zst() {
6631
            // Test extending when there is an existing (fake) allocation.
6632
            let mut v = vec![(), (), ()];
6633
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6634
            assert_eq!(v.len(), 6);
6635
            assert_eq!(&*v, &[(), (), (), (), (), ()]);
6636
            drop(v);
6637
6638
            // Test extending when there is no existing (fake) allocation.
6639
            let mut v: Vec<()> = Vec::new();
6640
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
6641
            assert_eq!(&*v, &[(), (), ()]);
6642
            drop(v);
6643
        }
6644
6645
        #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
6646
        #[test]
6647
        fn test_insert_vec_zeroed() {
6648
            // Insert at start (no existing allocation).
6649
            let mut v: Vec<u64> = Vec::new();
6650
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6651
            assert_eq!(v.len(), 2);
6652
            assert_eq!(&*v, &[0, 0]);
6653
            drop(v);
6654
6655
            // Insert at start.
6656
            let mut v = vec![100u64, 200, 300];
6657
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6658
            assert_eq!(v.len(), 5);
6659
            assert_eq!(&*v, &[0, 0, 100, 200, 300]);
6660
            drop(v);
6661
6662
            // Insert at middle.
6663
            let mut v = vec![100u64, 200, 300];
6664
            u64::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6665
            assert_eq!(v.len(), 4);
6666
            assert_eq!(&*v, &[100, 0, 200, 300]);
6667
            drop(v);
6668
6669
            // Insert at end.
6670
            let mut v = vec![100u64, 200, 300];
6671
            u64::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6672
            assert_eq!(v.len(), 4);
6673
            assert_eq!(&*v, &[100, 200, 300, 0]);
6674
            drop(v);
6675
        }
6676
6677
        #[cfg(not(no_zerocopy_panic_in_const_and_vec_try_reserve_1_57_0))]
6678
        #[test]
6679
        fn test_insert_vec_zeroed_zst() {
6680
            // Insert at start (no existing fake allocation).
6681
            let mut v: Vec<()> = Vec::new();
6682
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6683
            assert_eq!(v.len(), 2);
6684
            assert_eq!(&*v, &[(), ()]);
6685
            drop(v);
6686
6687
            // Insert at start.
6688
            let mut v = vec![(), (), ()];
6689
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
6690
            assert_eq!(v.len(), 5);
6691
            assert_eq!(&*v, &[(), (), (), (), ()]);
6692
            drop(v);
6693
6694
            // Insert at middle.
6695
            let mut v = vec![(), (), ()];
6696
            <()>::insert_vec_zeroed(&mut v, 1, 1).unwrap();
6697
            assert_eq!(v.len(), 4);
6698
            assert_eq!(&*v, &[(), (), (), ()]);
6699
            drop(v);
6700
6701
            // Insert at end.
6702
            let mut v = vec![(), (), ()];
6703
            <()>::insert_vec_zeroed(&mut v, 3, 1).unwrap();
6704
            assert_eq!(v.len(), 4);
6705
            assert_eq!(&*v, &[(), (), (), ()]);
6706
            drop(v);
6707
        }
6708
6709
        #[test]
6710
        fn test_new_box_zeroed() {
6711
            assert_eq!(u64::new_box_zeroed(), Ok(Box::new(0)));
6712
        }
6713
6714
        #[test]
6715
        fn test_new_box_zeroed_array() {
6716
            drop(<[u32; 0x1000]>::new_box_zeroed());
6717
        }
6718
6719
        #[test]
6720
        fn test_new_box_zeroed_zst() {
6721
            // This test exists in order to exercise unsafe code, especially
6722
            // when running under Miri.
6723
            #[allow(clippy::unit_cmp)]
6724
            {
6725
                assert_eq!(<()>::new_box_zeroed(), Ok(Box::new(())));
6726
            }
6727
        }
6728
6729
        #[test]
6730
        fn test_new_box_zeroed_with_elems() {
6731
            let mut s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(3).unwrap();
6732
            assert_eq!(s.len(), 3);
6733
            assert_eq!(&*s, &[0, 0, 0]);
6734
            s[1] = 3;
6735
            assert_eq!(&*s, &[0, 3, 0]);
6736
        }
6737
6738
        #[test]
6739
        fn test_new_box_zeroed_with_elems_empty() {
6740
            let s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(0).unwrap();
6741
            assert_eq!(s.len(), 0);
6742
        }
6743
6744
        #[test]
6745
        fn test_new_box_zeroed_with_elems_zst() {
6746
            let mut s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(3).unwrap();
6747
            assert_eq!(s.len(), 3);
6748
            assert!(s.get(10).is_none());
6749
            // This test exists in order to exercise unsafe code, especially
6750
            // when running under Miri.
6751
            #[allow(clippy::unit_cmp)]
6752
            {
6753
                assert_eq!(s[1], ());
6754
            }
6755
            s[2] = ();
6756
        }
6757
6758
        #[test]
6759
        fn test_new_box_zeroed_with_elems_zst_empty() {
6760
            let s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(0).unwrap();
6761
            assert_eq!(s.len(), 0);
6762
        }
6763
6764
        #[test]
6765
        fn new_box_zeroed_with_elems_errors() {
6766
            assert_eq!(<[u16]>::new_box_zeroed_with_elems(usize::MAX), Err(AllocError));
6767
6768
            let max = <usize as core::convert::TryFrom<_>>::try_from(isize::MAX).unwrap();
6769
            assert_eq!(
6770
                <[u16]>::new_box_zeroed_with_elems((max / mem::size_of::<u16>()) + 1),
6771
                Err(AllocError)
6772
            );
6773
        }
6774
    }
6775
}