Coverage Report

Created: 2025-10-12 08:06

next uncovered line (L), next uncovered region (R), next uncovered branch (B)
/rust/registry/src/index.crates.io-1949cf8c6b5b557f/zerocopy-0.8.27/src/lib.rs
Line
Count
Source
1
// Copyright 2018 The Fuchsia Authors
2
//
3
// Licensed under the 2-Clause BSD License <LICENSE-BSD or
4
// https://opensource.org/license/bsd-2-clause>, Apache License, Version 2.0
5
// <LICENSE-APACHE or https://www.apache.org/licenses/LICENSE-2.0>, or the MIT
6
// license <LICENSE-MIT or https://opensource.org/licenses/MIT>, at your option.
7
// This file may not be copied, modified, or distributed except according to
8
// those terms.
9
10
// After updating the following doc comment, make sure to run the following
11
// command to update `README.md` based on its contents:
12
//
13
//   cargo -q run --manifest-path tools/Cargo.toml -p generate-readme > README.md
14
15
//! ***<span style="font-size: 140%">Fast, safe, <span
16
//! style="color:red;">compile error</span>. Pick two.</span>***
17
//!
18
//! Zerocopy makes zero-cost memory manipulation effortless. We write `unsafe`
19
//! so you don't have to.
20
//!
21
//! *For an overview of what's changed from zerocopy 0.7, check out our [release
22
//! notes][release-notes], which include a step-by-step upgrading guide.*
23
//!
24
//! *Have questions? Need more out of zerocopy? Submit a [customer request
25
//! issue][customer-request-issue] or ask the maintainers on
26
//! [GitHub][github-q-a] or [Discord][discord]!*
27
//!
28
//! [customer-request-issue]: https://github.com/google/zerocopy/issues/new/choose
29
//! [release-notes]: https://github.com/google/zerocopy/discussions/1680
30
//! [github-q-a]: https://github.com/google/zerocopy/discussions/categories/q-a
31
//! [discord]: https://discord.gg/MAvWH2R6zk
32
//!
33
//! # Overview
34
//!
35
//! ##### Conversion Traits
36
//!
37
//! Zerocopy provides four derivable traits for zero-cost conversions:
38
//! - [`TryFromBytes`] indicates that a type may safely be converted from
39
//!   certain byte sequences (conditional on runtime checks)
40
//! - [`FromZeros`] indicates that a sequence of zero bytes represents a valid
41
//!   instance of a type
42
//! - [`FromBytes`] indicates that a type may safely be converted from an
43
//!   arbitrary byte sequence
44
//! - [`IntoBytes`] indicates that a type may safely be converted *to* a byte
45
//!   sequence
46
//!
47
//! These traits support sized types, slices, and [slice DSTs][slice-dsts].
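The property that `FromBytes` captures can be illustrated with only `core` (a minimal sketch, not zerocopy's API: `from_ne_bytes` stands in for the generalized byte conversion that the derive verifies per-type):

```rust
// Plain-core sketch of the property `FromBytes` asserts: every 4-byte
// pattern is a valid `u32`, so converting from arbitrary initialized
// bytes can never produce an invalid instance. zerocopy's derive
// checks this property at compile time for each deriving type.
fn main() {
    let bytes = [0xffu8; 4];
    let n = u32::from_ne_bytes(bytes); // safe for u32: no invalid bit patterns
    assert_eq!(n, u32::MAX);
    println!("{n}");
}
```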
48
//!
49
//! [slice-dsts]: KnownLayout#dynamically-sized-types
50
//!
51
//! ##### Marker Traits
52
//!
53
//! Zerocopy provides three derivable marker traits that do not provide any
54
//! functionality themselves, but are required to call certain methods provided
55
//! by the conversion traits:
56
//! - [`KnownLayout`] indicates that zerocopy can reason about certain layout
57
//!   qualities of a type
58
//! - [`Immutable`] indicates that a type is free from interior mutability,
59
//!   except by ownership or an exclusive (`&mut`) borrow
60
//! - [`Unaligned`] indicates that a type's alignment requirement is 1
61
//!
62
//! You should generally derive these marker traits whenever possible.
63
//!
64
//! ##### Conversion Macros
65
//!
66
//! Zerocopy provides six macros for safe casting between types:
67
//!
68
//! - ([`try_`][try_transmute])[`transmute`] (conditionally) converts a value of
69
//!   one type to a value of another type of the same size
70
//! - ([`try_`][try_transmute_mut])[`transmute_mut`] (conditionally) converts a
71
//!   mutable reference of one type to a mutable reference of another type of
72
//!   the same size
73
//! - ([`try_`][try_transmute_ref])[`transmute_ref`] (conditionally) converts a
74
//!   mutable or immutable reference of one type to an immutable reference of
75
//!   another type of the same size
76
//!
77
//! These macros perform *compile-time* size and alignment checks, meaning that
78
//! unconditional casts have zero cost at runtime. Conditional casts do not need
79
//! to validate size or alignment at runtime, but do need to validate contents.
80
//!
81
//! These macros cannot be used in generic contexts. For generic conversions,
82
//! use the methods defined by the [conversion traits](#conversion-traits).
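The compile-time size check behind these macros can be sketched with only `core` (a hypothetical standalone example, not the macros' actual expansion; `from_le_bytes` stands in for the checked conversion):

```rust
// Sketch of the compile-time size check performed by `transmute!`:
// a cast between types is only permitted when their sizes match, and
// once they do, the unconditional conversion costs nothing at runtime.
use core::mem::size_of;

fn main() {
    // The macro rejects mismatched sizes at compile time; the same
    // property can be asserted in a const context:
    const _: () = assert!(size_of::<[u8; 4]>() == size_of::<u32>());

    // With sizes equal, the value conversion is zero-cost. The safe
    // `from_le_bytes` stands in here for the checked transmute.
    let n = u32::from_le_bytes([0x02, 0x01, 0x00, 0x00]);
    assert_eq!(n, 0x0102);
    println!("{n:#x}");
}
```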
83
//!
84
//! ##### Byteorder-Aware Numerics
85
//!
86
//! Zerocopy provides byte-order aware integer types that support these
87
//! conversions; see the [`byteorder`] module. These types are especially useful
88
//! for network parsing.
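The problem these types solve can be seen with only `core` (a hedged sketch; `U32<BigEndian>` is zerocopy's type name, while the explicit `from_be_bytes` read below is plain-core illustration):

```rust
// Network data arrives big-endian; reading it with native endianness
// would give the wrong value on little-endian hosts. zerocopy's
// byteorder-aware types (e.g. `U32<BigEndian>`) keep bytes in wire
// order and convert on access, so they can live directly in packet
// structs derived with the conversion traits.
fn main() {
    let wire = [0x00u8, 0x00, 0x01, 0x02]; // 258 encoded big-endian
    let value = u32::from_be_bytes(wire); // explicit byte-order read
    assert_eq!(value, 258);
    println!("{value}");
}
```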
89
//!
90
//! # Cargo Features
91
//!
92
//! - **`alloc`**
93
//!   By default, `zerocopy` is `no_std`. When the `alloc` feature is enabled,
94
//!   the `alloc` crate is added as a dependency, and some allocation-related
95
//!   functionality is added.
96
//!
97
//! - **`std`**
98
//!   By default, `zerocopy` is `no_std`. When the `std` feature is enabled, the
99
//!   `std` crate is added as a dependency (i.e., `no_std` is disabled), and
100
//!   support for some `std` types is added. `std` implies `alloc`.
101
//!
102
//! - **`derive`**
103
//!   Provides derives for the core marker traits via the `zerocopy-derive`
104
//!   crate. These derives are re-exported from `zerocopy`, so it is not
105
//!   necessary to depend on `zerocopy-derive` directly.
106
//!
107
//!   However, you may experience better compile times if you instead directly
108
//!   depend on both `zerocopy` and `zerocopy-derive` in your `Cargo.toml`,
109
//!   since doing so will allow Rust to compile these crates in parallel. To do
110
//!   so, do *not* enable the `derive` feature, and list both dependencies in
111
//!   your `Cargo.toml` with the same leading non-zero version number; e.g.:
112
//!
113
//!   ```toml
114
//!   [dependencies]
115
//!   zerocopy = "0.X"
116
//!   zerocopy-derive = "0.X"
117
//!   ```
118
//!
119
//!   To avoid the risk of [duplicate import errors][duplicate-import-errors] if
120
//!   one of your dependencies enables zerocopy's `derive` feature, import
121
//!   derives as `use zerocopy_derive::*` rather than by name (e.g., `use
122
//!   zerocopy_derive::FromBytes`).
123
//!
124
//! - **`simd`**
125
//!   When the `simd` feature is enabled, `FromZeros`, `FromBytes`, and
126
//!   `IntoBytes` impls are emitted for all stable SIMD types which exist on the
127
//!   target platform. Note that the layout of SIMD types is not yet stabilized,
128
//!   so these impls may be removed in the future if layout changes make them
129
//!   invalid. For more information, see the Unsafe Code Guidelines Reference
130
//!   page on the [layout of packed SIMD vectors][simd-layout].
131
//!
132
//! - **`simd-nightly`**
133
//!   Enables the `simd` feature and adds support for SIMD types which are only
134
//!   available on nightly. Since these types are unstable, support for any type
135
//!   may be removed at any point in the future.
136
//!
137
//! - **`float-nightly`**
138
//!   Adds support for the unstable `f16` and `f128` types. These types are
139
//!   not yet fully implemented and may not be supported on all platforms.
140
//!
141
//! [duplicate-import-errors]: https://github.com/google/zerocopy/issues/1587
142
//! [simd-layout]: https://rust-lang.github.io/unsafe-code-guidelines/layout/packed-simd-vectors.html
143
//!
144
//! # Security Ethos
145
//!
146
//! Zerocopy is expressly designed for use in security-critical contexts. We
147
//! strive to ensure that zerocopy code is sound under Rust's current
148
//! memory model, and *any future memory model*. We ensure this by:
149
//! - **...not 'guessing' about Rust's semantics.**
150
//!   We annotate `unsafe` code with a precise rationale for its soundness that
151
//!   cites a relevant section of Rust's official documentation. When Rust's
152
//!   documented semantics are unclear, we work with the Rust Operational
153
//!   Semantics Team to clarify Rust's documentation.
154
//! - **...rigorously testing our implementation.**
155
//!   We run tests using [Miri], ensuring that zerocopy is sound across a wide
156
//!   array of supported target platforms of varying endianness and pointer
157
//!   width, and across both current and experimental memory models of Rust.
158
//! - **...formally proving the correctness of our implementation.**
159
//!   We apply formal verification tools like [Kani][kani] to prove zerocopy's
160
//!   correctness.
161
//!
162
//! For more information, see our full [soundness policy].
163
//!
164
//! [Miri]: https://github.com/rust-lang/miri
165
//! [Kani]: https://github.com/model-checking/kani
166
//! [soundness policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#soundness
167
//!
168
//! # Relationship to Project Safe Transmute
169
//!
170
//! [Project Safe Transmute] is an official initiative of the Rust Project to
171
//! develop language-level support for safer transmutation. The Project consults
172
//! with crates like zerocopy to identify aspects of safer transmutation that
173
//! would benefit from compiler support, and has developed an [experimental,
174
//! compiler-supported analysis][mcp-transmutability] which determines whether,
175
//! for a given type, any value of that type may be soundly transmuted into
176
//! another type. Once this functionality is sufficiently mature, zerocopy
177
//! intends to replace its internal transmutability analysis (implemented by our
178
//! custom derives) with the compiler-supported one. This change will likely be
179
//! an implementation detail that is invisible to zerocopy's users.
180
//!
181
//! Project Safe Transmute will not replace the need for most of zerocopy's
182
//! higher-level abstractions. The experimental compiler analysis is a tool for
183
//! checking the soundness of `unsafe` code, not a tool to avoid writing
184
//! `unsafe` code altogether. For the foreseeable future, crates like zerocopy
185
//! will still be required in order to provide higher-level abstractions on top
186
//! of the building block provided by Project Safe Transmute.
187
//!
188
//! [Project Safe Transmute]: https://rust-lang.github.io/rfcs/2835-project-safe-transmute.html
189
//! [mcp-transmutability]: https://github.com/rust-lang/compiler-team/issues/411
190
//!
191
//! # MSRV
192
//!
193
//! See our [MSRV policy].
194
//!
195
//! [MSRV policy]: https://github.com/google/zerocopy/blob/main/POLICIES.md#msrv
196
//!
197
//! # Changelog
198
//!
199
//! Zerocopy uses [GitHub Releases].
200
//!
201
//! [GitHub Releases]: https://github.com/google/zerocopy/releases
202
//!
203
//! # Thanks
204
//!
205
//! Zerocopy is maintained by engineers at Google and Amazon with help from
206
//! [many wonderful contributors][contributors]. Thank you to everyone who has
207
//! lent a hand in making Rust a little more secure!
208
//!
209
//! [contributors]: https://github.com/google/zerocopy/graphs/contributors
210
211
// Sometimes we want to use lints which were added after our MSRV.
212
// `unknown_lints` is `warn` by default and we deny warnings in CI, so without
213
// this attribute, any unknown lint would cause a CI failure when testing with
214
// our MSRV.
215
#![allow(unknown_lints, non_local_definitions, unreachable_patterns)]
216
#![deny(renamed_and_removed_lints)]
217
#![deny(
218
    anonymous_parameters,
219
    deprecated_in_future,
220
    late_bound_lifetime_arguments,
221
    missing_copy_implementations,
222
    missing_debug_implementations,
223
    missing_docs,
224
    path_statements,
225
    patterns_in_fns_without_body,
226
    rust_2018_idioms,
227
    trivial_numeric_casts,
228
    unreachable_pub,
229
    unsafe_op_in_unsafe_fn,
230
    unused_extern_crates,
231
    // We intentionally choose not to deny `unused_qualifications`. When items
232
    // are added to the prelude (e.g., `core::mem::size_of`), this has the
233
    // consequence of making some uses trigger this lint on the latest toolchain
234
    // (e.g., `mem::size_of`), but fixing it (e.g. by replacing with `size_of`)
235
    // does not work on older toolchains.
236
    //
237
    // We tested a more complicated fix in #1413, but ultimately decided that,
238
    // since this lint is just a minor style lint, the complexity isn't worth it
239
    // - it's fine to occasionally have unused qualifications slip through,
240
    // especially since these do not affect our user-facing API in any way.
241
    variant_size_differences
242
)]
243
#![cfg_attr(
244
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
245
    deny(fuzzy_provenance_casts, lossy_provenance_casts)
246
)]
247
#![deny(
248
    clippy::all,
249
    clippy::alloc_instead_of_core,
250
    clippy::arithmetic_side_effects,
251
    clippy::as_underscore,
252
    clippy::assertions_on_result_states,
253
    clippy::as_conversions,
254
    clippy::correctness,
255
    clippy::dbg_macro,
256
    clippy::decimal_literal_representation,
257
    clippy::double_must_use,
258
    clippy::get_unwrap,
259
    clippy::indexing_slicing,
260
    clippy::missing_inline_in_public_items,
261
    clippy::missing_safety_doc,
262
    clippy::must_use_candidate,
263
    clippy::must_use_unit,
264
    clippy::obfuscated_if_else,
265
    clippy::perf,
266
    clippy::print_stdout,
267
    clippy::return_self_not_must_use,
268
    clippy::std_instead_of_core,
269
    clippy::style,
270
    clippy::suspicious,
271
    clippy::todo,
272
    clippy::undocumented_unsafe_blocks,
273
    clippy::unimplemented,
274
    clippy::unnested_or_patterns,
275
    clippy::unwrap_used,
276
    clippy::use_debug
277
)]
278
// `clippy::incompatible_msrv` (implied by `clippy::suspicious`): This sometimes
279
// has false positives, and we test on our MSRV in CI, so it doesn't help us
280
// anyway.
281
#![allow(clippy::needless_lifetimes, clippy::type_complexity, clippy::incompatible_msrv)]
282
#![deny(
283
    rustdoc::bare_urls,
284
    rustdoc::broken_intra_doc_links,
285
    rustdoc::invalid_codeblock_attributes,
286
    rustdoc::invalid_html_tags,
287
    rustdoc::invalid_rust_codeblocks,
288
    rustdoc::missing_crate_level_docs,
289
    rustdoc::private_intra_doc_links
290
)]
291
// In test code, it makes sense to weight more heavily towards concise, readable
292
// code over correct or debuggable code.
293
#![cfg_attr(any(test, kani), allow(
294
    // In tests, you get line numbers and have access to source code, so panic
295
    // messages are less important. You also often unwrap a lot, which would
296
    // make expect'ing instead very verbose.
297
    clippy::unwrap_used,
298
    // In tests, there's no harm to "panic risks" - the worst that can happen is
299
    // that your test will fail, and you'll fix it. By contrast, panic risks in
300
    // production code introduce the possibly of code panicking unexpectedly "in
301
    // the field".
302
    clippy::arithmetic_side_effects,
303
    clippy::indexing_slicing,
304
))]
305
#![cfg_attr(not(any(test, kani, feature = "std")), no_std)]
306
// NOTE: This attribute should have the effect of causing CI to fail if
307
// `stdarch_x86_avx512` - which is currently stable in 1.89.0-nightly as of this
308
// writing on 2025-06-10 - has its stabilization rolled back.
309
//
310
// FIXME(#2583): Remove once `stdarch_x86_avx512` is stabilized in 1.89.0, and
311
// 1.89.0 has been released as stable.
312
#![cfg_attr(
313
    all(feature = "simd-nightly", any(target_arch = "x86", target_arch = "x86_64")),
314
    expect(stable_features)
315
)]
316
// FIXME(#2583): Remove once `stdarch_x86_avx512` is stabilized in 1.89.0, and
317
// 1.89.0 has been released as stable. Replace with version detection for 1.89.0
318
// (see #2574 for a draft implementation).
319
#![cfg_attr(
320
    all(feature = "simd-nightly", any(target_arch = "x86", target_arch = "x86_64")),
321
    feature(stdarch_x86_avx512)
322
)]
323
#![cfg_attr(
324
    all(feature = "simd-nightly", target_arch = "arm"),
325
    feature(stdarch_arm_dsp, stdarch_arm_neon_intrinsics)
326
)]
327
#![cfg_attr(
328
    all(feature = "simd-nightly", any(target_arch = "powerpc", target_arch = "powerpc64")),
329
    feature(stdarch_powerpc)
330
)]
331
#![cfg_attr(feature = "float-nightly", feature(f16, f128))]
332
#![cfg_attr(doc_cfg, feature(doc_cfg))]
333
#![cfg_attr(
334
    __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS,
335
    feature(layout_for_ptr, coverage_attribute)
336
)]
337
338
// This is a hack to allow zerocopy-derive derives to work in this crate. They
339
// assume that zerocopy is linked as an extern crate, so they access items from
340
// it as `zerocopy::Xxx`. This makes that still work.
341
#[cfg(any(feature = "derive", test))]
342
extern crate self as zerocopy;
343
344
#[doc(hidden)]
345
#[macro_use]
346
pub mod util;
347
348
pub mod byte_slice;
349
pub mod byteorder;
350
mod deprecated;
351
352
#[doc(hidden)]
353
pub mod doctests;
354
355
// This module is `pub` so that zerocopy's error types and error handling
356
// documentation is grouped together in a cohesive module. In practice, we
357
// expect most users to use the re-export of `error`'s items to avoid identifier
358
// stuttering.
359
pub mod error;
360
mod impls;
361
#[doc(hidden)]
362
pub mod layout;
363
mod macros;
364
#[doc(hidden)]
365
pub mod pointer;
366
mod r#ref;
367
mod split_at;
368
// FIXME(#252): If we make this pub, come up with a better name.
369
mod wrappers;
370
371
use core::{
372
    cell::{Cell, UnsafeCell},
373
    cmp::Ordering,
374
    fmt::{self, Debug, Display, Formatter},
375
    hash::Hasher,
376
    marker::PhantomData,
377
    mem::{self, ManuallyDrop, MaybeUninit as CoreMaybeUninit},
378
    num::{
379
        NonZeroI128, NonZeroI16, NonZeroI32, NonZeroI64, NonZeroI8, NonZeroIsize, NonZeroU128,
380
        NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU8, NonZeroUsize, Wrapping,
381
    },
382
    ops::{Deref, DerefMut},
383
    ptr::{self, NonNull},
384
    slice,
385
};
386
#[cfg(feature = "std")]
387
use std::io;
388
389
use crate::pointer::invariant::{self, BecauseExclusive};
390
pub use crate::{
391
    byte_slice::*,
392
    byteorder::*,
393
    error::*,
394
    r#ref::*,
395
    split_at::{Split, SplitAt},
396
    wrappers::*,
397
};
398
399
#[cfg(any(feature = "alloc", test, kani))]
400
extern crate alloc;
401
#[cfg(any(feature = "alloc", test))]
402
use alloc::{boxed::Box, vec::Vec};
403
#[cfg(any(feature = "alloc", test))]
404
use core::alloc::Layout;
405
406
use util::MetadataOf;
407
408
// Used by `KnownLayout`.
409
#[doc(hidden)]
410
pub use crate::layout::*;
411
// Used by `TryFromBytes::is_bit_valid`.
412
#[doc(hidden)]
413
pub use crate::pointer::{invariant::BecauseImmutable, Maybe, Ptr};
414
// For each trait polyfill, as soon as the corresponding feature is stable, the
415
// polyfill import will be unused because method/function resolution will prefer
416
// the inherent method/function over a trait method/function. Thus, we suppress
417
// the `unused_imports` warning.
418
//
419
// See the documentation on `util::polyfills` for more information.
420
#[allow(unused_imports)]
421
use crate::util::polyfills::{self, NonNullExt as _, NumExt as _};
422
423
#[rustversion::nightly]
424
#[cfg(all(test, not(__ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS)))]
425
const _: () = {
426
    #[deprecated = "some tests may be skipped due to missing RUSTFLAGS=\"--cfg __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS\""]
427
    const _WARNING: () = ();
428
    #[warn(deprecated)]
429
    _WARNING
430
};
431
432
// These exist so that code which was written against the old names will get
433
// less confusing error messages when they upgrade to a more recent version of
434
// zerocopy. On our MSRV toolchain, the error messages read, for example:
435
//
436
//   error[E0603]: trait `FromZeroes` is private
437
//       --> examples/deprecated.rs:1:15
438
//        |
439
//   1    | use zerocopy::FromZeroes;
440
//        |               ^^^^^^^^^^ private trait
441
//        |
442
//   note: the trait `FromZeroes` is defined here
443
//       --> /Users/josh/workspace/zerocopy/src/lib.rs:1845:5
444
//        |
445
//   1845 | use FromZeros as FromZeroes;
446
//        |     ^^^^^^^^^^^^^^^^^^^^^^^
447
//
448
// The "note" provides enough context to make it easy to figure out how to fix
449
// the error.
450
/// Implements [`KnownLayout`].
451
///
452
/// This derive analyzes various aspects of a type's layout that are needed for
453
/// some of zerocopy's APIs. It can be applied to structs, enums, and unions;
454
/// e.g.:
455
///
456
/// ```
457
/// # use zerocopy_derive::KnownLayout;
458
/// #[derive(KnownLayout)]
459
/// struct MyStruct {
460
/// # /*
461
///     ...
462
/// # */
463
/// }
464
///
465
/// #[derive(KnownLayout)]
466
/// enum MyEnum {
467
/// #   V00,
468
/// # /*
469
///     ...
470
/// # */
471
/// }
472
///
473
/// #[derive(KnownLayout)]
474
/// union MyUnion {
475
/// #   variant: u8,
476
/// # /*
477
///     ...
478
/// # */
479
/// }
480
/// ```
481
///
482
/// # Limitations
483
///
484
/// This derive cannot currently be applied to unsized structs without an
485
/// explicit `repr` attribute.
486
///
487
/// Some invocations of this derive run afoul of a [known bug] in Rust's type
488
/// privacy checker. For example, this code:
489
///
490
/// ```compile_fail,E0446
491
/// use zerocopy::*;
492
/// # use zerocopy_derive::*;
493
///
494
/// #[derive(KnownLayout)]
495
/// #[repr(C)]
496
/// pub struct PublicType {
497
///     leading: Foo,
498
///     trailing: Bar,
499
/// }
500
///
501
/// #[derive(KnownLayout)]
502
/// struct Foo;
503
///
504
/// #[derive(KnownLayout)]
505
/// struct Bar;
506
/// ```
507
///
508
/// ...results in a compilation error:
509
///
510
/// ```text
511
/// error[E0446]: private type `Bar` in public interface
512
///  --> examples/bug.rs:3:10
513
///    |
514
/// 3  | #[derive(KnownLayout)]
515
///    |          ^^^^^^^^^^^ can't leak private type
516
/// ...
517
/// 14 | struct Bar;
518
///    | ---------- `Bar` declared as private
519
///    |
520
///    = note: this error originates in the derive macro `KnownLayout` (in Nightly builds, run with -Z macro-backtrace for more info)
521
/// ```
522
///
523
/// This issue arises when `#[derive(KnownLayout)]` is applied to `repr(C)`
524
/// structs whose trailing field type is less public than the enclosing struct.
525
///
526
/// To work around this, mark the trailing field type `pub` and annotate it with
527
/// `#[doc(hidden)]`; e.g.:
528
///
529
/// ```no_run
530
/// use zerocopy::*;
531
/// # use zerocopy_derive::*;
532
///
533
/// #[derive(KnownLayout)]
534
/// #[repr(C)]
535
/// pub struct PublicType {
536
///     leading: Foo,
537
///     trailing: Bar,
538
/// }
539
///
540
/// #[derive(KnownLayout)]
541
/// struct Foo;
542
///
543
/// #[doc(hidden)]
544
/// #[derive(KnownLayout)]
545
/// pub struct Bar; // <- `Bar` is now also `pub`
546
/// ```
547
///
548
/// [known bug]: https://github.com/rust-lang/rust/issues/45713
549
#[cfg(any(feature = "derive", test))]
550
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
551
pub use zerocopy_derive::KnownLayout;
552
#[allow(unused)]
553
use {FromZeros as FromZeroes, IntoBytes as AsBytes, Ref as LayoutVerified};
554
555
/// Indicates that zerocopy can reason about certain aspects of a type's layout.
556
///
557
/// This trait is required by many of zerocopy's APIs. It supports sized types,
558
/// slices, and [slice DSTs](#dynamically-sized-types).
559
///
560
/// # Implementation
561
///
562
/// **Do not implement this trait yourself!** Instead, use
563
/// [`#[derive(KnownLayout)]`][derive]; e.g.:
564
///
565
/// ```
566
/// # use zerocopy_derive::KnownLayout;
567
/// #[derive(KnownLayout)]
568
/// struct MyStruct {
569
/// # /*
570
///     ...
571
/// # */
572
/// }
573
///
574
/// #[derive(KnownLayout)]
575
/// enum MyEnum {
576
/// # /*
577
///     ...
578
/// # */
579
/// }
580
///
581
/// #[derive(KnownLayout)]
582
/// union MyUnion {
583
/// #   variant: u8,
584
/// # /*
585
///     ...
586
/// # */
587
/// }
588
/// ```
589
///
590
/// This derive performs a sophisticated analysis to deduce the layout
591
/// characteristics of types. You **must** implement this trait via the derive.
592
///
593
/// # Dynamically-sized types
594
///
595
/// `KnownLayout` supports slice-based dynamically sized types ("slice DSTs").
596
///
597
/// A slice DST is a type whose trailing field is either a slice or another
598
/// slice DST, rather than a type with fixed size. For example:
599
///
600
/// ```
601
/// #[repr(C)]
602
/// struct PacketHeader {
603
/// # /*
604
///     ...
605
/// # */
606
/// }
607
///
608
/// #[repr(C)]
609
/// struct Packet {
610
///     header: PacketHeader,
611
///     body: [u8],
612
/// }
613
/// ```
614
///
615
/// It can be useful to think of slice DSTs as a generalization of slices - in
616
/// other words, a normal slice is just the special case of a slice DST with
617
/// zero leading fields. In particular:
618
/// - Like slices, slice DSTs can have different lengths at runtime
619
/// - Like slices, slice DSTs cannot be passed by-value, but only by reference
620
///   or via other indirection such as `Box`
621
/// - Like slices, a reference (or `Box`, or other pointer type) to a slice DST
622
///   encodes the number of elements in the trailing slice field
623
///
624
/// ## Slice DST layout
625
///
626
/// Just like other composite Rust types, the layout of a slice DST is not
627
/// well-defined unless it is specified using an explicit `#[repr(...)]`
628
/// attribute such as `#[repr(C)]`. [Other representations are
629
/// supported][reprs], but in this section, we'll use `#[repr(C)]` as our
630
/// example.
631
///
632
/// A `#[repr(C)]` slice DST is laid out [just like sized `#[repr(C)]`
633
/// types][repr-c-structs], but the presence of a variable-length field
634
/// introduces the possibility of *dynamic padding*. In particular, it may be
635
/// necessary to add trailing padding *after* the trailing slice field in order
636
/// to satisfy the outer type's alignment, and the amount of padding required
637
/// may be a function of the length of the trailing slice field. This is just a
638
/// natural consequence of the normal `#[repr(C)]` rules applied to slice DSTs,
639
/// but it can result in surprising behavior. For example, consider the
640
/// following type:
641
///
642
/// ```
643
/// #[repr(C)]
644
/// struct Foo {
645
///     a: u32,
646
///     b: u8,
647
///     z: [u16],
648
/// }
649
/// ```
650
///
651
/// Assuming that `u32` has alignment 4 (this is not true on all platforms),
652
/// then `Foo` has alignment 4 as well. Here is the smallest possible value for
653
/// `Foo`:
654
///
655
/// ```text
656
/// byte offset | 01234567
657
///       field | aaaab---
658
///                    ><
659
/// ```
660
///
661
/// In this value, `z` has length 0. Abiding by `#[repr(C)]`, the lowest offset
662
/// that we can place `z` at is 5, but since `z` has alignment 2, we need to
663
/// round up to offset 6. This means that there is one byte of padding between
664
/// `b` and `z`, then 0 bytes of `z` itself (denoted `><` in this diagram), and
665
/// then two bytes of padding after `z` in order to satisfy the overall
666
/// alignment of `Foo`. The size of this instance is 8 bytes.
667
///
668
/// What about if `z` has length 1?
669
///
670
/// ```text
671
/// byte offset | 01234567
672
///       field | aaaab-zz
673
/// ```
674
///
675
/// In this instance, `z` has length 1, and thus takes up 2 bytes. That means
676
/// that we no longer need padding after `z` in order to satisfy `Foo`'s
677
/// alignment. We've now seen two different values of `Foo` with two different
678
/// lengths of `z`, but they both have the same size - 8 bytes.
679
///
680
/// What about if `z` has length 2?
681
///
682
/// ```text
683
/// byte offset | 012345678901
684
///       field | aaaab-zzzz--
685
/// ```
686
///
687
/// Now `z` has length 2, and thus takes up 4 bytes. This brings our un-padded
688
/// size to 10, and so we now need another 2 bytes of padding after `z` to
689
/// satisfy `Foo`'s alignment.
690
///
691
/// Again, all of this is just a logical consequence of the `#[repr(C)]` rules
692
/// applied to slice DSTs, but it can be surprising that the amount of trailing
693
/// padding becomes a function of the trailing slice field's length, and thus
694
/// can only be computed at runtime.
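The padding arithmetic walked through above can be sketched as a plain function (`foo_size` is a hypothetical helper, not part of zerocopy's API, and it assumes `u32` has alignment 4):

```rust
// Size of `Foo { a: u32, b: u8, z: [u16] }` for a given trailing-slice
// length, following the #[repr(C)] rules described above.
fn foo_size(len: usize) -> usize {
    let align = 4; // Foo's alignment: the max of its fields' alignments
    let z_offset = 6; // offset 5 rounded up to `u16`'s alignment of 2
    let unpadded = z_offset + 2 * len; // each `u16` element is 2 bytes
    (unpadded + align - 1) / align * align // round up to Foo's alignment
}

fn main() {
    assert_eq!(foo_size(0), 8); // 1 byte of padding before `z`, 2 after
    assert_eq!(foo_size(1), 8); // the trailing padding is consumed
    assert_eq!(foo_size(2), 12); // 2 bytes of trailing padding again
    println!("ok");
}
```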
695
///
696
/// [reprs]: https://doc.rust-lang.org/reference/type-layout.html#representations
697
/// [repr-c-structs]: https://doc.rust-lang.org/reference/type-layout.html#reprc-structs
698
///
699
/// ## What is a valid size?
700
///
701
/// There are two places in zerocopy's API where we refer to "a valid size" of a
702
/// type. In normal casts or conversions, where the source is a byte slice, we
703
/// need to know whether the source byte slice is a valid size of the
704
/// destination type. In prefix or suffix casts, we need to know whether *there
705
/// exists* a valid size of the destination type which fits in the source byte
706
/// slice and, if so, what the largest such size is.
707
///
708
/// As outlined above, a slice DST's size is defined by the number of elements
709
/// in its trailing slice field. However, there is not necessarily a 1-to-1
710
/// mapping between trailing slice field length and overall size. As we saw in
711
/// the previous section with the type `Foo`, instances with both 0 and 1
712
/// elements in the trailing `z` field result in a `Foo` whose size is 8 bytes.
713
///
714
/// When we say "x is a valid size of `T`", we mean one of two things:
715
/// - If `T: Sized`, then we mean that `x == size_of::<T>()`
716
/// - If `T` is a slice DST, then we mean that there exists a `len` such that the instance of
717
///   `T` with `len` trailing slice elements has size `x`
718
///
719
/// When we say "largest possible size of `T` that fits in a byte slice", we
720
/// mean one of two things:
721
/// - If `T: Sized`, then we mean `size_of::<T>()` if the byte slice is at least
722
///   `size_of::<T>()` bytes long
723
/// - If `T` is a slice DST, then we mean to consider all values, `len`, such
724
///   that the instance of `T` with `len` trailing slice elements fits in the
725
///   byte slice, and to choose the largest such `len`, if any
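The "largest possible size that fits" search can be sketched for the `Foo` type from the previous section (`foo_size` and `largest_len` are hypothetical helpers, assuming `u32` has alignment 4; zerocopy's prefix/suffix casts compute this directly rather than by linear search):

```rust
// Size of `Foo { a: u32, b: u8, z: [u16] }` with `len` trailing
// elements: `z` at offset 6, total rounded up to alignment 4.
fn foo_size(len: usize) -> usize {
    (6 + 2 * len + 3) / 4 * 4
}

// Largest `len` such that an instance of `Foo` with `len` trailing
// slice elements fits in a byte slice of length `bytes`, if any.
fn largest_len(bytes: usize) -> Option<usize> {
    (0..=bytes).rev().find(|&len| foo_size(len) <= bytes)
}

fn main() {
    assert_eq!(largest_len(7), None); // even `len == 0` needs 8 bytes
    assert_eq!(largest_len(8), Some(1)); // len 0 and len 1 both have size 8
    assert_eq!(largest_len(12), Some(3)); // 6 + 2*3 = 12, already aligned
    println!("ok");
}
```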
726
///
727
///
728
/// # Safety
729
///
730
/// This trait does not convey any safety guarantees to code outside this crate.
731
///
732
/// You must not rely on the `#[doc(hidden)]` internals of `KnownLayout`. Future
733
/// releases of zerocopy may make backwards-breaking changes to these items,
734
/// including changes that only affect soundness, which may cause code which
735
/// uses those items to silently become unsound.
736
///
737
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::KnownLayout")]
738
#[cfg_attr(
739
    not(feature = "derive"),
740
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.KnownLayout.html"),
741
)]
742
#[cfg_attr(
743
    zerocopy_diagnostic_on_unimplemented_1_78_0,
744
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(KnownLayout)]` to `{Self}`")
745
)]
746
pub unsafe trait KnownLayout {
747
    // The `Self: Sized` bound makes it so that `KnownLayout` can still be
748
    // object safe. It's not currently object safe thanks to `const LAYOUT`, and
749
    // it likely won't be in the future, but there's no reason not to be
750
    // forwards-compatible with object safety.
751
    #[doc(hidden)]
752
    fn only_derive_is_allowed_to_implement_this_trait()
753
    where
754
        Self: Sized;
755
756
    /// The type of metadata stored in a pointer to `Self`.
    ///
    /// This is `()` for sized types and `usize` for slice DSTs.
    type PointerMetadata: PointerMetadata;

    /// A maybe-uninitialized analog of `Self`.
    ///
    /// # Safety
    ///
    /// `Self::LAYOUT` and `Self::MaybeUninit::LAYOUT` are identical.
    /// `Self::MaybeUninit` admits uninitialized bytes in all positions.
    #[doc(hidden)]
    type MaybeUninit: ?Sized + KnownLayout<PointerMetadata = Self::PointerMetadata>;

    /// The layout of `Self`.
    ///
    /// # Safety
    ///
    /// Callers may assume that `LAYOUT` accurately reflects the layout of
    /// `Self`. In particular:
    /// - `LAYOUT.align` is equal to `Self`'s alignment
    /// - If `Self: Sized`, then `LAYOUT.size_info == SizeInfo::Sized { size }`
    ///   where `size == size_of::<Self>()`
    /// - If `Self` is a slice DST, then `LAYOUT.size_info ==
    ///   SizeInfo::SliceDst(slice_layout)` where:
    ///   - The size, `size`, of an instance of `Self` with `elems` trailing
    ///     slice elements is equal to `slice_layout.offset +
    ///     slice_layout.elem_size * elems` rounded up to the nearest multiple
    ///     of `LAYOUT.align`
    ///   - For such an instance, any bytes in the range `[slice_layout.offset +
    ///     slice_layout.elem_size * elems, size)` are padding and must not be
    ///     assumed to be initialized
    #[doc(hidden)]
    const LAYOUT: DstLayout;

    /// SAFETY: The returned pointer has the same address and provenance as
    /// `bytes`. If `Self` is a DST, the returned pointer's referent has `elems`
    /// elements in its trailing slice.
    #[doc(hidden)]
    fn raw_from_ptr_len(bytes: NonNull<u8>, meta: Self::PointerMetadata) -> NonNull<Self>;

    /// Extracts the metadata from a pointer to `Self`.
    ///
    /// # Safety
    ///
    /// `pointer_to_metadata` always returns the correct metadata stored in
    /// `ptr`.
    #[doc(hidden)]
    fn pointer_to_metadata(ptr: *mut Self) -> Self::PointerMetadata;

    /// Computes the length of the byte range addressed by `ptr`.
    ///
    /// Returns `None` if the resulting length would not fit in a `usize`.
    ///
    /// # Safety
    ///
    /// Callers may assume that `size_of_val_raw` always returns the correct
    /// size.
    ///
    /// Callers may assume that, if `ptr` addresses a byte range whose length
    /// fits in a `usize`, this will return `Some`.
    #[doc(hidden)]
    #[must_use]
    #[inline(always)]
    fn size_of_val_raw(ptr: NonNull<Self>) -> Option<usize> {
        let meta = Self::pointer_to_metadata(ptr.as_ptr());
        // SAFETY: `size_for_metadata` promises to only return `None` if the
        // resulting size would not fit in a `usize`.
        Self::size_for_metadata(meta)
    }

    #[doc(hidden)]
    #[must_use]
    #[inline(always)]
    fn raw_dangling() -> NonNull<Self> {
        let meta = Self::PointerMetadata::from_elem_count(0);
        Self::raw_from_ptr_len(NonNull::dangling(), meta)
    }

    /// Computes the size of an object of type `Self` with the given pointer
    /// metadata.
    ///
    /// # Safety
    ///
    /// `size_for_metadata` promises to return `None` if and only if the
    /// resulting size would not fit in a `usize`. Note that the returned size
    /// could exceed the actual maximum valid size of an allocated object,
    /// `isize::MAX`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::KnownLayout;
    ///
    /// assert_eq!(u8::size_for_metadata(()), Some(1));
    /// assert_eq!(u16::size_for_metadata(()), Some(2));
    /// assert_eq!(<[u8]>::size_for_metadata(42), Some(42));
    /// assert_eq!(<[u16]>::size_for_metadata(42), Some(84));
    ///
    /// // This size exceeds the maximum valid object size (`isize::MAX`):
    /// assert_eq!(<[u8]>::size_for_metadata(usize::MAX), Some(usize::MAX));
    ///
    /// // This size, if computed, would exceed `usize::MAX`:
    /// assert_eq!(<[u16]>::size_for_metadata(usize::MAX), None);
    /// ```
    #[inline(always)]
    fn size_for_metadata(meta: Self::PointerMetadata) -> Option<usize> {
        meta.size_for_metadata(Self::LAYOUT)
    }
}
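The `LAYOUT` documentation above gives the slice-DST size formula: `slice_layout.offset + slice_layout.elem_size * elems`, rounded up to the nearest multiple of `LAYOUT.align`. The following standalone sketch models that arithmetic; the function names and the example `offset`/`elem_size`/`align` values are illustrative, not zerocopy's actual internals.

```rust
// Sketch of the slice-DST size formula documented on `LAYOUT`:
// size = offset + elem_size * elems, rounded up to a multiple of `align`.

fn round_up_to_align(size: usize, align: usize) -> usize {
    // `align` must be a power of two, as Rust alignments always are.
    (size + align - 1) & !(align - 1)
}

fn slice_dst_size(offset: usize, elem_size: usize, align: usize, elems: usize) -> usize {
    round_up_to_align(offset + elem_size * elems, align)
}

fn main() {
    // For a hypothetical `#[repr(C)] struct Packet { tag: u8, body: [u16] }`:
    // offset = 2 (one `u8` plus one padding byte), elem_size = 2, align = 2.
    assert_eq!(slice_dst_size(2, 2, 2, 3), 8);
    // A trailing slice of zero elements still rounds up to the alignment.
    assert_eq!(slice_dst_size(1, 4, 4, 0), 4);
    println!("ok");
}
```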

/// Efficiently produces the [`TrailingSliceLayout`] of `T`.
#[inline(always)]
pub(crate) fn trailing_slice_layout<T>() -> TrailingSliceLayout
where
    T: ?Sized + KnownLayout<PointerMetadata = usize>,
{
    trait LayoutFacts {
        const SIZE_INFO: TrailingSliceLayout;
    }

    impl<T: ?Sized> LayoutFacts for T
    where
        T: KnownLayout<PointerMetadata = usize>,
    {
        const SIZE_INFO: TrailingSliceLayout = match T::LAYOUT.size_info {
            crate::SizeInfo::Sized { .. } => const_panic!("unreachable"),
            crate::SizeInfo::SliceDst(info) => info,
        };
    }

    T::SIZE_INFO
}

/// The metadata associated with a [`KnownLayout`] type.
#[doc(hidden)]
pub trait PointerMetadata: Copy + Eq + Debug {
    /// Constructs a `Self` from an element count.
    ///
    /// If `Self = ()`, this returns `()`. If `Self = usize`, this returns
    /// `elems`. No other types are currently supported.
    fn from_elem_count(elems: usize) -> Self;

    /// Computes the size of the object with the given layout and pointer
    /// metadata.
    ///
    /// # Panics
    ///
    /// If `Self = ()`, `layout` must describe a sized type. If `Self = usize`,
    /// `layout` must describe a slice DST. Otherwise, `size_for_metadata` may
    /// panic.
    ///
    /// # Safety
    ///
    /// `size_for_metadata` promises to only return `None` if the resulting size
    /// would not fit in a `usize`.
    fn size_for_metadata(self, layout: DstLayout) -> Option<usize>;
}

impl PointerMetadata for () {
    #[inline]
    #[allow(clippy::unused_unit)]
    fn from_elem_count(_elems: usize) -> () {}

    #[inline]
    fn size_for_metadata(self, layout: DstLayout) -> Option<usize> {
        match layout.size_info {
            SizeInfo::Sized { size } => Some(size),
            // NOTE: This branch is unreachable, but we return `None` rather
            // than `unreachable!()` to avoid generating panic paths.
            SizeInfo::SliceDst(_) => None,
        }
    }
}

impl PointerMetadata for usize {
    #[inline]
    fn from_elem_count(elems: usize) -> usize {
        elems
    }

    #[inline]
    fn size_for_metadata(self, layout: DstLayout) -> Option<usize> {
        match layout.size_info {
            SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }) => {
                let slice_len = elem_size.checked_mul(self)?;
                let without_padding = offset.checked_add(slice_len)?;
                without_padding.checked_add(util::padding_needed_for(without_padding, layout.align))
            }
            // NOTE: This branch is unreachable, but we return `None` rather
            // than `unreachable!()` to avoid generating panic paths.
            SizeInfo::Sized { .. } => None,
        }
    }
}
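The `usize` impl above threads every step through checked arithmetic so that a size which cannot fit in a `usize` yields `None` instead of wrapping. A small standalone model of that behavior (the padding step here is a plain power-of-two round-up, standing in for `util::padding_needed_for`; all names are illustrative):

```rust
// Model of `<usize as PointerMetadata>::size_for_metadata`'s overflow
// handling: checked_mul/checked_add propagate `None` on overflow.

fn checked_slice_size(offset: usize, elem_size: usize, align: usize, elems: usize) -> Option<usize> {
    let slice_len = elem_size.checked_mul(elems)?;
    let without_padding = offset.checked_add(slice_len)?;
    // Padding needed to reach the next multiple of `align` (a power of two).
    let padding = without_padding.wrapping_neg() & (align - 1);
    without_padding.checked_add(padding)
}

fn main() {
    // `[u16]` with 42 elements: 84 bytes, mirroring the doc example above.
    assert_eq!(checked_slice_size(0, 2, 2, 42), Some(84));
    // `[u16]` with `usize::MAX` elements overflows the multiplication.
    assert_eq!(checked_slice_size(0, 2, 2, usize::MAX), None);
    println!("ok");
}
```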

// SAFETY: Delegates safety to `DstLayout::for_slice`.
unsafe impl<T> KnownLayout for [T] {
    #[allow(clippy::missing_inline_in_public_items, dead_code)]
    #[cfg_attr(
        all(coverage_nightly, __ZEROCOPY_INTERNAL_USE_ONLY_NIGHTLY_FEATURES_IN_TESTS),
        coverage(off)
    )]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized,
    {
    }

    type PointerMetadata = usize;

    // SAFETY: `CoreMaybeUninit<T>::LAYOUT` and `T::LAYOUT` are identical
    // because `CoreMaybeUninit<T>` has the same size and alignment as `T` [1].
    // Consequently, `[CoreMaybeUninit<T>]::LAYOUT` and `[T]::LAYOUT` are
    // identical, because they both lack a fixed-sized prefix and because they
    // inherit the alignments of their inner element type (which are identical)
    // [2][3].
    //
    // `[CoreMaybeUninit<T>]` admits uninitialized bytes at all positions
    // because `CoreMaybeUninit<T>` admits uninitialized bytes at all positions
    // and because the inner elements of `[CoreMaybeUninit<T>]` are laid out
    // back-to-back [2][3].
    //
    // [1] Per https://doc.rust-lang.org/1.81.0/std/mem/union.MaybeUninit.html#layout-1:
    //
    //   `MaybeUninit<T>` is guaranteed to have the same size, alignment, and ABI as
    //   `T`
    //
    // [2] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#slice-layout:
    //
    //   Slices have the same layout as the section of the array they slice.
    //
    // [3] Per https://doc.rust-lang.org/1.82.0/reference/type-layout.html#array-layout:
    //
    //   An array of `[T; N]` has a size of `size_of::<T>() * N` and the same
    //   alignment of `T`. Arrays are laid out so that the zero-based `nth`
    //   element of the array is offset from the start of the array by `n *
    //   size_of::<T>()` bytes.
    type MaybeUninit = [CoreMaybeUninit<T>];

    const LAYOUT: DstLayout = DstLayout::for_slice::<T>();

    // SAFETY: `.cast` preserves address and provenance. The returned pointer
    // refers to an object with `elems` elements by construction.
    #[inline(always)]
    fn raw_from_ptr_len(data: NonNull<u8>, elems: usize) -> NonNull<Self> {
        // FIXME(#67): Remove this allow. See NonNullExt for more details.
        #[allow(unstable_name_collisions)]
        NonNull::slice_from_raw_parts(data.cast::<T>(), elems)
    }

    #[inline(always)]
    fn pointer_to_metadata(ptr: *mut [T]) -> usize {
        #[allow(clippy::as_conversions)]
        let slc = ptr as *const [()];

        // SAFETY:
        // - `()` has alignment 1, so `slc` is trivially aligned.
        // - `slc` was derived from a non-null pointer.
        // - The size is 0 regardless of the length, so it is sound to
        //   materialize a reference regardless of location.
        // - By invariant, `self.ptr` has valid provenance.
        let slc = unsafe { &*slc };

        // This is correct because the preceding `as` cast preserves the number
        // of slice elements. [1]
        //
        // [1] Per https://doc.rust-lang.org/reference/expressions/operator-expr.html#pointer-to-pointer-cast:
        //
        //   For slice types like `[T]` and `[U]`, the raw pointer types `*const
        //   [T]`, `*mut [T]`, `*const [U]`, and `*mut [U]` encode the number of
        //   elements in this slice. Casts between these raw pointer types
        //   preserve the number of elements. ... The same holds for `str` and
        //   any compound type whose unsized tail is a slice type, such as
        //   struct `Foo(i32, [u8])` or `(u64, Foo)`.
        slc.len()
    }
}
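The reference rule that `pointer_to_metadata` relies on, that casts between raw slice-pointer types preserve the element count, can be observed directly. A minimal sketch using the same `*const [()]` trick as the impl above:

```rust
// Casting `*const [u16]` to `*const [()]` preserves the element count (3),
// even though `u16` and `()` have different sizes.

fn main() {
    let s: &[u16] = &[1, 2, 3];
    let ptr = s as *const [u16];

    #[allow(clippy::as_conversions)]
    let slc = ptr as *const [()];

    // SAFETY: `()` is zero-sized with alignment 1, so materializing this
    // reference is sound regardless of address, as in `pointer_to_metadata`.
    let len = unsafe { (&*slc).len() };
    assert_eq!(len, 3);
    println!("ok");
}
```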

#[rustfmt::skip]
impl_known_layout!(
    (),
    u8, i8, u16, i16, u32, i32, u64, i64, u128, i128, usize, isize, f32, f64,
    bool, char,
    NonZeroU8, NonZeroI8, NonZeroU16, NonZeroI16, NonZeroU32, NonZeroI32,
    NonZeroU64, NonZeroI64, NonZeroU128, NonZeroI128, NonZeroUsize, NonZeroIsize
);
#[rustfmt::skip]
#[cfg(feature = "float-nightly")]
impl_known_layout!(
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
    f16,
    #[cfg_attr(doc_cfg, doc(cfg(feature = "float-nightly")))]
    f128
);
#[rustfmt::skip]
impl_known_layout!(
    T         => Option<T>,
    T: ?Sized => PhantomData<T>,
    T         => Wrapping<T>,
    T         => CoreMaybeUninit<T>,
    T: ?Sized => *const T,
    T: ?Sized => *mut T,
    T: ?Sized => &'_ T,
    T: ?Sized => &'_ mut T,
);
impl_known_layout!(const N: usize, T => [T; N]);

// SAFETY: `str` has the same representation as `[u8]`. `ManuallyDrop<T>` [1],
// `UnsafeCell<T>` [2], and `Cell<T>` [3] have the same representation as `T`.
//
// [1] Per https://doc.rust-lang.org/1.85.0/std/mem/struct.ManuallyDrop.html:
//
//   `ManuallyDrop<T>` is guaranteed to have the same layout and bit validity as
//   `T`
//
// [2] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.UnsafeCell.html#memory-layout:
//
//   `UnsafeCell<T>` has the same in-memory representation as its inner type
//   `T`.
//
// [3] Per https://doc.rust-lang.org/1.85.0/core/cell/struct.Cell.html#memory-layout:
//
//   `Cell<T>` has the same in-memory representation as `T`.
const _: () = unsafe {
    unsafe_impl_known_layout!(
        #[repr([u8])]
        str
    );
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] ManuallyDrop<T>);
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] UnsafeCell<T>);
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T)] Cell<T>);
};
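The layout guarantees cited in the safety comment above are observable with `size_of` and `align_of`. A few spot-checks (a sanity sketch, not a substitute for the documented guarantees themselves):

```rust
// Spot-checks: `ManuallyDrop<T>`, `UnsafeCell<T>`, and `Cell<T>` match `T`'s
// size and alignment, and `str`'s bytes are exactly a `[u8]`.
use core::cell::{Cell, UnsafeCell};
use core::mem::{align_of, size_of, ManuallyDrop};

fn main() {
    assert_eq!(size_of::<ManuallyDrop<u64>>(), size_of::<u64>());
    assert_eq!(align_of::<ManuallyDrop<u64>>(), align_of::<u64>());
    assert_eq!(size_of::<UnsafeCell<u32>>(), size_of::<u32>());
    assert_eq!(size_of::<Cell<u16>>(), size_of::<u16>());

    let s: &str = "abc";
    assert_eq!(s.as_bytes(), &[b'a', b'b', b'c']);
    println!("ok");
}
```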

// SAFETY:
// - By consequence of the invariant on `T::MaybeUninit` that `T::LAYOUT` and
//   `T::MaybeUninit::LAYOUT` are equal, `T` and `T::MaybeUninit` have the same:
//   - Fixed prefix size
//   - Alignment
//   - (For DSTs) trailing slice element size
// - By consequence of the above, referents `T::MaybeUninit` and `T` require
//   the same kind of pointer metadata, and thus it is valid to perform an `as`
//   cast from `*mut T` to `*mut T::MaybeUninit`; this operation preserves
//   referent size (ie, `size_of_val_raw`).
const _: () = unsafe {
    unsafe_impl_known_layout!(T: ?Sized + KnownLayout => #[repr(T::MaybeUninit)] MaybeUninit<T>)
};

/// Analyzes whether a type is [`FromZeros`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `FromZeros` and implements `FromZeros` and its
/// supertraits if it is sound to do so. This derive can be applied to structs,
/// enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromZeros, Immutable};
/// #[derive(FromZeros)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@FromZeros#safety
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `FromZeros` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `FromZeros` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `FromZeros` for that type:
///
/// - If the type is a struct, all of its fields must be `FromZeros`.
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - It must have a variant with a discriminant/tag of `0`. See [the
///     reference] for a description of how discriminant values are specified.
///   - The fields of that variant must be `FromZeros`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `FromZeros`, and must *not* rely on the
/// implementation details of this derive.
///
/// [the reference]: https://doc.rust-lang.org/reference/items/enumerations.html#custom-discriminant-values-for-fieldless-enumerations
///
/// ## Why isn't an explicit representation required for structs?
///
/// Neither this derive, nor the [safety conditions] of `FromZeros`, requires
/// that structs are marked with `#[repr(C)]`.
///
/// Per the [Rust reference][reference],
///
/// > The representation of a type can change the padding between fields, but
/// > does not change the layout of the fields themselves.
///
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
///
/// Since the layout of structs only consists of padding bytes and field bytes,
/// a struct is soundly `FromZeros` if:
/// 1. its padding is soundly `FromZeros`, and
/// 2. its fields are soundly `FromZeros`.
///
/// The answer to the first question is always yes: padding bytes do not have
/// any validity constraints. A [discussion] of this question in the Unsafe Code
/// Guidelines Working Group concluded that it would be virtually unimaginable
/// for future versions of rustc to add validity constraints to padding bytes.
///
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
///
/// Whether a struct is soundly `FromZeros` therefore solely depends on whether
/// its fields are `FromZeros`.
// FIXME(#146): Document why we don't require an enum to have an explicit `repr`
// attribute.
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::FromZeros;
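Operationally, `FromZeros` asserts that an all-zeros byte sequence is a valid instance of the type. For primitives this can be sketched with safe byte conversions; this mirrors the property the derive checks, and does not use the derive machinery itself:

```rust
// An all-zeros byte sequence is a valid instance of these types, and it is
// the zero value of each.

fn main() {
    let zeros = [0u8; 4];
    assert_eq!(u32::from_ne_bytes(zeros), 0);
    assert_eq!(f32::from_ne_bytes(zeros), 0.0);

    // `bool` is also `FromZeros`: the all-zeros byte pattern is `false`.
    let zero_byte = 0u8;
    assert_eq!(zero_byte != 0, false);
    println!("ok");
}
```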
/// Analyzes whether a type is [`Immutable`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `Immutable` and implements `Immutable` if it is
/// sound to do so. This derive can be applied to structs, enums, and unions;
/// e.g.:
///
/// ```
/// # use zerocopy_derive::Immutable;
/// #[derive(Immutable)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `Immutable` for a given type.
/// Unless you are modifying the implementation of this derive, you don't need
/// to read this section.*
///
/// If a type has the following properties, then this derive can implement
/// `Immutable` for that type:
///
/// - All fields must be `Immutable`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `Immutable`, and must *not* rely on the
/// implementation details of this derive.
///
/// [safety conditions]: trait@Immutable#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::Immutable;

/// Types which are free from interior mutability.
///
/// `T: Immutable` indicates that `T` does not permit interior mutation, except
/// by ownership or an exclusive (`&mut`) borrow.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(Immutable)]`][derive] (requires the `derive` Cargo feature);
/// e.g.:
///
/// ```
/// # use zerocopy_derive::Immutable;
/// #[derive(Immutable)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// enum MyEnum {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `Immutable`.
///
/// # Safety
///
/// Unsafe code outside of this crate must not make any assumptions about `T`
/// based on `T: Immutable`. We reserve the right to relax the requirements for
/// `Immutable` in the future, and if unsafe code outside of this crate makes
/// assumptions based on `T: Immutable`, future relaxations may cause that code
/// to become unsound.
///
// # Safety (Internal)
//
// If `T: Immutable`, unsafe code *inside of this crate* may assume that, given
// `t: &T`, `t` does not contain any [`UnsafeCell`]s at any byte location
// within the byte range addressed by `t`. This includes ranges of length 0
// (e.g., `UnsafeCell<()>` and `[UnsafeCell<u8>; 0]`). If a type which violates
// these assumptions implements `Immutable`, it may cause this crate to exhibit
// [undefined behavior].
//
// [`UnsafeCell`]: core::cell::UnsafeCell
// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::Immutable",
    doc = "[derive-analysis]: zerocopy_derive::Immutable#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Immutable.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Immutable)]` to `{Self}`")
)]
pub unsafe trait Immutable {
    // The `Self: Sized` bound makes it so that `Immutable` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;
}
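Why `Immutable` matters can be seen with `Cell`, which permits mutation through a shared (`&`) reference. A type containing a `Cell` therefore cannot be `Immutable`; this sketch just demonstrates the interior mutability itself:

```rust
// `Cell` allows mutation through a shared borrow: the definition of
// interior mutability that `Immutable` rules out.
use std::cell::Cell;

fn main() {
    let c = Cell::new(1u8);
    let shared: &Cell<u8> = &c;
    shared.set(2); // interior mutation through a shared reference
    assert_eq!(c.get(), 2);
    println!("ok");
}
```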

/// Implements [`TryFromBytes`].
///
/// This derive synthesizes the runtime checks required to check whether a
/// sequence of initialized bytes corresponds to a valid instance of a type.
/// This derive can be applied to structs, enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{TryFromBytes, Immutable};
/// #[derive(TryFromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(TryFromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// # Portability
///
/// To ensure consistent endianness for enums with multi-byte representations,
/// explicitly specify and convert each discriminant using `.to_le()` or
/// `.to_be()`; e.g.:
///
/// ```
/// # use zerocopy_derive::TryFromBytes;
/// // `DataStoreVersion` is encoded in little-endian.
/// #[derive(TryFromBytes)]
/// #[repr(u32)]
/// pub enum DataStoreVersion {
///     /// Version 1 of the data store.
///     V1 = 9u32.to_le(),
///
///     /// Version 2 of the data store.
///     V2 = 10u32.to_le(),
/// }
/// ```
///
/// [safety conditions]: trait@TryFromBytes#safety
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::TryFromBytes;
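The kind of check this derive synthesizes can be written by hand for a simple case. For `bool`, only the bit patterns `0` and `1` of the underlying `u8` are valid; `try_bool_from_byte` is an illustrative name, not a zerocopy API:

```rust
// Hand-written analog of the validity check `#[derive(TryFromBytes)]`
// synthesizes for `bool`: of a `u8`'s 256 bit patterns, only `0` and `1`
// are valid `bool`s.

fn try_bool_from_byte(byte: u8) -> Option<bool> {
    match byte {
        0 => Some(false),
        1 => Some(true),
        _ => None, // any other bit pattern would be UB if produced as `bool`
    }
}

fn main() {
    assert_eq!(try_bool_from_byte(0), Some(false));
    assert_eq!(try_bool_from_byte(1), Some(true));
    assert_eq!(try_bool_from_byte(2), None);
    println!("ok");
}
```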
1389
1390
/// Types for which some bit patterns are valid.
1391
///
1392
/// A memory region of the appropriate length which contains initialized bytes
1393
/// can be viewed as a `TryFromBytes` type so long as the runtime value of those
1394
/// bytes corresponds to a [*valid instance*] of that type. For example,
1395
/// [`bool`] is `TryFromBytes`, so zerocopy can transmute a [`u8`] into a
1396
/// [`bool`] so long as it first checks that the value of the [`u8`] is `0` or
1397
/// `1`.
1398
///
1399
/// # Implementation
1400
///
1401
/// **Do not implement this trait yourself!** Instead, use
1402
/// [`#[derive(TryFromBytes)]`][derive]; e.g.:
1403
///
1404
/// ```
1405
/// # use zerocopy_derive::{TryFromBytes, Immutable};
1406
/// #[derive(TryFromBytes)]
1407
/// struct MyStruct {
1408
/// # /*
1409
///     ...
1410
/// # */
1411
/// }
1412
///
1413
/// #[derive(TryFromBytes)]
1414
/// #[repr(u8)]
1415
/// enum MyEnum {
1416
/// #   V00,
1417
/// # /*
1418
///     ...
1419
/// # */
1420
/// }
1421
///
1422
/// #[derive(TryFromBytes, Immutable)]
1423
/// union MyUnion {
1424
/// #   variant: u8,
1425
/// # /*
1426
///     ...
1427
/// # */
1428
/// }
1429
/// ```
1430
///
1431
/// This derive ensures that the runtime check of whether bytes correspond to a
1432
/// valid instance is sound. You **must** implement this trait via the derive.
1433
///
1434
/// # What is a "valid instance"?
1435
///
1436
/// In Rust, each type has *bit validity*, which refers to the set of bit
1437
/// patterns which may appear in an instance of that type. It is impossible for
1438
/// safe Rust code to produce values which violate bit validity (ie, values
1439
/// outside of the "valid" set of bit patterns). If `unsafe` code produces an
1440
/// invalid value, this is considered [undefined behavior].
1441
///
1442
/// Rust's bit validity rules are currently being decided, which means that some
1443
/// types have three classes of bit patterns: those which are definitely valid,
1444
/// and whose validity is documented in the language; those which may or may not
1445
/// be considered valid at some point in the future; and those which are
1446
/// definitely invalid.
1447
///
/// Zerocopy takes a conservative approach, and only considers a bit pattern to
/// be valid if its validity is a documented guarantee provided by the
/// language.
///
/// For most use cases, Rust's current guarantees align with programmers'
/// intuitions about what ought to be valid. As a result, zerocopy's
/// conservatism should not affect most users.
///
/// If you are negatively affected by the lack of support for a particular
/// type, we encourage you to let us know by [filing an issue][github-repo].
///
/// # `TryFromBytes` is not symmetrical with [`IntoBytes`]
///
/// There are some types which implement both `TryFromBytes` and [`IntoBytes`],
/// but for which `TryFromBytes` is not guaranteed to accept all byte sequences
/// produced by `IntoBytes`. In other words, for some `T: TryFromBytes +
/// IntoBytes`, there exist values of `t: T` such that
/// `TryFromBytes::try_ref_from_bytes(t.as_bytes())` returns `Err`. Code should
/// not generally assume that values produced by `IntoBytes` will necessarily
/// be accepted as valid by `TryFromBytes`.
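A loose stdlib-only analogy of why the two directions are not symmetric (`Percent`, `to_byte`, and `try_from_byte` are hypothetical names, not zerocopy's API): producing bytes from a value is infallible, but accepting bytes back requires a runtime validity check that may fail.

```rust
// Stdlib-only analogy: serializing is infallible, deserializing validates.
// A `Percent` is only valid in 0..=100, so not every `u8` is accepted back.
#[derive(Debug, PartialEq)]
struct Percent(u8);

impl Percent {
    fn to_byte(&self) -> u8 {
        self.0
    }

    fn try_from_byte(byte: u8) -> Result<Percent, u8> {
        if byte <= 100 {
            Ok(Percent(byte))
        } else {
            Err(byte) // rejected: not a valid instance
        }
    }
}

fn main() {
    assert_eq!(Percent::try_from_byte(Percent(42).to_byte()), Ok(Percent(42)));
    assert!(Percent::try_from_byte(200).is_err());
}
```

The asymmetry zerocopy warns about is subtler (it concerns valid values whose byte encodings are nonetheless rejected), but the defensive posture is the same: always handle the `Err` case when parsing bytes, even bytes you produced yourself.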
///
/// # Safety
///
/// On its own, `T: TryFromBytes` does not make any guarantees about the layout
/// or representation of `T`. It merely provides the ability to perform a
/// validity check at runtime via methods like [`try_ref_from_bytes`].
///
/// You must not rely on the `#[doc(hidden)]` internals of `TryFromBytes`.
/// Future releases of zerocopy may make backwards-breaking changes to these
/// items, including changes that only affect soundness, which may cause code
/// which uses those items to silently become unsound.
///
/// [undefined behavior]: https://raphlinus.github.io/programming/rust/2018/08/17/undefined-behavior.html
/// [github-repo]: https://github.com/google/zerocopy
/// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
/// [*valid instance*]: #what-is-a-valid-instance
#[cfg_attr(feature = "derive", doc = "[derive]: zerocopy_derive::TryFromBytes")]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.TryFromBytes.html"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(TryFromBytes)]` to `{Self}`")
)]
pub unsafe trait TryFromBytes {
    // The `Self: Sized` bound makes it so that `TryFromBytes` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Does a given memory range contain a valid instance of `Self`?
    ///
    /// # Safety
    ///
    /// Unsafe code may assume that, if `is_bit_valid(candidate)` returns true,
    /// `*candidate` contains a valid `Self`.
    ///
    /// # Panics
    ///
    /// `is_bit_valid` may panic. Callers are responsible for ensuring that any
    /// `unsafe` code remains sound even in the face of `is_bit_valid`
    /// panicking. (We support user-defined validation routines; so long as
    /// these routines are not required to be `unsafe`, there is no way to
    /// ensure that these do not generate panics.)
    ///
    /// Besides user-defined validation routines panicking, `is_bit_valid` will
    /// either panic or fail to compile if called on a pointer with [`Shared`]
    /// aliasing when `Self: !Immutable`.
    ///
    /// [`UnsafeCell`]: core::cell::UnsafeCell
    /// [`Shared`]: invariant::Shared
    #[doc(hidden)]
    fn is_bit_valid<A: invariant::Reference>(candidate: Maybe<'_, Self, A>) -> bool;

    /// Attempts to interpret the given `source` as a `&Self`.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a reference to those bytes interpreted as a `Self`. If the
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
    /// `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_bytes(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the byte sequence `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_ref_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    /// assert!(Packet::try_ref_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_bytes(source: &[u8]) -> Result<&Self, TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(None) {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
                // condition will not happen.
                match source.try_into_valid() {
                    Ok(valid) => Ok(valid.as_ref()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
        }
    }

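A stdlib-only sketch of the checks the `Packet` example above relies on (`parse_packet` is a hypothetical helper, not zerocopy's implementation): validate the size and the `0xC0C0` magic, then split the fixed-size header from the trailing 2-byte elements.

```rust
// Stdlib-only sketch, not zerocopy's implementation: the checks that
// `Packet::try_ref_from_bytes` performs for the example above, done by hand.
fn parse_packet(bytes: &[u8]) -> Option<(u8, u8, &[u8])> {
    // Header: magic_number (2 bytes), mug_size (1 byte), temperature (1 byte);
    // the remainder must split evenly into 2-byte marshmallow elements.
    if bytes.len() < 4 || (bytes.len() - 4) % 2 != 0 {
        return None; // not a valid size for `Packet`
    }
    if bytes[0] != 0xC0 || bytes[1] != 0xC0 {
        return None; // invalid `C0C0` magic number
    }
    Some((bytes[2], bytes[3], &bytes[4..]))
}

fn main() {
    let bytes = [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5];
    let (mug_size, temperature, marshmallows) = parse_packet(&bytes).unwrap();
    assert_eq!(mug_size, 240);
    assert_eq!(temperature, 77);
    assert_eq!(marshmallows, [0, 1, 2, 3, 4, 5]);
    // A bad magic number is rejected, mirroring the `is_err()` case above.
    assert!(parse_packet(&[0x10, 0xC0, 240, 77]).is_none());
}
```

Unlike this sketch, the real method performs no parsing: it only validates, then reinterprets the same bytes in place.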
    /// Attempts to interpret the prefix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
    /// if those bytes are not a valid instance of `Self`, this returns `Err`.
    /// If [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_ref_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    /// assert_eq!(suffix, &[6u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_ref_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        try_ref_from_prefix_suffix(source, CastType::Prefix, None)
    }

    /// Attempts to interpret the suffix of the given `source` as a `&Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`. If that suffix is a
    /// valid instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
    /// are insufficient bytes, or if the suffix of `source` would not be
    /// appropriately aligned, or if the suffix is not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::try_ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[0u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
    /// assert!(Packet::try_ref_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        try_ref_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// If the bytes of `source` are a valid instance of `Self`, this method
    /// returns a reference to those bytes interpreted as a `Self`. If the
    /// length of `source` is not a [valid size of `Self`][valid-size], or if
    /// `source` is not appropriately aligned, or if `source` is not a valid
    /// instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5][..];
    ///
    /// let packet = Packet::try_mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    ///
    /// packet.temperature = 111;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_bytes(bytes: &mut [u8]) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(bytes).try_cast_into_no_leftover::<Self, BecauseExclusive>(None) {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since the pointer used here has
                // exclusive aliasing, this panic condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_mut()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&mut
    /// Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`. If that prefix is a valid
    /// instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the remaining bytes. If there
    /// are insufficient bytes, or if `source` is not appropriately aligned, or
    /// if the bytes are not a valid instance of `Self`, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[0, 1], [2, 3], [4, 5]]);
    /// assert_eq!(suffix, &[6u8][..]);
    ///
    /// packet.temperature = 111;
    /// suffix[0] = 222;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 0, 1, 2, 3, 4, 5, 222]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_mut_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        try_mut_from_prefix_suffix(source, CastType::Prefix, None)
    }

    /// Attempts to interpret the suffix of the given `source` as a `&mut
    /// Self`.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`. If that suffix is a
    /// valid instance of `Self`, this method returns a reference to those bytes
    /// interpreted as `Self`, and a reference to the preceding bytes. If there
    /// are insufficient bytes, or if the suffix of `source` would not be
    /// appropriately aligned, or if the suffix is not a valid instance of
    /// `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned], you
    /// can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::try_mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &mut [0, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_mut_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[0u8][..]);
    ///
    /// prefix[0] = 111;
    /// packet.temperature = 222;
    ///
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0x10][..];
    /// assert!(Packet::try_mut_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_suffix(
        source: &mut [u8],
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout + IntoBytes,
    {
        static_assert_dst_is_not_zst!(Self);
        try_mut_from_prefix_suffix(source, CastType::Suffix, None).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&Self` with a DST length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, if `source` is not
    /// appropriately aligned, or if `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let packet = Packet::try_ref_from_bytes_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_bytes_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let zsty = ZSTy::try_ref_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_bytes`]: TryFromBytes::try_ref_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_bytes_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<&Self, TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        match Ptr::from_ref(source).try_cast_into_no_leftover::<Self, BecauseImmutable>(Some(count))
        {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `Self: Immutable`, this panic
                // condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_ref()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_ref).into()),
        }
    }
2200
    /// Attempts to interpret the prefix of the given `source` as a `&Self` with
    /// a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if `source` is not appropriately
    /// aligned, or if the prefix of `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
    ///
    /// let (packet, suffix) = Packet::try_ref_from_prefix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(suffix, &[8u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_prefix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let (zsty, _) = ZSTy::try_ref_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_prefix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&Self, &[u8]), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        try_ref_from_prefix_suffix(source, CastType::Prefix, Some(count))
    }

    /// Attempts to interpret the suffix of the given `source` as a `&Self` with
    /// a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if the suffix of `source` is not
    /// appropriately aligned, or if the suffix of `source` does not contain a
    /// valid instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &[123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_ref_from_suffix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[123u8][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_ref_from_suffix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = 0xCAFEu16.as_bytes();
    /// let (_, zsty) = ZSTy::try_ref_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_ref_from_prefix`]: TryFromBytes::try_ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_ref_from_suffix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&[u8], &Self), TryCastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        try_ref_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
    }

    /// Attempts to interpret the given `source` as a `&mut Self` with a DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, if `source` is not
    /// appropriately aligned, or if `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let packet = Packet::try_mut_from_bytes_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    ///
    /// packet.temperature = 111;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_bytes_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let zsty = ZSTy::try_mut_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_bytes`]: TryFromBytes::try_mut_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_bytes_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<&mut Self, TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        match Ptr::from_mut(source).try_cast_into_no_leftover::<Self, BecauseExclusive>(Some(count))
        {
            Ok(source) => {
                // This call may panic. If that happens, it doesn't cause any soundness
                // issues, as we have not generated any invalid state which we need to
                // fix before returning.
                //
                // Note that one panic or post-monomorphization error condition is
                // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
                // pointer when `Self: !Immutable`. Since `source` is an exclusive
                // pointer, this panic condition will not happen.
                match source.try_into_valid() {
                    Ok(source) => Ok(source.as_mut()),
                    Err(e) => {
                        Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into())
                    }
                }
            }
            Err(e) => Err(e.map_src(Ptr::as_mut).into()),
        }
    }

    /// Attempts to interpret the prefix of the given `source` as a `&mut Self`
    /// with a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if `source` is not appropriately
    /// aligned, or if the prefix of `source` does not contain a valid instance
    /// of `Self`, this returns `Err`. If [`Self: Unaligned`][self-unaligned],
    /// you can [infallibly discard the alignment error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7, 8][..];
    ///
    /// let (packet, suffix) = Packet::try_mut_from_prefix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(suffix, &[8u8][..]);
    ///
    /// packet.temperature = 111;
    /// suffix[0] = 222;
    ///
    /// assert_eq!(bytes, [0xC0, 0xC0, 240, 111, 2, 3, 4, 5, 6, 7, 222]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_prefix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let (zsty, _) = ZSTy::try_mut_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_prefix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut Self, &mut [u8]), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        try_mut_from_prefix_suffix(source, CastType::Prefix, Some(count))
    }

    /// Attempts to interpret the suffix of the given `source` as a `&mut Self`
    /// with a DST length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If the length of `source` is less than the size
    /// of `Self` with `count` elements, if the suffix of `source` is not
    /// appropriately aligned, or if the suffix of `source` does not contain a
    /// valid instance of `Self`, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][ConvertError::from].
    ///
    /// [self-unaligned]: Unaligned
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    ///     marshmallows: [[u8; 2]],
    /// }
    ///
    /// let bytes = &mut [123, 0xC0, 0xC0, 240, 77, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let (prefix, packet) = Packet::try_mut_from_suffix_with_elems(bytes, 3).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(packet.marshmallows, [[2, 3], [4, 5], [6, 7]]);
    /// assert_eq!(prefix, &[123u8][..]);
    ///
    /// prefix[0] = 111;
    /// packet.temperature = 222;
    ///
    /// assert_eq!(bytes, [111, 0xC0, 0xC0, 240, 222, 2, 3, 4, 5, 6, 7]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 77, 240, 0xC0, 0xC0][..];
    /// assert!(Packet::try_mut_from_suffix_with_elems(bytes, 3).is_err());
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`try_mut_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use core::num::NonZeroU16;
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(TryFromBytes, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: NonZeroU16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut src = 0xCAFEu16;
    /// let src = src.as_mut_bytes();
    /// let (_, zsty) = ZSTy::try_mut_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`try_mut_from_prefix`]: TryFromBytes::try_mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn try_mut_from_suffix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut [u8], &mut Self), TryCastError<&mut [u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + IntoBytes,
    {
        try_mut_from_prefix_suffix(source, CastType::Suffix, Some(count)).map(swap)
    }

    /// Attempts to read the given `source` as a `Self`.
    ///
    /// If `source.len() != size_of::<Self>()` or the bytes are not a valid
    /// instance of `Self`, this returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// let bytes = &[0xC0, 0xC0, 240, 77][..];
    ///
    /// let packet = Packet::try_read_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &mut [0x10, 0xC0, 240, 77][..];
    /// assert!(Packet::try_read_from_bytes(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_bytes(source: &[u8]) -> Result<Self, TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let candidate = match CoreMaybeUninit::<Self>::read_from_bytes(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
        // its bytes are initialized.
        unsafe { try_read_from(source, candidate) }
    }

    /// Attempts to read a `Self` from the prefix of the given `source`.
    ///
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any remaining bytes. If
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
    /// of `Self`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0xC0, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    ///
    /// let (packet, suffix) = Packet::try_read_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(suffix, &[0u8, 1, 2, 3, 4, 5, 6][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0x10, 0xC0, 240, 77, 0, 1, 2, 3, 4, 5, 6][..];
    /// assert!(Packet::try_read_from_prefix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let (candidate, suffix) = match CoreMaybeUninit::<Self>::read_from_prefix(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
        // its bytes are initialized.
        unsafe { try_read_from(source, candidate).map(|slf| (slf, suffix)) }
    }

    /// Attempts to read a `Self` from the suffix of the given `source`.
    ///
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
    /// of `source`, returning that `Self` and any preceding bytes. If
    /// `source.len() < size_of::<Self>()` or the bytes are not a valid instance
    /// of `Self`, it returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// # #![allow(non_camel_case_types)] // For C0::xC0
    /// use zerocopy::TryFromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// // The only valid value of this type is the byte `0xC0`
    /// #[derive(TryFromBytes)]
    /// #[repr(u8)]
    /// enum C0 { xC0 = 0xC0 }
    ///
    /// // The only valid value of this type is the bytes `0xC0C0`.
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct C0C0(C0, C0);
    ///
    /// #[derive(TryFromBytes)]
    /// #[repr(C)]
    /// struct Packet {
    ///     magic_number: C0C0,
    ///     mug_size: u8,
    ///     temperature: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0xC0, 0xC0, 240, 77][..];
    ///
    /// let (prefix, packet) = Packet::try_read_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(packet.mug_size, 240);
    /// assert_eq!(packet.temperature, 77);
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    ///
    /// // These bytes are not a valid instance of `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 0x10, 0xC0, 240, 77][..];
    /// assert!(Packet::try_read_from_suffix(bytes).is_err());
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn try_read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), TryReadError<&[u8], Self>>
    where
        Self: Sized,
    {
        let (prefix, candidate) = match CoreMaybeUninit::<Self>::read_from_suffix(source) {
            Ok(candidate) => candidate,
            Err(e) => {
                return Err(TryReadError::Size(e.with_dst()));
            }
        };
        // SAFETY: `candidate` was copied from `source: &[u8]`, so all of
        // its bytes are initialized.
        unsafe { try_read_from(source, candidate).map(|slf| (prefix, slf)) }
    }
}

#[inline(always)]
fn try_ref_from_prefix_suffix<T: TryFromBytes + KnownLayout + Immutable + ?Sized>(
    source: &[u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&T, &[u8]), TryCastError<&[u8], T>> {
    match Ptr::from_ref(source).try_cast_into::<T, BecauseImmutable>(cast_type, meta) {
        Ok((source, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any soundness
            // issues, as we have not generated any invalid state which we need to
            // fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `T: Immutable`, this panic
            // condition will not happen.
            match source.try_into_valid() {
                Ok(valid) => Ok((valid.as_ref(), prefix_suffix.as_ref())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseImmutable>().as_ref()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_ref).into()),
    }
}

#[inline(always)]
fn try_mut_from_prefix_suffix<T: IntoBytes + TryFromBytes + KnownLayout + ?Sized>(
    candidate: &mut [u8],
    cast_type: CastType,
    meta: Option<T::PointerMetadata>,
) -> Result<(&mut T, &mut [u8]), TryCastError<&mut [u8], T>> {
    match Ptr::from_mut(candidate).try_cast_into::<T, BecauseExclusive>(cast_type, meta) {
        Ok((candidate, prefix_suffix)) => {
            // This call may panic. If that happens, it doesn't cause any soundness
            // issues, as we have not generated any invalid state which we need to
            // fix before returning.
            //
            // Note that one panic or post-monomorphization error condition is
            // calling `try_into_valid` (and thus `is_bit_valid`) with a shared
            // pointer when `T: !Immutable`. Since `candidate` is an exclusive
            // pointer, this panic condition will not happen.
            match candidate.try_into_valid() {
                Ok(valid) => Ok((valid.as_mut(), prefix_suffix.as_mut())),
                Err(e) => Err(e.map_src(|src| src.as_bytes::<BecauseExclusive>().as_mut()).into()),
            }
        }
        Err(e) => Err(e.map_src(Ptr::as_mut).into()),
    }
}

#[inline(always)]
fn swap<T, U>((t, u): (T, U)) -> (U, T) {
    (u, t)
}

/// # Safety
///
/// All bytes of `candidate` must be initialized.
#[inline(always)]
unsafe fn try_read_from<S, T: TryFromBytes>(
    source: S,
    mut candidate: CoreMaybeUninit<T>,
) -> Result<T, TryReadError<S, T>> {
    // We use `from_mut` despite not mutating via `c_ptr` so that we don't need
    // to add a `T: Immutable` bound.
    let c_ptr = Ptr::from_mut(&mut candidate);
    // SAFETY: `c_ptr` has no uninitialized sub-ranges because it is derived
    // from `candidate`, which the caller promises is entirely initialized.
    // Since `candidate` is a `MaybeUninit`, it has no validity requirements,
    // and so no values written to an `Initialized` `c_ptr` can violate its
    // validity. Since `c_ptr` has `Exclusive` aliasing, no mutations may happen
    // except via `c_ptr` so long as it is live, so we don't need to worry about
    // the fact that `c_ptr` may have more restricted validity than `candidate`.
    let c_ptr = unsafe { c_ptr.assume_validity::<invariant::Initialized>() };
    let c_ptr = c_ptr.transmute();

    // Since we don't have `T: KnownLayout`, we hack around that by using
    // `Wrapping<T>`, which implements `KnownLayout` even if `T` doesn't.
    //
    // This call may panic. If that happens, it doesn't cause any soundness
    // issues, as we have not generated any invalid state which we need to fix
    // before returning.
    //
    // Note that one panic or post-monomorphization error condition is calling
    // `try_into_valid` (and thus `is_bit_valid`) with a shared pointer when
    // `Self: !Immutable`. Since `Self: Immutable`, this panic condition will
    // not happen.
    if !Wrapping::<T>::is_bit_valid(c_ptr.forget_aligned()) {
        return Err(ValidityError::new(source).into());
    }

    fn _assert_same_size_and_validity<T>()
    where
        Wrapping<T>: pointer::TransmuteFrom<T, invariant::Valid, invariant::Valid>,
        T: pointer::TransmuteFrom<Wrapping<T>, invariant::Valid, invariant::Valid>,
    {
    }

    _assert_same_size_and_validity::<T>();

    // SAFETY: We just validated that `candidate` contains a valid
    // `Wrapping<T>`, which has the same size and bit validity as `T`, as
    // guaranteed by the preceding type assertion.
    Ok(unsafe { candidate.assume_init() })
}
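The validate-then-`assume_init` shape of `try_read_from` can be sketched with only the standard library: initialize a `MaybeUninit`'s storage, check that the bytes form a valid value of the target type, and only then call `assume_init`. The snippet below is a simplified, hypothetical stand-in (no `Ptr` machinery, `bool` instead of a generic `T`), not zerocopy's API:

```rust
use std::mem::MaybeUninit;

/// Attempts to read a `bool` from a raw byte. `bool`'s only valid bit
/// patterns are 0 and 1, so we validate before `assume_init` -- the same
/// pattern `try_read_from` uses via `is_bit_valid`.
fn try_read_bool(byte: u8) -> Option<bool> {
    let mut candidate = MaybeUninit::<bool>::uninit();
    // Initialize the candidate's storage byte-by-byte. No `bool` value
    // exists yet, so no validity requirement applies to the write.
    unsafe { candidate.as_mut_ptr().cast::<u8>().write(byte) };
    // Bit-validity check, playing the role of `is_bit_valid`.
    if byte > 1 {
        return None;
    }
    // SAFETY: the storage is fully initialized and we just checked that it
    // holds a valid `bool` bit pattern.
    Some(unsafe { candidate.assume_init() })
}
```

Skipping the validity check and calling `assume_init` on an arbitrary byte would be undefined behavior, which is exactly what the `is_bit_valid` call above guards against.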
/// Types for which a sequence of `0` bytes is a valid instance.
///
/// Any memory region of the appropriate length which is guaranteed to contain
/// only zero bytes can be viewed as any `FromZeros` type with no runtime
/// overhead. This is useful whenever memory is known to be in a zeroed state,
/// such as memory returned from some allocation routines.
///
/// # Warning: Padding bytes
///
/// Note that, when a value is moved or copied, only the non-padding bytes of
/// that value are guaranteed to be preserved. It is unsound to assume that
/// values written to padding bytes are preserved after a move or copy. For more
/// details, see the [`FromBytes` docs][frombytes-warning-padding-bytes].
///
/// [frombytes-warning-padding-bytes]: FromBytes#warning-padding-bytes
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(FromZeros)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromZeros, Immutable};
/// #[derive(FromZeros)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromZeros, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `FromZeros`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: FromZeros`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `FromZeros` manually, and you don't plan on writing unsafe code that
/// operates on `FromZeros` types, then you don't need to read this section.*
///
/// If `T: FromZeros`, then unsafe code may assume that it is sound to produce a
/// `T` whose bytes are all initialized to zero. If a type is marked as
/// `FromZeros` which violates this contract, it may cause undefined behavior.
///
/// `#[derive(FromZeros)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::FromZeros",
    doc = "[derive-analysis]: zerocopy_derive::FromZeros#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromZeros.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromZeros)]` to `{Self}`")
)]
pub unsafe trait FromZeros: TryFromBytes {
    // The `Self: Sized` bound makes it so that `FromZeros` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Overwrites `self` with zeros.
    ///
    /// Sets every byte in `self` to 0. While this is similar to doing `*self =
    /// Self::new_zeroed()`, it differs in that `zero` does not semantically
    /// drop the current value and replace it with a new one — it simply
    /// modifies the bytes of the existing value.
    ///
    /// # Examples
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let mut header = PacketHeader {
    ///     src_port: 100u16.to_be_bytes(),
    ///     dst_port: 200u16.to_be_bytes(),
    ///     length: 300u16.to_be_bytes(),
    ///     checksum: 400u16.to_be_bytes(),
    /// };
    ///
    /// header.zero();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.dst_port, [0, 0]);
    /// assert_eq!(header.length, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```
    #[inline(always)]
    fn zero(&mut self) {
        let slf: *mut Self = self;
        let len = mem::size_of_val(self);
        // SAFETY:
        // - `self` is guaranteed by the type system to be valid for writes of
        //   size `size_of_val(self)`.
        // - `u8`'s alignment is 1, and thus `self` is guaranteed to be aligned
        //   as required by `u8`.
        // - Since `Self: FromZeros`, the all-zeros instance is a valid instance
        //   of `Self`.
        //
        // FIXME(#429): Add references to docs and quotes.
        unsafe { ptr::write_bytes(slf.cast::<u8>(), 0, len) };
    }
    /// Creates an instance of `Self` from zeroed bytes.
    ///
    /// # Examples
    ///
    /// ```
    /// # use zerocopy::FromZeros;
    /// # use zerocopy_derive::*;
    /// #
    /// #[derive(FromZeros)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header: PacketHeader = FromZeros::new_zeroed();
    ///
    /// assert_eq!(header.src_port, [0, 0]);
    /// assert_eq!(header.dst_port, [0, 0]);
    /// assert_eq!(header.length, [0, 0]);
    /// assert_eq!(header.checksum, [0, 0]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn new_zeroed() -> Self
    where
        Self: Sized,
    {
        // SAFETY: `FromZeros` says that the all-zeros bit pattern is legal.
        unsafe { mem::zeroed() }
    }
    /// Creates a `Box<Self>` from zeroed bytes.
    ///
    /// This function is useful for allocating large values on the heap and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `Self` on the stack. For example, `<[u8; 1048576]>::new_box_zeroed()`
    /// will allocate `[u8; 1048576]` directly on the heap; it does not require
    /// storing `[u8; 1048576]` in a temporary variable on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_box_zeroed` (or related functions) may
    /// have performance benefits.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is guaranteed
    /// never to cause a panic or an abort.
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(any(feature = "alloc", test))]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed() -> Result<Box<Self>, AllocError>
    where
        Self: Sized,
    {
        // If `T` is a ZST, then return a proper boxed instance of it. There is
        // no allocation, but `Box` does require a correct dangling pointer.
        let layout = Layout::new::<Self>();
        if layout.size() == 0 {
            // Construct the `Box` from a dangling pointer to avoid calling
            // `Self::new_zeroed`. This ensures that stack space is never
            // allocated for `Self` even on lower opt-levels where this branch
            // might not get optimized out.

            // SAFETY: Per [1], when `T` is a ZST, `Box<T>`'s only validity
            // requirements are that the pointer is non-null and sufficiently
            // aligned. Per [2], `NonNull::dangling` produces a pointer which
            // is sufficiently aligned. Since the produced pointer is a
            // `NonNull`, it is non-null.
            //
            // [1] Per https://doc.rust-lang.org/nightly/std/boxed/index.html#memory-layout:
            //
            //   For zero-sized values, the `Box` pointer has to be non-null and sufficiently aligned.
            //
            // [2] Per https://doc.rust-lang.org/std/ptr/struct.NonNull.html#method.dangling:
            //
            //   Creates a new `NonNull` that is dangling, but well-aligned.
            return Ok(unsafe { Box::from_raw(NonNull::dangling().as_ptr()) });
        }

        // FIXME(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        let ptr = unsafe { alloc::alloc::alloc_zeroed(layout).cast::<Self>() };
        if ptr.is_null() {
            return Err(AllocError);
        }
        // FIXME(#429): Add a "SAFETY" comment and remove this `allow`.
        #[allow(clippy::undocumented_unsafe_blocks)]
        Ok(unsafe { Box::from_raw(ptr) })
    }
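The non-ZST path of `new_box_zeroed` — allocate zeroed memory, null-check, then adopt the pointer into a `Box` — can be sketched with only the standard library. This is a simplified, hypothetical illustration for a fixed `[u8; N]` (no `AllocError` type, ZSTs simply rejected), not zerocopy's implementation:

```rust
use std::alloc::{alloc_zeroed, Layout};

/// Heap-allocates a zero-filled `[u8; N]` without ever materializing the
/// array on the stack.
fn boxed_zeroed_bytes<const N: usize>() -> Option<Box<[u8; N]>> {
    let layout = Layout::new::<[u8; N]>();
    if layout.size() == 0 {
        // `alloc_zeroed` forbids zero-sized layouts; the real method
        // handles ZSTs separately with a dangling pointer.
        return None;
    }
    // SAFETY: `layout` has non-zero size, as checked above.
    let ptr = unsafe { alloc_zeroed(layout).cast::<[u8; N]>() };
    if ptr.is_null() {
        return None; // allocation failure
    }
    // SAFETY: `ptr` was allocated with `[u8; N]`'s layout and is fully
    // zero-initialized, which is a valid `[u8; N]`.
    Some(unsafe { Box::from_raw(ptr) })
}
```

Because the zeros come straight from the allocator, no `[u8; N]` temporary ever exists on the stack, which is the whole point of the method for multi-megabyte buffers.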
    /// Creates a `Box<[Self]>` (a boxed slice) from zeroed bytes.
    ///
    /// This function is useful for allocating large values of `[Self]` on the
    /// heap and zero-initializing them, without ever creating a temporary
    /// instance of `[Self; _]` on the stack. For example,
    /// `u8::new_box_slice_zeroed(1048576)` will allocate the slice directly on
    /// the heap; it does not require storing the slice on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_box_slice_zeroed` may have performance
    /// benefits.
    ///
    /// If `Self` is a zero-sized type, then this function will return a
    /// `Box<[Self]>` that has the correct `len`. Such a box cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline]
    fn new_box_zeroed_with_elems(count: usize) -> Result<Box<Self>, AllocError>
    where
        Self: KnownLayout<PointerMetadata = usize>,
    {
        // SAFETY: `alloc::alloc::alloc_zeroed` is a valid argument of
        // `new_box`. The referent of the pointer returned by `alloc_zeroed`
        // (and, consequently, the `Box` derived from it) is a valid instance of
        // `Self`, because `Self` is `FromZeros`.
        unsafe { crate::util::new_box(count, alloc::alloc::alloc_zeroed) }
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromZeros::new_box_zeroed_with_elems`")]
    #[doc(hidden)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[must_use = "has no side effects (other than allocation)"]
    #[inline(always)]
    fn new_box_slice_zeroed(len: usize) -> Result<Box<[Self]>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len)
    }

    /// Creates a `Vec<Self>` from zeroed bytes.
    ///
    /// This function is useful for allocating large `Vec`s and
    /// zero-initializing them, without ever creating a temporary instance of
    /// `[Self; _]` (or many temporary instances of `Self`) on the stack. For
    /// example, `u8::new_vec_zeroed(1048576)` will allocate directly on the
    /// heap; it does not require storing intermediate values on the stack.
    ///
    /// On systems that use a heap implementation that supports allocating from
    /// pre-zeroed memory, using `new_vec_zeroed` may have performance benefits.
    ///
    /// If `Self` is a zero-sized type, then this function will return a
    /// `Vec<Self>` that has the correct `len`. Such a `Vec` cannot contain any
    /// actual information, but its `len()` property will report the correct
    /// value.
    ///
    /// # Errors
    ///
    /// Returns an error on allocation failure. Allocation failure is
    /// guaranteed never to cause a panic or an abort.
    #[must_use = "has no side effects (other than allocation)"]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
    #[inline(always)]
    fn new_vec_zeroed(len: usize) -> Result<Vec<Self>, AllocError>
    where
        Self: Sized,
    {
        <[Self]>::new_box_zeroed_with_elems(len).map(Into::into)
    }
    /// Extends a `Vec<Self>` by pushing `additional` new items onto the end of
    /// the vector. The new items are initialized with zeros.
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline(always)]
    fn extend_vec_zeroed(v: &mut Vec<Self>, additional: usize) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        // PANICS: We pass `v.len()` for `position`, so the `position > v.len()`
        // panic condition is not satisfied.
        <Self as FromZeros>::insert_vec_zeroed(v, v.len(), additional)
    }

    /// Inserts `additional` new items into `Vec<Self>` at `position`. The new
    /// items are initialized with zeros.
    ///
    /// # Panics
    ///
    /// Panics if `position > v.len()`.
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
    #[cfg(feature = "alloc")]
    #[cfg_attr(doc_cfg, doc(cfg(all(rust = "1.57.0", feature = "alloc"))))]
    #[inline]
    fn insert_vec_zeroed(
        v: &mut Vec<Self>,
        position: usize,
        additional: usize,
    ) -> Result<(), AllocError>
    where
        Self: Sized,
    {
        assert!(position <= v.len());
        // We only conditionally compile on versions on which `try_reserve` is
        // stable; the Clippy lint is a false positive.
        v.try_reserve(additional).map_err(|_| AllocError)?;
        // SAFETY: The `try_reserve` call guarantees that these cannot overflow:
        // * `ptr.add(position)`
        // * `position + additional`
        // * `v.len() + additional`
        //
        // `v.len() - position` cannot overflow because we asserted that
        // `position <= v.len()`.
        unsafe {
            // This is a potentially overlapping copy.
            let ptr = v.as_mut_ptr();
            #[allow(clippy::arithmetic_side_effects)]
            ptr.add(position).copy_to(ptr.add(position + additional), v.len() - position);
            ptr.add(position).write_bytes(0, additional);
            #[allow(clippy::arithmetic_side_effects)]
            v.set_len(v.len() + additional);
        }

        Ok(())
    }
}
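The shift-then-zero dance inside `insert_vec_zeroed` — reserve room, move the tail right by `additional` slots, and zero the gap — can be reproduced in safe code for a concrete `Vec<u8>`. A simplified, hypothetical sketch (not zerocopy's unsafe, generic implementation, which avoids the extra writes `resize` performs):

```rust
/// Inserts `additional` zeroed bytes into `v` at `position`.
/// Panics if `position > v.len()`, matching `insert_vec_zeroed`.
fn insert_zeroed_u8(v: &mut Vec<u8>, position: usize, additional: usize) {
    assert!(position <= v.len());
    let old_len = v.len();
    // Grow by `additional` zeros at the end...
    v.resize(old_len + additional, 0);
    // ...then shift the old tail into place, leaving a gap at `position`.
    v.copy_within(position..old_len, position + additional);
    // Re-zero the gap: when the ranges overlap, `copy_within` leaves
    // stale copies of the moved bytes behind.
    v[position..position + additional].fill(0);
}
```

The real method instead does one potentially overlapping `copy_to` plus a `write_bytes` on uninitialized spare capacity, then bumps `len` with `set_len`, so the new elements are written exactly once.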
/// Analyzes whether a type is [`FromBytes`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `FromBytes` and implements `FromBytes` and its
/// supertraits if it is sound to do so. This derive can be applied to structs,
/// enums, and unions; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromBytes, FromZeros, Immutable};
/// #[derive(FromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
/// #   VFF,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@FromBytes#safety
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `FromBytes` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `FromBytes` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `FromBytes` for that type:
///
/// - If the type is a struct, all of its fields must be `FromBytes`.
/// - If the type is an enum:
///   - It must have a defined representation which is one of `u8`, `u16`, `i8`,
///     or `i16`.
///   - The maximum number of discriminants must be used (so that every possible
///     bit pattern is a valid one).
///   - Its fields must be `FromBytes`.
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `FromBytes`, and must *not* rely on the
/// implementation details of this derive.
///
/// ## Why isn't an explicit representation required for structs?
///
/// Neither this derive, nor the [safety conditions] of `FromBytes`, requires
/// that structs are marked with `#[repr(C)]`.
///
/// Per the [Rust reference][reference],
///
/// > The representation of a type can change the padding between fields, but
/// > does not change the layout of the fields themselves.
///
/// [reference]: https://doc.rust-lang.org/reference/type-layout.html#representations
///
/// Since the layout of structs only consists of padding bytes and field bytes,
/// a struct is soundly `FromBytes` if:
/// 1. its padding is soundly `FromBytes`, and
/// 2. its fields are soundly `FromBytes`.
///
/// The answer to the first question is always yes: padding bytes do not have
/// any validity constraints. A [discussion] of this question in the Unsafe Code
/// Guidelines Working Group concluded that it would be virtually unimaginable
/// for future versions of rustc to add validity constraints to padding bytes.
///
/// [discussion]: https://github.com/rust-lang/unsafe-code-guidelines/issues/174
///
/// Whether a struct is soundly `FromBytes` therefore solely depends on whether
/// its fields are `FromBytes`.
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::FromBytes;
/// Types for which any bit pattern is valid.
///
/// Any memory region of the appropriate length which contains initialized bytes
/// can be viewed as any `FromBytes` type with no runtime overhead. This is
/// useful for efficiently parsing bytes as structured data.
///
/// # Warning: Padding bytes
///
/// Note that, when a value is moved or copied, only the non-padding bytes of
/// that value are guaranteed to be preserved. It is unsound to assume that
/// values written to padding bytes are preserved after a move or copy. For
/// example, the following is unsound:
///
/// ```rust,no_run
/// use core::mem::{size_of, transmute};
/// use zerocopy::FromZeros;
/// # use zerocopy_derive::*;
///
/// // Assume `Foo` is a type with padding bytes.
/// #[derive(FromZeros, Default)]
/// struct Foo {
/// # /*
///     ...
/// # */
/// }
///
/// let mut foo: Foo = Foo::default();
/// FromZeros::zero(&mut foo);
/// // UNSOUND: Although `FromZeros::zero` writes zeros to all bytes of `foo`,
/// // those writes are not guaranteed to be preserved in padding bytes when
/// // `foo` is moved, so this may expose padding bytes as `u8`s.
/// let foo_bytes: [u8; size_of::<Foo>()] = unsafe { transmute(foo) };
/// ```
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(FromBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::{FromBytes, Immutable};
/// #[derive(FromBytes)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   V00, V01, V02, V03, V04, V05, V06, V07, V08, V09, V0A, V0B, V0C, V0D, V0E,
/// #   V0F, V10, V11, V12, V13, V14, V15, V16, V17, V18, V19, V1A, V1B, V1C, V1D,
/// #   V1E, V1F, V20, V21, V22, V23, V24, V25, V26, V27, V28, V29, V2A, V2B, V2C,
/// #   V2D, V2E, V2F, V30, V31, V32, V33, V34, V35, V36, V37, V38, V39, V3A, V3B,
/// #   V3C, V3D, V3E, V3F, V40, V41, V42, V43, V44, V45, V46, V47, V48, V49, V4A,
/// #   V4B, V4C, V4D, V4E, V4F, V50, V51, V52, V53, V54, V55, V56, V57, V58, V59,
/// #   V5A, V5B, V5C, V5D, V5E, V5F, V60, V61, V62, V63, V64, V65, V66, V67, V68,
/// #   V69, V6A, V6B, V6C, V6D, V6E, V6F, V70, V71, V72, V73, V74, V75, V76, V77,
/// #   V78, V79, V7A, V7B, V7C, V7D, V7E, V7F, V80, V81, V82, V83, V84, V85, V86,
/// #   V87, V88, V89, V8A, V8B, V8C, V8D, V8E, V8F, V90, V91, V92, V93, V94, V95,
/// #   V96, V97, V98, V99, V9A, V9B, V9C, V9D, V9E, V9F, VA0, VA1, VA2, VA3, VA4,
/// #   VA5, VA6, VA7, VA8, VA9, VAA, VAB, VAC, VAD, VAE, VAF, VB0, VB1, VB2, VB3,
/// #   VB4, VB5, VB6, VB7, VB8, VB9, VBA, VBB, VBC, VBD, VBE, VBF, VC0, VC1, VC2,
/// #   VC3, VC4, VC5, VC6, VC7, VC8, VC9, VCA, VCB, VCC, VCD, VCE, VCF, VD0, VD1,
/// #   VD2, VD3, VD4, VD5, VD6, VD7, VD8, VD9, VDA, VDB, VDC, VDD, VDE, VDF, VE0,
/// #   VE1, VE2, VE3, VE4, VE5, VE6, VE7, VE8, VE9, VEA, VEB, VEC, VED, VEE, VEF,
/// #   VF0, VF1, VF2, VF3, VF4, VF5, VF6, VF7, VF8, VF9, VFA, VFB, VFC, VFD, VFE,
/// #   VFF,
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(FromBytes, Immutable)]
/// union MyUnion {
/// #   variant: u8,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `FromBytes`.
///
/// # Safety
///
/// *This section describes what is required in order for `T: FromBytes`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `FromBytes` manually, and you don't plan on writing unsafe code that
/// operates on `FromBytes` types, then you don't need to read this section.*
///
/// If `T: FromBytes`, then unsafe code may assume that it is sound to produce a
/// `T` whose bytes are initialized to any sequence of valid `u8`s (in other
/// words, any byte value which is not uninitialized). If a type is marked as
/// `FromBytes` which violates this contract, it may cause undefined behavior.
///
/// `#[derive(FromBytes)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::FromBytes",
    doc = "[derive-analysis]: zerocopy_derive::FromBytes#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.FromBytes.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(FromBytes)]` to `{Self}`")
)]
pub unsafe trait FromBytes: FromZeros {
    // The `Self: Sized` bound makes it so that `FromBytes` is still object
    // safe.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;
    /// Interprets the given `source` as a `&Self`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self`. If the length of `source` is not a [valid size of
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
    /// [infallibly discard the alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. Attempting to use this method on such types
    /// results in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_bytes(0u16.as_bytes()); // ⚠️ Compile Error!
    /// ```
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     header: PacketHeader,
    ///     body: [u8],
    /// }
    ///
    /// // These bytes encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11][..];
    ///
    /// let packet = Packet::ref_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(packet.header.src_port, [0, 1]);
    /// assert_eq!(packet.header.dst_port, [2, 3]);
    /// assert_eq!(packet.header.length, [4, 5]);
    /// assert_eq!(packet.header.checksum, [6, 7]);
    /// assert_eq!(packet.body, [8, 9, 10, 11]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_bytes(source: &[u8]) -> Result<&Self, CastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_ref(source).try_cast_into_no_leftover::<_, BecauseImmutable>(None) {
            Ok(ptr) => Ok(ptr.recall_validity().as_ref()),
            Err(err) => Err(err.map_src(|src| src.as_ref())),
        }
    }
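The runtime checks behind `ref_from_bytes` — exact size, then alignment, then a pointer cast — can be sketched with only the standard library for a concrete target type. A simplified, hypothetical illustration for `u32` (zerocopy additionally enforces the `FromBytes`/`Immutable` bounds at compile time, supports slice DSTs, and returns a structured `CastError`):

```rust
use std::mem::{align_of, size_of};

/// Reinterprets `source` as a `&u32` if it has exactly the right size and
/// starts at a suitably aligned address; otherwise returns `None`.
fn u32_ref_from_bytes(source: &[u8]) -> Option<&u32> {
    if source.len() != size_of::<u32>() {
        return None; // size error
    }
    if source.as_ptr().align_offset(align_of::<u32>()) != 0 {
        return None; // alignment error
    }
    // SAFETY: the referent is `size_of::<u32>()` initialized bytes at a
    // `u32`-aligned address, and every initialized 4-byte pattern is a
    // valid `u32` (i.e., `u32` is "from bytes" in zerocopy's sense).
    Some(unsafe { &*source.as_ptr().cast::<u32>() })
}
```

The alignment branch is why `Self: Unaligned` types get to discard the alignment error infallibly: for them, `align_of` is 1 and the check can never fail.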
    /// Interprets the prefix of the given `source` as a `&Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_prefix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_prefix_with_elems`]: FromBytes::ref_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// #[derive(FromBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct Packet {
    ///     header: PacketHeader,
    ///     body: [[u8; 2]],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `Packet`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14][..];
    ///
    /// let (packet, suffix) = Packet::ref_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(packet.header.src_port, [0, 1]);
    /// assert_eq!(packet.header.dst_port, [2, 3]);
    /// assert_eq!(packet.header.length, [4, 5]);
    /// assert_eq!(packet.header.checksum, [6, 7]);
    /// assert_eq!(packet.body, [[8, 9], [10, 11], [12, 13]]);
    /// assert_eq!(suffix, &[14u8][..]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix(source: &[u8]) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout + Immutable,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Prefix)
    }
    /// Interprets the suffix of the given `source` as a `&Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`ref_from_suffix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: u16,
    ///     trailing_dst: [()],
    /// }
    ///
    /// let _ = ZSTy::ref_from_suffix(0u16.as_bytes()); // ⚠ Compile Error!
    /// ```
    ///
    /// [`ref_from_suffix_with_elems`]: FromBytes::ref_from_suffix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::ref_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix(source: &[u8]) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: Immutable + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        ref_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }
    /// Interprets the given `source` as a `&mut Self`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self`. If the length of `source` is not a [valid size of
    /// `Self`][valid-size], or if `source` is not appropriately aligned, this
    /// returns `Err`. If [`Self: Unaligned`][self-unaligned], you can
    /// [infallibly discard the alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_bytes_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_bytes(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_bytes_with_elems`]: FromBytes::mut_from_bytes_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These bytes encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let header = PacketHeader::mut_from_bytes(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    ///
    /// header.checksum = [0, 0];
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes(source: &mut [u8]) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        match Ptr::from_mut(source).try_cast_into_no_leftover::<_, BecauseExclusive>(None) {
            Ok(ptr) => Ok(ptr.recall_validity::<_, (_, (_, _))>().as_mut()),
            Err(err) => Err(err.map_src(|src| src.as_mut())),
        }
    }
    /// Interprets the prefix of the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the leading bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_prefix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_prefix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_prefix_with_elems`]: FromBytes::mut_from_prefix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketHeader`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (header, body) = PacketHeader::mut_from_prefix(bytes).unwrap();
    ///
    /// assert_eq!(header.src_port, [0, 1]);
    /// assert_eq!(header.dst_port, [2, 3]);
    /// assert_eq!(header.length, [4, 5]);
    /// assert_eq!(header.checksum, [6, 7]);
    /// assert_eq!(body, &[8, 9][..]);
    ///
    /// header.checksum = [0, 0];
    /// body.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 0, 0, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix(
        source: &mut [u8],
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Prefix)
    }
    /// Interprets the suffix of the given `source` as a `&mut Self` without
    /// copying.
    ///
    /// This method computes the [largest possible size of `Self`][valid-size]
    /// that can fit in the trailing bytes of `source`, then attempts to return
    /// both a reference to those bytes interpreted as a `Self`, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// `Self` may be a sized type, a slice, or a [slice DST][slice-dst].
    ///
    /// [valid-size]: crate::KnownLayout#what-is-a-valid-size
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    /// [slice-dst]: KnownLayout#dynamically-sized-types
    ///
    /// # Compile-Time Assertions
    ///
    /// This method cannot yet be used on unsized types whose dynamically-sized
    /// component is zero-sized. See [`mut_from_suffix_with_elems`], which does
    /// support such types. Attempting to use this method on such types results
    /// in a compile-time assertion error; e.g.:
    ///
    /// ```compile_fail,E0080
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, IntoBytes, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let mut source = [85, 85];
    /// let _ = ZSTy::mut_from_suffix(&mut source[..]); // ⚠ Compile Error!
    /// ```
    ///
    /// [`mut_from_suffix_with_elems`]: FromBytes::mut_from_suffix_with_elems
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, KnownLayout, Immutable)]
    /// #[repr(C)]
    /// struct PacketTrailer {
    ///     frame_check_sequence: [u8; 4],
    /// }
    ///
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, trailer) = PacketTrailer::mut_from_suffix(bytes).unwrap();
    ///
    /// assert_eq!(prefix, &[0u8, 1, 2, 3, 4, 5][..]);
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
    ///
    /// prefix.fill(0);
    /// trailer.frame_check_sequence.fill(1);
    ///
    /// assert_eq!(bytes, [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_suffix(
        source: &mut [u8],
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout,
    {
        static_assert_dst_is_not_zst!(Self);
        mut_from_prefix_suffix(source, None, CastType::Suffix).map(swap)
    }
    /// Interprets the given `source` as a `&Self` with a DST length equal to
    /// `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::ref_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let zsty = ZSTy::ref_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_bytes`]: FromBytes::ref_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_bytes_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<&Self, CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_ref(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf.recall_validity().as_ref()),
            Err(err) => Err(err.map_src(|s| s.as_ref())),
        }
    }
    /// Interprets the prefix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::ref_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (zsty, _) = ZSTy::ref_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_prefix`]: FromBytes::ref_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_prefix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&Self, &[u8]), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Prefix)
    }
    /// Interprets the suffix of the given `source` as a DST `&Self` with length
    /// equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, pixels) = <[Pixel]>::ref_from_suffix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1]);
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
    /// ]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`ref_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, Immutable, KnownLayout)]
    /// #[repr(C)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &[85, 85][..];
    /// let (_, zsty) = ZSTy::ref_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`ref_from_suffix`]: FromBytes::ref_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn ref_from_suffix_with_elems(
        source: &[u8],
        count: usize,
    ) -> Result<(&[u8], &Self), CastError<&[u8], Self>>
    where
        Self: KnownLayout<PointerMetadata = usize> + Immutable,
    {
        ref_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
    }
    /// Interprets the given `source` as a `&mut Self` with a DST length equal
    /// to `count`.
    ///
    /// This method attempts to return a reference to `source` interpreted as a
    /// `Self` with `count` trailing elements. If the length of `source` is not
    /// equal to the size of `Self` with `count` elements, or if `source` is not
    /// appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7][..];
    ///
    /// let pixels = <[Pixel]>::mut_from_bytes_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_bytes`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let zsty = ZSTy::mut_from_bytes_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_bytes`]: FromBytes::mut_from_bytes
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_bytes_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<&mut Self, CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize> + Immutable,
    {
        let source = Ptr::from_mut(source);
        let maybe_slf = source.try_cast_into_no_leftover::<_, BecauseImmutable>(Some(count));
        match maybe_slf {
            Ok(slf) => Ok(slf
                .recall_validity::<_, (_, (_, (BecauseExclusive, BecauseExclusive)))>()
                .as_mut()),
            Err(err) => Err(err.map_src(|s| s.as_mut())),
        }
    }
    /// Interprets the prefix of the given `source` as a `&mut Self` with DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to the prefix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the remaining bytes. If there are insufficient bytes, or if `source`
    /// is not appropriately aligned, this returns `Err`. If [`Self:
    /// Unaligned`][self-unaligned], you can [infallibly discard the alignment
    /// error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(KnownLayout, FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (pixels, suffix) = <[Pixel]>::mut_from_prefix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 0, g: 1, b: 2, a: 3 },
    ///     Pixel { r: 4, g: 5, b: 6, a: 7 },
    /// ]);
    ///
    /// assert_eq!(suffix, &[8, 9]);
    ///
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    /// suffix.fill(1);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 0, 0, 0, 0, 1, 1]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_prefix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let (zsty, _) = ZSTy::mut_from_prefix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_prefix`]: FromBytes::mut_from_prefix
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_prefix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut Self, &mut [u8]), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
    {
        mut_from_prefix_suffix(source, Some(count), CastType::Prefix)
    }
    /// Interprets the suffix of the given `source` as a `&mut Self` with DST
    /// length equal to `count`.
    ///
    /// This method attempts to return a reference to the suffix of `source`
    /// interpreted as a `Self` with `count` trailing elements, and a reference
    /// to the preceding bytes. If there are insufficient bytes, or if that
    /// suffix of `source` is not appropriately aligned, this returns `Err`. If
    /// [`Self: Unaligned`][self-unaligned], you can [infallibly discard the
    /// alignment error][size-error-from].
    ///
    /// [self-unaligned]: Unaligned
    /// [size-error-from]: error/struct.SizeError.html#method.from-1
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::FromBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Debug, PartialEq, Eq)]
    /// #[derive(FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct Pixel {
    ///     r: u8,
    ///     g: u8,
    ///     b: u8,
    ///     a: u8,
    /// }
    ///
    /// // These are more bytes than are needed to encode two `Pixel`s.
    /// let bytes = &mut [0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
    ///
    /// let (prefix, pixels) = <[Pixel]>::mut_from_suffix_with_elems(bytes, 2).unwrap();
    ///
    /// assert_eq!(prefix, &[0, 1]);
    ///
    /// assert_eq!(pixels, &[
    ///     Pixel { r: 2, g: 3, b: 4, a: 5 },
    ///     Pixel { r: 6, g: 7, b: 8, a: 9 },
    /// ]);
    ///
    /// prefix.fill(9);
    /// pixels[1] = Pixel { r: 0, g: 0, b: 0, a: 0 };
    ///
    /// assert_eq!(bytes, [9, 9, 2, 3, 4, 5, 0, 0, 0, 0]);
    /// ```
    ///
    /// Since an explicit `count` is provided, this method supports types with
    /// zero-sized trailing slice elements. Methods such as [`mut_from_suffix`]
    /// which do not take an explicit count do not support such types.
    ///
    /// ```
    /// use zerocopy::*;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
    /// #[repr(C, packed)]
    /// struct ZSTy {
    ///     leading_sized: [u8; 2],
    ///     trailing_dst: [()],
    /// }
    ///
    /// let src = &mut [85, 85][..];
    /// let (_, zsty) = ZSTy::mut_from_suffix_with_elems(src, 42).unwrap();
    /// assert_eq!(zsty.trailing_dst.len(), 42);
    /// ```
    ///
    /// [`mut_from_suffix`]: FromBytes::mut_from_suffix
    #[must_use = "has no side effects"]
    #[inline]
    fn mut_from_suffix_with_elems(
        source: &mut [u8],
        count: usize,
    ) -> Result<(&mut [u8], &mut Self), CastError<&mut [u8], Self>>
    where
        Self: IntoBytes + KnownLayout<PointerMetadata = usize>,
    {
        mut_from_prefix_suffix(source, Some(count), CastType::Suffix).map(swap)
    }
    /// Reads a copy of `Self` from the given `source`.
4514
    ///
4515
    /// If `source.len() != size_of::<Self>()`, `read_from_bytes` returns `Err`.
4516
    ///
4517
    /// # Examples
4518
    ///
4519
    /// ```
4520
    /// use zerocopy::FromBytes;
4521
    /// # use zerocopy_derive::*;
4522
    ///
4523
    /// #[derive(FromBytes)]
4524
    /// #[repr(C)]
4525
    /// struct PacketHeader {
4526
    ///     src_port: [u8; 2],
4527
    ///     dst_port: [u8; 2],
4528
    ///     length: [u8; 2],
4529
    ///     checksum: [u8; 2],
4530
    /// }
4531
    ///
4532
    /// // These bytes encode a `PacketHeader`.
4533
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7][..];
4534
    ///
4535
    /// let header = PacketHeader::read_from_bytes(bytes).unwrap();
4536
    ///
4537
    /// assert_eq!(header.src_port, [0, 1]);
4538
    /// assert_eq!(header.dst_port, [2, 3]);
4539
    /// assert_eq!(header.length, [4, 5]);
4540
    /// assert_eq!(header.checksum, [6, 7]);
4541
    /// ```
4542
    #[must_use = "has no side effects"]
4543
    #[inline]
4544
0
    fn read_from_bytes(source: &[u8]) -> Result<Self, SizeError<&[u8], Self>>
4545
0
    where
4546
0
        Self: Sized,
4547
    {
4548
0
        match Ref::<_, Unalign<Self>>::sized_from(source) {
4549
0
            Ok(r) => Ok(Ref::read(&r).into_inner()),
4550
0
            Err(CastError::Size(e)) => Err(e.with_dst()),
4551
0
            Err(CastError::Alignment(_)) => {
4552
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4553
                // `Ref::sized_from` cannot fail due to unmet alignment
4554
                // requirements.
4555
0
                unsafe { core::hint::unreachable_unchecked() }
4556
            }
4557
            Err(CastError::Validity(i)) => match i {},
4558
        }
4559
0
    }
4560
4561
    /// Reads a copy of `Self` from the prefix of the given `source`.
4562
    ///
4563
    /// This attempts to read a `Self` from the first `size_of::<Self>()` bytes
4564
    /// of `source`, returning that `Self` and any remaining bytes. If
4565
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4566
    ///
4567
    /// # Examples
4568
    ///
4569
    /// ```
4570
    /// use zerocopy::FromBytes;
4571
    /// # use zerocopy_derive::*;
4572
    ///
4573
    /// #[derive(FromBytes)]
4574
    /// #[repr(C)]
4575
    /// struct PacketHeader {
4576
    ///     src_port: [u8; 2],
4577
    ///     dst_port: [u8; 2],
4578
    ///     length: [u8; 2],
4579
    ///     checksum: [u8; 2],
4580
    /// }
4581
    ///
4582
    /// // These are more bytes than are needed to encode a `PacketHeader`.
4583
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4584
    ///
4585
    /// let (header, body) = PacketHeader::read_from_prefix(bytes).unwrap();
4586
    ///
4587
    /// assert_eq!(header.src_port, [0, 1]);
4588
    /// assert_eq!(header.dst_port, [2, 3]);
4589
    /// assert_eq!(header.length, [4, 5]);
4590
    /// assert_eq!(header.checksum, [6, 7]);
4591
    /// assert_eq!(body, [8, 9]);
4592
    /// ```
4593
    #[must_use = "has no side effects"]
4594
    #[inline]
4595
0
    fn read_from_prefix(source: &[u8]) -> Result<(Self, &[u8]), SizeError<&[u8], Self>>
4596
0
    where
4597
0
        Self: Sized,
4598
    {
4599
0
        match Ref::<_, Unalign<Self>>::sized_from_prefix(source) {
4600
0
            Ok((r, suffix)) => Ok((Ref::read(&r).into_inner(), suffix)),
4601
0
            Err(CastError::Size(e)) => Err(e.with_dst()),
4602
0
            Err(CastError::Alignment(_)) => {
4603
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4604
                // `Ref::sized_from_prefix` cannot fail due to unmet alignment
4605
                // requirements.
4606
0
                unsafe { core::hint::unreachable_unchecked() }
4607
            }
4608
            Err(CastError::Validity(i)) => match i {},
4609
        }
4610
0
    }
4611
4612
    /// Reads a copy of `Self` from the suffix of the given `source`.
4613
    ///
4614
    /// This attempts to read a `Self` from the last `size_of::<Self>()` bytes
4615
    /// of `source`, returning that `Self` and any preceding bytes. If
4616
    /// `source.len() < size_of::<Self>()`, it returns `Err`.
4617
    ///
4618
    /// # Examples
4619
    ///
4620
    /// ```
4621
    /// use zerocopy::FromBytes;
4622
    /// # use zerocopy_derive::*;
4623
    ///
4624
    /// #[derive(FromBytes)]
4625
    /// #[repr(C)]
4626
    /// struct PacketTrailer {
4627
    ///     frame_check_sequence: [u8; 4],
4628
    /// }
4629
    ///
4630
    /// // These are more bytes than are needed to encode a `PacketTrailer`.
4631
    /// let bytes = &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9][..];
4632
    ///
4633
    /// let (prefix, trailer) = PacketTrailer::read_from_suffix(bytes).unwrap();
4634
    ///
4635
    /// assert_eq!(prefix, [0, 1, 2, 3, 4, 5]);
4636
    /// assert_eq!(trailer.frame_check_sequence, [6, 7, 8, 9]);
4637
    /// ```
4638
    #[must_use = "has no side effects"]
4639
    #[inline]
4640
0
    fn read_from_suffix(source: &[u8]) -> Result<(&[u8], Self), SizeError<&[u8], Self>>
4641
0
    where
4642
0
        Self: Sized,
4643
    {
4644
0
        match Ref::<_, Unalign<Self>>::sized_from_suffix(source) {
4645
0
            Ok((prefix, r)) => Ok((prefix, Ref::read(&r).into_inner())),
4646
0
            Err(CastError::Size(e)) => Err(e.with_dst()),
4647
0
            Err(CastError::Alignment(_)) => {
4648
                // SAFETY: `Unalign<Self>` is trivially aligned, so
4649
                // `Ref::sized_from_suffix` cannot fail due to unmet alignment
4650
                // requirements.
4651
0
                unsafe { core::hint::unreachable_unchecked() }
4652
            }
4653
            Err(CastError::Validity(i)) => match i {},
4654
        }
4655
0
    }
4656
4657
    /// Reads a copy of `Self` from an `io::Read`.
    ///
    /// This is useful for interfacing with operating system byte sources
    /// (files, sockets, etc.).
    ///
    /// # Examples
    ///
    /// ```no_run
    /// use zerocopy::{byteorder::big_endian::*, FromBytes};
    /// use std::fs::File;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(FromBytes)]
    /// #[repr(C)]
    /// struct BitmapFileHeader {
    ///     signature: [u8; 2],
    ///     size: U32,
    ///     reserved: U64,
    ///     offset: U64,
    /// }
    ///
    /// let mut file = File::open("image.bin").unwrap();
    /// let header = BitmapFileHeader::read_from_io(&mut file).unwrap();
    /// ```
    #[cfg(feature = "std")]
    #[inline(always)]
    fn read_from_io<R>(mut src: R) -> io::Result<Self>
    where
        Self: Sized,
        R: io::Read,
    {
        // NOTE(#2319, #2320): We do `buf.zero()` separately rather than
        // constructing `let buf = CoreMaybeUninit::zeroed()` because, if `Self`
        // contains padding bytes, then a typed copy of `CoreMaybeUninit<Self>`
        // will not necessarily preserve zeros written to those padding byte
        // locations, and so `buf` could contain uninitialized bytes.
        let mut buf = CoreMaybeUninit::<Self>::uninit();
        buf.zero();

        let ptr = Ptr::from_mut(&mut buf);
        // SAFETY: After `buf.zero()`, `buf` consists entirely of initialized,
        // zeroed bytes. Since `MaybeUninit` has no validity requirements, `ptr`
        // cannot be used to write values which will violate `buf`'s bit
        // validity. Since `ptr` has `Exclusive` aliasing, nothing other than
        // `ptr` may be used to mutate `ptr`'s referent, and so its bit validity
        // cannot be violated even though `buf` may have more permissive bit
        // validity than `ptr`.
        let ptr = unsafe { ptr.assume_validity::<invariant::Initialized>() };
        let ptr = ptr.as_bytes::<BecauseExclusive>();
        src.read_exact(ptr.as_mut())?;
        // SAFETY: `buf` entirely consists of initialized bytes, and `Self` is
        // `FromBytes`.
        Ok(unsafe { buf.assume_init() })
    }
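The zeroed-buffer setup above exists because `Self` may contain padding bytes; for a single concrete, padding-free type, the same read-from-`io::Read` pattern can be written in plain std with a stack byte array, which is always fully initialized. A minimal sketch (the `Header` and `read_header` names are hypothetical, not zerocopy API):

```rust
use std::io::Read;

// Hand-rolled analogue of `read_from_io` for one fixed-size type.
#[derive(Debug, PartialEq)]
struct Header {
    len: u32,
}

fn read_header<R: Read>(mut src: R) -> std::io::Result<Header> {
    // A plain byte array is always fully initialized, so no `MaybeUninit`
    // zeroing dance is needed; the cost is an explicit field-by-field decode.
    let mut buf = [0u8; 4];
    src.read_exact(&mut buf)?;
    Ok(Header { len: u32::from_le_bytes(buf) })
}

fn main() {
    // `&[u8]` implements `io::Read`, which makes this easy to exercise in memory.
    let bytes: &[u8] = &[7, 0, 0, 0];
    let header = read_header(bytes).unwrap();
    assert_eq!(header, Header { len: 7 });
}
```

The generic `read_from_io` avoids this per-type boilerplate, at the price of the careful `MaybeUninit` handling documented in its body.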

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_bytes`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn ref_from(source: &[u8]) -> Option<&Self>
    where
        Self: KnownLayout + Immutable,
    {
        Self::ref_from_bytes(source).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_bytes`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn mut_from(source: &mut [u8]) -> Option<&mut Self>
    where
        Self: KnownLayout + IntoBytes,
    {
        Self::mut_from_bytes(source).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_prefix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn slice_from_prefix(source: &[u8], count: usize) -> Option<(&[Self], &[u8])>
    where
        Self: Sized + Immutable,
    {
        <[Self]>::ref_from_prefix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::ref_from_suffix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn slice_from_suffix(source: &[u8], count: usize) -> Option<(&[u8], &[Self])>
    where
        Self: Sized + Immutable,
    {
        <[Self]>::ref_from_suffix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_prefix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn mut_slice_from_prefix(source: &mut [u8], count: usize) -> Option<(&mut [Self], &mut [u8])>
    where
        Self: Sized + IntoBytes,
    {
        <[Self]>::mut_from_prefix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::mut_from_suffix_with_elems`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn mut_slice_from_suffix(source: &mut [u8], count: usize) -> Option<(&mut [u8], &mut [Self])>
    where
        Self: Sized + IntoBytes,
    {
        <[Self]>::mut_from_suffix_with_elems(source, count).ok()
    }

    #[deprecated(since = "0.8.0", note = "renamed to `FromBytes::read_from_bytes`")]
    #[doc(hidden)]
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn read_from(source: &[u8]) -> Option<Self>
    where
        Self: Sized,
    {
        Self::read_from_bytes(source).ok()
    }
}

/// Interprets the given affix of the given bytes as a `&Self`.
///
/// This function computes the largest possible size of `Self` that can fit in
/// the prefix or suffix bytes of `source`, then attempts to return both a
/// reference to those bytes interpreted as a `Self`, and a reference to the
/// excess bytes. If there are insufficient bytes, or if that affix of `source`
/// is not appropriately aligned, this returns `Err`.
#[inline(always)]
fn ref_from_prefix_suffix<T: FromBytes + KnownLayout + Immutable + ?Sized>(
    source: &[u8],
    meta: Option<T::PointerMetadata>,
    cast_type: CastType,
) -> Result<(&T, &[u8]), CastError<&[u8], T>> {
    let (slf, prefix_suffix) = Ptr::from_ref(source)
        .try_cast_into::<_, BecauseImmutable>(cast_type, meta)
        .map_err(|err| err.map_src(|s| s.as_ref()))?;
    Ok((slf.recall_validity().as_ref(), prefix_suffix.as_ref()))
}

/// Interprets the given affix of the given bytes as a `&mut Self` without
/// copying.
///
/// This function computes the largest possible size of `Self` that can fit in
/// the prefix or suffix bytes of `source`, then attempts to return both a
/// reference to those bytes interpreted as a `Self`, and a reference to the
/// excess bytes. If there are insufficient bytes, or if that affix of `source`
/// is not appropriately aligned, this returns `Err`.
#[inline(always)]
fn mut_from_prefix_suffix<T: FromBytes + IntoBytes + KnownLayout + ?Sized>(
    source: &mut [u8],
    meta: Option<T::PointerMetadata>,
    cast_type: CastType,
) -> Result<(&mut T, &mut [u8]), CastError<&mut [u8], T>> {
    let (slf, prefix_suffix) = Ptr::from_mut(source)
        .try_cast_into::<_, BecauseExclusive>(cast_type, meta)
        .map_err(|err| err.map_src(|s| s.as_mut()))?;
    Ok((slf.recall_validity::<_, (_, (_, _))>().as_mut(), prefix_suffix.as_mut()))
}

/// Analyzes whether a type is [`IntoBytes`].
///
/// This derive analyzes, at compile time, whether the annotated type satisfies
/// the [safety conditions] of `IntoBytes` and implements `IntoBytes` if it is
/// sound to do so. This derive can be applied to structs and enums (see below
/// for union support); e.g.:
///
/// ```
/// # use zerocopy_derive::{IntoBytes};
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(IntoBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// [safety conditions]: trait@IntoBytes#safety
///
/// # Error Messages
///
/// On Rust toolchains prior to 1.78.0, due to the way that the custom derive
/// for `IntoBytes` is implemented, you may get an error like this:
///
/// ```text
/// error[E0277]: the trait bound `(): PaddingFree<Foo, true>` is not satisfied
///   --> lib.rs:23:10
///    |
///  1 | #[derive(IntoBytes)]
///    |          ^^^^^^^^^ the trait `PaddingFree<Foo, true>` is not implemented for `()`
///    |
///    = help: the following implementations were found:
///                   <() as PaddingFree<T, false>>
/// ```
///
/// This error indicates that the type being annotated has padding bytes, which
/// is illegal for `IntoBytes` types. Consider reducing the alignment of some
/// fields by using types in the [`byteorder`] module, wrapping field types in
/// [`Unalign`], adding explicit struct fields where those padding bytes would
/// be, or using `#[repr(packed)]`. See the Rust Reference's page on [type
/// layout] for more information about type layout and padding.
///
/// [type layout]: https://doc.rust-lang.org/reference/type-layout.html
///
/// # Unions
///
/// Currently, union bit validity is [up in the air][union-validity], and so
/// zerocopy does not support `#[derive(IntoBytes)]` on unions by default.
/// However, implementing `IntoBytes` on a union type is likely sound on all
/// existing Rust toolchains - it's just that it may become unsound in the
/// future. You can opt-in to `#[derive(IntoBytes)]` support on unions by
/// passing the unstable `zerocopy_derive_union_into_bytes` cfg:
///
/// ```shell
/// $ RUSTFLAGS='--cfg zerocopy_derive_union_into_bytes' cargo build
/// ```
///
/// However, it is your responsibility to ensure that this derive is sound on
/// the specific versions of the Rust toolchain you are using! We make no
/// stability or soundness guarantees regarding this cfg, and may remove it at
/// any point.
///
/// We are actively working with Rust to stabilize the necessary language
/// guarantees to support this in a forwards-compatible way, which will enable
/// us to remove the cfg gate. As part of this effort, we need to know how much
/// demand there is for this feature. If you would like to use `IntoBytes` on
/// unions, [please let us know][discussion].
///
/// [union-validity]: https://github.com/rust-lang/unsafe-code-guidelines/issues/438
/// [discussion]: https://github.com/google/zerocopy/discussions/1802
///
/// # Analysis
///
/// *This section describes, roughly, the analysis performed by this derive to
/// determine whether it is sound to implement `IntoBytes` for a given type.
/// Unless you are modifying the implementation of this derive, or attempting to
/// manually implement `IntoBytes` for a type yourself, you don't need to read
/// this section.*
///
/// If a type has the following properties, then this derive can implement
/// `IntoBytes` for that type:
///
/// - If the type is a struct, its fields must be [`IntoBytes`]. Additionally:
///     - if the type is `repr(transparent)` or `repr(packed)`, it is
///       [`IntoBytes`] if its fields are [`IntoBytes`]; else,
///     - if the type is `repr(C)` with at most one field, it is [`IntoBytes`]
///       if its field is [`IntoBytes`]; else,
///     - if the type has no generic parameters, it is [`IntoBytes`] if the type
///       is sized and has no padding bytes; else,
///     - if the type is `repr(C)`, its fields must be [`Unaligned`].
/// - If the type is an enum:
///   - It must have a defined representation (`repr`s `C`, `u8`, `u16`, `u32`,
///     `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, or `isize`).
///   - It must have no padding bytes.
///   - Its fields must be [`IntoBytes`].
///
/// This analysis is subject to change. Unsafe code may *only* rely on the
/// documented [safety conditions] of `IntoBytes`, and must *not* rely on the
/// implementation details of this derive.
///
/// [Rust Reference]: https://doc.rust-lang.org/reference/type-layout.html
#[cfg(any(feature = "derive", test))]
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
pub use zerocopy_derive::IntoBytes;

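To see where the padding that the derive rejects comes from, the layouts can be checked with `size_of` alone. A std-only sketch (the `Padded` and `Explicit` names are illustrative):

```rust
use std::mem::size_of;

// A `repr(C)` struct with an interior padding byte: one byte of padding is
// inserted after `a` so that `b` is 2-aligned. `#[derive(IntoBytes)]` would
// reject this layout, since the padding byte is never initialized.
#[repr(C)]
struct Padded {
    a: u8,
    b: u16,
}

// One fix suggested above: make the would-be padding an explicit field.
#[repr(C)]
struct Explicit {
    a: u8,
    _pad: u8,
    b: u16,
}

fn main() {
    // Both types occupy 4 bytes, but only `Explicit` accounts for all of them
    // with real (always-initialized) fields: 1 + 1 + 2 == 4, whereas
    // `Padded`'s fields sum to only 3 bytes.
    assert_eq!(size_of::<Padded>(), 4);
    assert_eq!(size_of::<Explicit>(), 4);
}
```

Reordering fields or using `#[repr(C, packed)]` are the other layout-level fixes mentioned in the derive documentation above.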
/// Types that can be converted to an immutable slice of initialized bytes.
///
/// Any `IntoBytes` type can be converted to a slice of initialized bytes of the
/// same size. This is useful for efficiently serializing structured data as raw
/// bytes.
///
/// # Implementation
///
/// **Do not implement this trait yourself!** Instead, use
/// [`#[derive(IntoBytes)]`][derive]; e.g.:
///
/// ```
/// # use zerocopy_derive::IntoBytes;
/// #[derive(IntoBytes)]
/// #[repr(C)]
/// struct MyStruct {
/// # /*
///     ...
/// # */
/// }
///
/// #[derive(IntoBytes)]
/// #[repr(u8)]
/// enum MyEnum {
/// #   Variant0,
/// # /*
///     ...
/// # */
/// }
/// ```
///
/// This derive performs a sophisticated, compile-time safety analysis to
/// determine whether a type is `IntoBytes`. See the [derive
/// documentation][derive] for guidance on how to interpret error messages
/// produced by the derive's analysis.
///
/// # Safety
///
/// *This section describes what is required in order for `T: IntoBytes`, and
/// what unsafe code may assume of such types. If you don't plan on implementing
/// `IntoBytes` manually, and you don't plan on writing unsafe code that
/// operates on `IntoBytes` types, then you don't need to read this section.*
///
/// If `T: IntoBytes`, then unsafe code may assume that it is sound to treat any
/// `t: T` as an immutable `[u8]` of length `size_of_val(t)`. If a type is
/// marked as `IntoBytes` which violates this contract, it may cause undefined
/// behavior.
///
/// `#[derive(IntoBytes)]` only permits [types which satisfy these
/// requirements][derive-analysis].
///
#[cfg_attr(
    feature = "derive",
    doc = "[derive]: zerocopy_derive::IntoBytes",
    doc = "[derive-analysis]: zerocopy_derive::IntoBytes#analysis"
)]
#[cfg_attr(
    not(feature = "derive"),
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html"),
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.IntoBytes.html#analysis"),
)]
#[cfg_attr(
    zerocopy_diagnostic_on_unimplemented_1_78_0,
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(IntoBytes)]` to `{Self}`")
)]
pub unsafe trait IntoBytes {
    // The `Self: Sized` bound makes it so that this function doesn't prevent
    // `IntoBytes` from being object safe. Note that other `IntoBytes` methods
    // prevent object safety, but those provide a benefit in exchange for object
    // safety. If at some point we remove those methods, change their type
    // signatures, or move them out of this trait so that `IntoBytes` is object
    // safe again, it's important that this function not prevent object safety.
    #[doc(hidden)]
    fn only_derive_is_allowed_to_implement_this_trait()
    where
        Self: Sized;

    /// Gets the bytes of this value.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::IntoBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header = PacketHeader {
    ///     src_port: [0, 1],
    ///     dst_port: [2, 3],
    ///     length: [4, 5],
    ///     checksum: [6, 7],
    /// };
    ///
    /// let bytes = header.as_bytes();
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
    /// ```
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn as_bytes(&self) -> &[u8]
    where
        Self: Immutable,
    {
        // Note that this method does not have a `Self: Sized` bound;
        // `size_of_val` works for unsized values too.
        let len = mem::size_of_val(self);
        let slf: *const Self = self;

        // SAFETY:
        // - `slf.cast::<u8>()` is valid for reads for `len * size_of::<u8>()`
        //   many bytes because...
        //   - `slf` is the same pointer as `self`, and `self` is a reference
        //     which points to an object whose size is `len`. Thus...
        //     - The entire region of `len` bytes starting at `slf` is contained
        //       within a single allocation.
        //     - `slf` is non-null.
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
        //   initialized.
        // - Since `slf` is derived from `self`, and `self` is an immutable
        //   reference, the only other references to this memory region that
        //   could exist are other immutable references, and those don't allow
        //   mutation. `Self: Immutable` prohibits types which contain
        //   `UnsafeCell`s, which are the only types for which this rule
        //   wouldn't be sufficient.
        // - The total size of the resulting slice is no larger than
        //   `isize::MAX` because no allocation produced by safe code can be
        //   larger than `isize::MAX`.
        //
        // FIXME(#429): Add references to docs and quotes.
        unsafe { slice::from_raw_parts(slf.cast::<u8>(), len) }
    }

    /// Gets the bytes of this value mutably.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::IntoBytes;
    /// # use zerocopy_derive::*;
    ///
    /// # #[derive(Eq, PartialEq, Debug)]
    /// #[derive(FromBytes, IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let mut header = PacketHeader {
    ///     src_port: [0, 1],
    ///     dst_port: [2, 3],
    ///     length: [4, 5],
    ///     checksum: [6, 7],
    /// };
    ///
    /// let bytes = header.as_mut_bytes();
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
    ///
    /// bytes.reverse();
    ///
    /// assert_eq!(header, PacketHeader {
    ///     src_port: [7, 6],
    ///     dst_port: [5, 4],
    ///     length: [3, 2],
    ///     checksum: [1, 0],
    /// });
    /// ```
    #[must_use = "has no side effects"]
    #[inline(always)]
    fn as_mut_bytes(&mut self) -> &mut [u8]
    where
        Self: FromBytes,
    {
        // Note that this method does not have a `Self: Sized` bound;
        // `size_of_val` works for unsized values too.
        let len = mem::size_of_val(self);
        let slf: *mut Self = self;

        // SAFETY:
        // - `slf.cast::<u8>()` is valid for reads and writes for `len *
        //   size_of::<u8>()` many bytes because...
        //   - `slf` is the same pointer as `self`, and `self` is a reference
        //     which points to an object whose size is `len`. Thus...
        //     - The entire region of `len` bytes starting at `slf` is contained
        //       within a single allocation.
        //     - `slf` is non-null.
        //   - `slf` is trivially aligned to `align_of::<u8>() == 1`.
        // - `Self: IntoBytes` ensures that all of the bytes of `slf` are
        //   initialized.
        // - `Self: FromBytes` ensures that no write to this memory region
        //   could result in it containing an invalid `Self`.
        // - Since `slf` is derived from `self`, and `self` is a mutable
        //   reference, no other references to this memory region can exist.
        // - The total size of the resulting slice is no larger than
        //   `isize::MAX` because no allocation produced by safe code can be
        //   larger than `isize::MAX`.
        //
        // FIXME(#429): Add references to docs and quotes.
        unsafe { slice::from_raw_parts_mut(slf.cast::<u8>(), len) }
    }

    /// Writes a copy of `self` to `dst`.
    ///
    /// If `dst.len() != size_of_val(self)`, `write_to` returns `Err`.
    ///
    /// # Examples
    ///
    /// ```
    /// use zerocopy::IntoBytes;
    /// # use zerocopy_derive::*;
    ///
    /// #[derive(IntoBytes, Immutable)]
    /// #[repr(C)]
    /// struct PacketHeader {
    ///     src_port: [u8; 2],
    ///     dst_port: [u8; 2],
    ///     length: [u8; 2],
    ///     checksum: [u8; 2],
    /// }
    ///
    /// let header = PacketHeader {
    ///     src_port: [0, 1],
    ///     dst_port: [2, 3],
    ///     length: [4, 5],
    ///     checksum: [6, 7],
    /// };
    ///
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0];
    ///
    /// header.write_to(&mut bytes[..]);
    ///
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7]);
    /// ```
    ///
    /// If too many or too few target bytes are provided, `write_to` returns
    /// `Err` and leaves the target bytes unmodified:
    ///
    /// ```
    /// # use zerocopy::IntoBytes;
    /// # let header = u128::MAX;
    /// let mut excessive_bytes = &mut [0u8; 128][..];
    ///
    /// let write_result = header.write_to(excessive_bytes);
    ///
    /// assert!(write_result.is_err());
    /// assert_eq!(excessive_bytes, [0u8; 128]);
    /// ```
    #[must_use = "callers should check the return value to see if the operation succeeded"]
    #[inline]
    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
    fn write_to(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
    where
        Self: Immutable,
    {
        let src = self.as_bytes();
        if dst.len() == src.len() {
            // SAFETY: Within this branch of the conditional, we have ensured
            // that `dst.len()` is equal to `src.len()`. Neither the size of the
            // source nor the size of the destination change between the above
            // size check and the invocation of `copy_unchecked`.
            unsafe { util::copy_unchecked(src, dst) }
            Ok(())
        } else {
            Err(SizeError::new(self))
        }
    }

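The length-check-then-copy shape of `write_to` can be mirrored in safe std code, where `copy_from_slice` plays the role of `util::copy_unchecked`. A minimal sketch with a hypothetical `copy_exact` helper (not zerocopy API):

```rust
// Std-only sketch of `write_to`'s contract: copy only when the lengths match
// exactly, and leave `dst` untouched on failure.
fn copy_exact(src: &[u8], dst: &mut [u8]) -> Result<(), usize> {
    if dst.len() == src.len() {
        // Safe stand-in for `util::copy_unchecked`: the length check above is
        // exactly what `copy_from_slice` requires in order not to panic.
        dst.copy_from_slice(src);
        Ok(())
    } else {
        // Report the mismatched destination length rather than a `SizeError`.
        Err(dst.len())
    }
}

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];
    assert_eq!(copy_exact(&src, &mut dst), Ok(()));
    assert_eq!(dst, src);

    // Wrong-sized destination: an error, and the bytes stay unmodified.
    let mut small = [0u8; 2];
    assert_eq!(copy_exact(&src, &mut small), Err(2));
    assert_eq!(small, [0, 0]);
}
```

`write_to` performs the same check but skips the redundant one inside `copy_from_slice` by calling the unchecked copy directly.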
    /// Writes a copy of `self` to the prefix of `dst`.
5224
    ///
5225
    /// `write_to_prefix` writes `self` to the first `size_of_val(self)` bytes
5226
    /// of `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5227
    ///
5228
    /// # Examples
5229
    ///
5230
    /// ```
5231
    /// use zerocopy::IntoBytes;
5232
    /// # use zerocopy_derive::*;
5233
    ///
5234
    /// #[derive(IntoBytes, Immutable)]
5235
    /// #[repr(C)]
5236
    /// struct PacketHeader {
5237
    ///     src_port: [u8; 2],
5238
    ///     dst_port: [u8; 2],
5239
    ///     length: [u8; 2],
5240
    ///     checksum: [u8; 2],
5241
    /// }
5242
    ///
5243
    /// let header = PacketHeader {
5244
    ///     src_port: [0, 1],
5245
    ///     dst_port: [2, 3],
5246
    ///     length: [4, 5],
5247
    ///     checksum: [6, 7],
5248
    /// };
5249
    ///
5250
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5251
    ///
5252
    /// header.write_to_prefix(&mut bytes[..]);
5253
    ///
5254
    /// assert_eq!(bytes, [0, 1, 2, 3, 4, 5, 6, 7, 0, 0]);
5255
    /// ```
5256
    ///
5257
    /// If insufficient target bytes are provided, `write_to_prefix` returns
5258
    /// `Err` and leaves the target bytes unmodified:
5259
    ///
5260
    /// ```
5261
    /// # use zerocopy::IntoBytes;
5262
    /// # let header = u128::MAX;
5263
    /// let mut insufficient_bytes = &mut [0, 0][..];
5264
    ///
5265
    /// let write_result = header.write_to_prefix(insufficient_bytes);
5266
    ///
5267
    /// assert!(write_result.is_err());
5268
    /// assert_eq!(insufficient_bytes, [0, 0]);
5269
    /// ```
5270
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5271
    #[inline]
5272
    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5273
0
    fn write_to_prefix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5274
0
    where
5275
0
        Self: Immutable,
5276
    {
5277
0
        let src = self.as_bytes();
5278
0
        match dst.get_mut(..src.len()) {
5279
0
            Some(dst) => {
5280
                // SAFETY: Within this branch of the `match`, we have ensured
5281
                // through fallible subslicing that `dst.len()` is equal to
5282
                // `src.len()`. Neither the size of the source nor the size of
5283
                // the destination change between the above subslicing operation
5284
                // and the invocation of `copy_unchecked`.
5285
0
                unsafe { util::copy_unchecked(src, dst) }
5286
0
                Ok(())
5287
            }
5288
0
            None => Err(SizeError::new(self)),
5289
        }
5290
0
    }
5291
5292
    /// Writes a copy of `self` to the suffix of `dst`.
5293
    ///
5294
    /// `write_to_suffix` writes `self` to the last `size_of_val(self)` bytes of
5295
    /// `dst`. If `dst.len() < size_of_val(self)`, it returns `Err`.
5296
    ///
5297
    /// # Examples
5298
    ///
5299
    /// ```
5300
    /// use zerocopy::IntoBytes;
5301
    /// # use zerocopy_derive::*;
5302
    ///
5303
    /// #[derive(IntoBytes, Immutable)]
5304
    /// #[repr(C)]
5305
    /// struct PacketHeader {
5306
    ///     src_port: [u8; 2],
5307
    ///     dst_port: [u8; 2],
5308
    ///     length: [u8; 2],
5309
    ///     checksum: [u8; 2],
5310
    /// }
5311
    ///
5312
    /// let header = PacketHeader {
5313
    ///     src_port: [0, 1],
5314
    ///     dst_port: [2, 3],
5315
    ///     length: [4, 5],
5316
    ///     checksum: [6, 7],
5317
    /// };
5318
    ///
5319
    /// let mut bytes = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
5320
    ///
5321
    /// header.write_to_suffix(&mut bytes[..]);
5322
    ///
5323
    /// assert_eq!(bytes, [0, 0, 0, 1, 2, 3, 4, 5, 6, 7]);
5324
    /// ```
5332
    ///
5333
    /// If insufficient target bytes are provided, `write_to_suffix` returns
5334
    /// `Err` and leaves the target bytes unmodified:
5335
    ///
5336
    /// ```
5337
    /// # use zerocopy::IntoBytes;
5338
    /// # let header = u128::MAX;
5339
    /// let mut insufficient_bytes = &mut [0, 0][..];
5340
    ///
5341
    /// let write_result = header.write_to_suffix(insufficient_bytes);
5342
    ///
5343
    /// assert!(write_result.is_err());
5344
    /// assert_eq!(insufficient_bytes, [0, 0]);
5345
    /// ```
5346
    #[must_use = "callers should check the return value to see if the operation succeeded"]
5347
    #[inline]
5348
    #[allow(clippy::mut_from_ref)] // False positive: `&self -> &mut [u8]`
5349
0
    fn write_to_suffix(&self, dst: &mut [u8]) -> Result<(), SizeError<&Self, &mut [u8]>>
5350
0
    where
5351
0
        Self: Immutable,
5352
    {
5353
0
        let src = self.as_bytes();
5354
0
        let start = if let Some(start) = dst.len().checked_sub(src.len()) {
5355
0
            start
5356
        } else {
5357
0
            return Err(SizeError::new(self));
5358
        };
5359
0
        let dst = if let Some(dst) = dst.get_mut(start..) {
5360
0
            dst
5361
        } else {
5362
            // get_mut() should never return None here. We return a `SizeError`
5363
            // rather than .unwrap() because in the event the branch is not
5364
            // optimized away, returning a value is generally lighter-weight
5365
            // than panicking.
5366
0
            return Err(SizeError::new(self));
5367
        };
5368
        // SAFETY: Through fallible subslicing of `dst`, we have ensured that
5369
        // `dst.len()` is equal to `src.len()`. Neither the size of the source
5370
        // nor the size of the destination change between the above subslicing
5371
        // operation and the invocation of `copy_unchecked`.
5372
0
        unsafe {
5373
0
            util::copy_unchecked(src, dst);
5374
0
        }
5375
0
        Ok(())
5376
0
    }
5377
5378
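The offset arithmetic that `write_to_suffix` performs can be sketched in plain std Rust. This is a hand-rolled illustration, not zerocopy's API: `checked_sub` makes an undersized destination yield an error instead of a panic, and `SizeError` is again replaced by a unit error.

```rust
// Std-only sketch of the suffix-write pattern: compute the start
// offset fallibly, then copy into the tail of `dst`. An undersized
// `dst` is reported as an error and left unmodified.
fn write_suffix(src: &[u8], dst: &mut [u8]) -> Result<(), ()> {
    let start = dst.len().checked_sub(src.len()).ok_or(())?;
    // `dst[start..]` has exactly `src.len()` bytes by construction.
    dst[start..].copy_from_slice(src);
    Ok(())
}

fn main() {
    let mut buf = [0u8; 8];
    assert!(write_suffix(&[1, 2, 3], &mut buf).is_ok());
    assert_eq!(buf, [0, 0, 0, 0, 0, 1, 2, 3]);

    let mut small = [0u8; 2];
    assert!(write_suffix(&[1, 2, 3], &mut small).is_err());
    assert_eq!(small, [0, 0]); // untouched on failure
}
```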
    /// Writes a copy of `self` to an `io::Write`.
5379
    ///
5380
    /// This is a shorthand for `dst.write_all(self.as_bytes())`, and is useful
5381
    /// for interfacing with operating system byte sinks (files, sockets, etc.).
5382
    ///
5383
    /// # Examples
5384
    ///
5385
    /// ```no_run
5386
    /// use zerocopy::{byteorder::big_endian::U16, FromBytes, IntoBytes};
5387
    /// use std::fs::File;
5388
    /// # use zerocopy_derive::*;
5389
    ///
5390
    /// #[derive(FromBytes, IntoBytes, Immutable, KnownLayout)]
5391
    /// #[repr(C, packed)]
5392
    /// struct GrayscaleImage {
5393
    ///     height: U16,
5394
    ///     width: U16,
5395
    ///     pixels: [U16],
5396
    /// }
5397
    ///
5398
    /// let image = GrayscaleImage::ref_from_bytes(&[0, 0, 0, 0][..]).unwrap();
5399
    /// let mut file = File::create("image.bin").unwrap();
5400
    /// image.write_to_io(&mut file).unwrap();
5401
    /// ```
5402
    ///
5403
    /// If the write fails, `write_to_io` returns `Err` and a partial write may
5404
    /// have occurred; e.g.:
5405
    ///
5406
    /// ```
5407
    /// # use zerocopy::IntoBytes;
5408
    ///
5409
    /// let src = u128::MAX;
5410
    /// let mut dst = [0u8; 2];
5411
    ///
5412
    /// let write_result = src.write_to_io(&mut dst[..]);
5413
    ///
5414
    /// assert!(write_result.is_err());
5415
    /// assert_eq!(dst, [255, 255]);
5416
    /// ```
5417
    #[cfg(feature = "std")]
5418
    #[inline(always)]
5419
    fn write_to_io<W>(&self, mut dst: W) -> io::Result<()>
5420
    where
5421
        Self: Immutable,
5422
        W: io::Write,
5423
    {
5424
        dst.write_all(self.as_bytes())
5425
    }
5426
5427
    #[deprecated(since = "0.8.0", note = "`IntoBytes::as_bytes_mut` was renamed to `as_mut_bytes`")]
5428
    #[doc(hidden)]
5429
    #[inline]
5430
0
    fn as_bytes_mut(&mut self) -> &mut [u8]
5431
0
    where
5432
0
        Self: FromBytes,
5433
    {
5434
0
        self.as_mut_bytes()
5435
0
    }
5436
}
5437
5438
/// Analyzes whether a type is [`Unaligned`].
5439
///
5440
/// This derive analyzes, at compile time, whether the annotated type satisfies
5441
/// the [safety conditions] of `Unaligned` and implements `Unaligned` if it is
5442
/// sound to do so. This derive can be applied to structs, enums, and unions;
5443
/// e.g.:
5444
///
5445
/// ```
5446
/// # use zerocopy_derive::Unaligned;
5447
/// #[derive(Unaligned)]
5448
/// #[repr(C)]
5449
/// struct MyStruct {
5450
/// # /*
5451
///     ...
5452
/// # */
5453
/// }
5454
///
5455
/// #[derive(Unaligned)]
5456
/// #[repr(u8)]
5457
/// enum MyEnum {
5458
/// #   Variant0,
5459
/// # /*
5460
///     ...
5461
/// # */
5462
/// }
5463
///
5464
/// #[derive(Unaligned)]
5465
/// #[repr(packed)]
5466
/// union MyUnion {
5467
/// #   variant: u8,
5468
/// # /*
5469
///     ...
5470
/// # */
5471
/// }
5472
/// ```
5473
///
5474
/// # Analysis
5475
///
5476
/// *This section describes, roughly, the analysis performed by this derive to
5477
/// determine whether it is sound to implement `Unaligned` for a given type.
5478
/// Unless you are modifying the implementation of this derive, or attempting to
5479
/// manually implement `Unaligned` for a type yourself, you don't need to read
5480
/// this section.*
5481
///
5482
/// If a type has the following properties, then this derive can implement
5483
/// `Unaligned` for that type:
5484
///
5485
/// - If the type is a struct or union:
5486
///   - If `repr(align(N))` is provided, `N` must equal 1.
5487
///   - If the type is `repr(C)` or `repr(transparent)`, all fields must be
5488
///     [`Unaligned`].
5489
///   - If the type is not `repr(C)` or `repr(transparent)`, it must be
5490
///     `repr(packed)` or `repr(packed(1))`.
5491
/// - If the type is an enum:
5492
///   - If `repr(align(N))` is provided, `N` must equal 1.
5493
///   - It must be a field-less enum (meaning that all variants have no fields).
5494
///   - It must be `repr(i8)` or `repr(u8)`.
5495
///
5496
/// [safety conditions]: trait@Unaligned#safety
5497
#[cfg(any(feature = "derive", test))]
5498
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5499
pub use zerocopy_derive::Unaligned;
5500
5501
/// Types with no alignment requirement.
5502
///
5503
/// If `T: Unaligned`, then `align_of::<T>() == 1`.
5504
///
5505
/// # Implementation
5506
///
5507
/// **Do not implement this trait yourself!** Instead, use
5508
/// [`#[derive(Unaligned)]`][derive]; e.g.:
5509
///
5510
/// ```
5511
/// # use zerocopy_derive::Unaligned;
5512
/// #[derive(Unaligned)]
5513
/// #[repr(C)]
5514
/// struct MyStruct {
5515
/// # /*
5516
///     ...
5517
/// # */
5518
/// }
5519
///
5520
/// #[derive(Unaligned)]
5521
/// #[repr(u8)]
5522
/// enum MyEnum {
5523
/// #   Variant0,
5524
/// # /*
5525
///     ...
5526
/// # */
5527
/// }
5528
///
5529
/// #[derive(Unaligned)]
5530
/// #[repr(packed)]
5531
/// union MyUnion {
5532
/// #   variant: u8,
5533
/// # /*
5534
///     ...
5535
/// # */
5536
/// }
5537
/// ```
5538
///
5539
/// This derive performs a sophisticated, compile-time safety analysis to
5540
/// determine whether a type is `Unaligned`.
5541
///
5542
/// # Safety
5543
///
5544
/// *This section describes what is required in order for `T: Unaligned`, and
5545
/// what unsafe code may assume of such types. If you don't plan on implementing
5546
/// `Unaligned` manually, and you don't plan on writing unsafe code that
5547
/// operates on `Unaligned` types, then you don't need to read this section.*
5548
///
5549
/// If `T: Unaligned`, then unsafe code may assume that it is sound to produce a
5550
/// reference to `T` at any memory location regardless of alignment. If a type
5551
/// is marked as `Unaligned` which violates this contract, it may cause
5552
/// undefined behavior.
5553
///
5554
/// `#[derive(Unaligned)]` only permits [types which satisfy these
5555
/// requirements][derive-analysis].
5556
///
5557
#[cfg_attr(
5558
    feature = "derive",
5559
    doc = "[derive]: zerocopy_derive::Unaligned",
5560
    doc = "[derive-analysis]: zerocopy_derive::Unaligned#analysis"
5561
)]
5562
#[cfg_attr(
5563
    not(feature = "derive"),
5564
    doc = concat!("[derive]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html"),
5565
    doc = concat!("[derive-analysis]: https://docs.rs/zerocopy/", env!("CARGO_PKG_VERSION"), "/zerocopy/derive.Unaligned.html#analysis"),
5566
)]
5567
#[cfg_attr(
5568
    zerocopy_diagnostic_on_unimplemented_1_78_0,
5569
    diagnostic::on_unimplemented(note = "Consider adding `#[derive(Unaligned)]` to `{Self}`")
5570
)]
5571
pub unsafe trait Unaligned {
5572
    // The `Self: Sized` bound makes it so that `Unaligned` is still object
5573
    // safe.
5574
    #[doc(hidden)]
5575
    fn only_derive_is_allowed_to_implement_this_trait()
5576
    where
5577
        Self: Sized;
5578
}
5579
5580
/// Derives optimized [`PartialEq`] and [`Eq`] implementations.
5581
///
5582
/// This derive can be applied to structs and enums implementing both
5583
/// [`Immutable`] and [`IntoBytes`]; e.g.:
5584
///
5585
/// ```
5586
/// # use zerocopy_derive::{ByteEq, Immutable, IntoBytes};
5587
/// #[derive(ByteEq, Immutable, IntoBytes)]
5588
/// #[repr(C)]
5589
/// struct MyStruct {
5590
/// # /*
5591
///     ...
5592
/// # */
5593
/// }
5594
///
5595
/// #[derive(ByteEq, Immutable, IntoBytes)]
5596
/// #[repr(u8)]
5597
/// enum MyEnum {
5598
/// #   Variant,
5599
/// # /*
5600
///     ...
5601
/// # */
5602
/// }
5603
/// ```
5604
///
5605
/// The standard library's [`derive(Eq, PartialEq)`][derive@PartialEq] computes
5606
/// equality by individually comparing each field. Instead, the implementation
5607
/// of [`PartialEq::eq`] emitted by `derive(ByteEq)` converts the entirety of
5608
/// `self` and `other` to byte slices and compares those slices for equality.
5609
/// This may have performance advantages.
5610
#[cfg(any(feature = "derive", test))]
5611
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5612
pub use zerocopy_derive::ByteEq;
5613
/// Derives an optimized [`Hash`] implementation.
5614
///
5615
/// This derive can be applied to structs and enums implementing both
5616
/// [`Immutable`] and [`IntoBytes`]; e.g.:
5617
///
5618
/// ```
5619
/// # use zerocopy_derive::{ByteHash, Immutable, IntoBytes};
5620
/// #[derive(ByteHash, Immutable, IntoBytes)]
5621
/// #[repr(C)]
5622
/// struct MyStruct {
5623
/// # /*
5624
///     ...
5625
/// # */
5626
/// }
5627
///
5628
/// #[derive(ByteHash, Immutable, IntoBytes)]
5629
/// #[repr(u8)]
5630
/// enum MyEnum {
5631
/// #   Variant,
5632
/// # /*
5633
///     ...
5634
/// # */
5635
/// }
5636
/// ```
5637
///
5638
/// The standard library's [`derive(Hash)`][derive@Hash] produces hashes by
5639
/// individually hashing each field and combining the results. Instead, the
5640
/// implementations of [`Hash::hash()`] and [`Hash::hash_slice()`] generated by
5641
/// `derive(ByteHash)` convert the entirety of `self` to a byte slice and hash
5642
/// it in a single call to [`Hasher::write()`]. This may have performance
5643
/// advantages.
5644
///
5645
/// [`Hash`]: core::hash::Hash
5646
/// [`Hash::hash()`]: core::hash::Hash::hash()
5647
/// [`Hash::hash_slice()`]: core::hash::Hash::hash_slice()
5648
#[cfg(any(feature = "derive", test))]
5649
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5650
pub use zerocopy_derive::ByteHash;
5651
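The single-call hashing strategy described above can be illustrated with std's `DefaultHasher`. This is a hand-rolled sketch over a raw byte slice; `derive(ByteHash)` generates the equivalent for a concrete type by obtaining its bytes via `IntoBytes`.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Feed the entire byte representation to the hasher in one call,
// as the generated `Hash::hash` does, instead of hashing field by
// field and combining the results.
fn hash_all_bytes(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    hasher.write(bytes); // one `Hasher::write` for the whole value
    hasher.finish()
}

fn main() {
    // Values with equal byte representations hash equally.
    let header = [0u8, 1, 2, 3, 4, 5, 6, 7];
    let copy = header;
    assert_eq!(hash_all_bytes(&header), hash_all_bytes(&copy));
}
```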
/// Implements [`SplitAt`].
5652
///
5653
/// This derive can be applied to structs; e.g.:
5654
///
5655
/// ```
5656
/// # use zerocopy_derive::{KnownLayout, SplitAt};
5657
/// #[derive(SplitAt, KnownLayout)]
5658
/// #[repr(C)]
5659
/// struct MyStruct {
5660
/// # /*
5661
///     ...
5662
/// # */
5663
/// #    trailing: [u8],
/// }
5664
/// ```
5665
#[cfg(any(feature = "derive", test))]
5666
#[cfg_attr(doc_cfg, doc(cfg(feature = "derive")))]
5667
pub use zerocopy_derive::SplitAt;
5668
5669
#[cfg(feature = "alloc")]
5670
#[cfg_attr(doc_cfg, doc(cfg(feature = "alloc")))]
5671
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5672
mod alloc_support {
5673
    use super::*;
5674
5675
    /// Extends a `Vec<T>` by pushing `additional` new items onto the end of the
5676
    /// vector. The new items are initialized with zeros.
5677
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5678
    #[doc(hidden)]
5679
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5680
    #[inline(always)]
5681
    pub fn extend_vec_zeroed<T: FromZeros>(
5682
        v: &mut Vec<T>,
5683
        additional: usize,
5684
    ) -> Result<(), AllocError> {
5685
        <T as FromZeros>::extend_vec_zeroed(v, additional)
5686
    }
5687
5688
    /// Inserts `additional` new items into `Vec<T>` at `position`. The new
5689
    /// items are initialized with zeros.
5690
    ///
5691
    /// # Panics
5692
    ///
5693
    /// Panics if `position > v.len()`.
5694
    #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5695
    #[doc(hidden)]
5696
    #[deprecated(since = "0.8.0", note = "moved to `FromZeros`")]
5697
    #[inline(always)]
5698
    pub fn insert_vec_zeroed<T: FromZeros>(
5699
        v: &mut Vec<T>,
5700
        position: usize,
5701
        additional: usize,
5702
    ) -> Result<(), AllocError> {
5703
        <T as FromZeros>::insert_vec_zeroed(v, position, additional)
5704
    }
5705
}
5706
5707
#[cfg(feature = "alloc")]
5708
#[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
5709
#[doc(hidden)]
5710
pub use alloc_support::*;
5711
5712
#[cfg(test)]
5713
#[allow(clippy::assertions_on_result_states, clippy::unreadable_literal)]
5714
mod tests {
5715
    use static_assertions::assert_impl_all;
5716
5717
    use super::*;
5718
    use crate::util::testutil::*;
5719
5720
    // An unsized type.
5721
    //
5722
    // This is used to test the custom derives of our traits. The `[u8]` type
5723
    // gets a hand-rolled impl, so it doesn't exercise our custom derives.
5724
    #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Unaligned, Immutable)]
5725
    #[repr(transparent)]
5726
    struct Unsized([u8]);
5727
5728
    impl Unsized {
5729
        fn from_mut_slice(slc: &mut [u8]) -> &mut Unsized {
5730
            // SAFETY: This is *probably* sound - since the layouts of `[u8]` and
5731
            // `Unsized` are the same, so are the layouts of `&mut [u8]` and
5732
            // `&mut Unsized`. [1] Even if it turns out that this isn't actually
5733
            // guaranteed by the language spec, we can just change this since
5734
            // it's in test code.
5735
            //
5736
            // [1] https://github.com/rust-lang/unsafe-code-guidelines/issues/375
5737
            unsafe { mem::transmute(slc) }
5738
        }
5739
    }
5740
5741
    #[test]
5742
    fn test_known_layout() {
5743
        // Test that `$ty` and `ManuallyDrop<$ty>` have the expected layout.
5744
        // Test that `PhantomData<$ty>` has the same layout as `()` regardless
5745
        // of `$ty`.
5746
        macro_rules! test {
5747
            ($ty:ty, $expect:expr) => {
5748
                let expect = $expect;
5749
                assert_eq!(<$ty as KnownLayout>::LAYOUT, expect);
5750
                assert_eq!(<ManuallyDrop<$ty> as KnownLayout>::LAYOUT, expect);
5751
                assert_eq!(<PhantomData<$ty> as KnownLayout>::LAYOUT, <() as KnownLayout>::LAYOUT);
5752
            };
5753
        }
5754
5755
        let layout =
5756
            |offset, align, trailing_slice_elem_size, statically_shallow_unpadded| DstLayout {
5757
                align: NonZeroUsize::new(align).unwrap(),
5758
                size_info: match trailing_slice_elem_size {
5759
                    None => SizeInfo::Sized { size: offset },
5760
                    Some(elem_size) => {
5761
                        SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size })
5762
                    }
5763
                },
5764
                statically_shallow_unpadded,
5765
            };
5766
5767
        test!((), layout(0, 1, None, false));
5768
        test!(u8, layout(1, 1, None, false));
5769
        // Use `align_of` because `u64` alignment may be smaller than 8 on some
5770
        // platforms.
5771
        test!(u64, layout(8, mem::align_of::<u64>(), None, false));
5772
        test!(AU64, layout(8, 8, None, false));
5773
5774
        test!(Option<&'static ()>, usize::LAYOUT);
5775
5776
        test!([()], layout(0, 1, Some(0), true));
5777
        test!([u8], layout(0, 1, Some(1), true));
5778
        test!(str, layout(0, 1, Some(1), true));
5779
    }
5780
5781
    #[cfg(feature = "derive")]
5782
    #[test]
5783
    fn test_known_layout_derive() {
5784
        // In this and other files (`late_compile_pass.rs`,
5785
        // `mid_compile_pass.rs`, and `struct.rs`), we test success and failure
5786
        // modes of `derive(KnownLayout)` for the following combination of
5787
        // properties:
5788
        //
5789
        // +------------+--------------------------------------+-----------+
5790
        // |            |      trailing field properties       |           |
5791
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5792
        // |------------+----------+----------------+----------+-----------|
5793
        // |          N |        N |              N |        N |      KL00 |
5794
        // |          N |        N |              N |        Y |      KL01 |
5795
        // |          N |        N |              Y |        N |      KL02 |
5796
        // |          N |        N |              Y |        Y |      KL03 |
5797
        // |          N |        Y |              N |        N |      KL04 |
5798
        // |          N |        Y |              N |        Y |      KL05 |
5799
        // |          N |        Y |              Y |        N |      KL06 |
5800
        // |          N |        Y |              Y |        Y |      KL07 |
5801
        // |          Y |        N |              N |        N |      KL08 |
5802
        // |          Y |        N |              N |        Y |      KL09 |
5803
        // |          Y |        N |              Y |        N |      KL10 |
5804
        // |          Y |        N |              Y |        Y |      KL11 |
5805
        // |          Y |        Y |              N |        N |      KL12 |
5806
        // |          Y |        Y |              N |        Y |      KL13 |
5807
        // |          Y |        Y |              Y |        N |      KL14 |
5808
        // |          Y |        Y |              Y |        Y |      KL15 |
5809
        // +------------+----------+----------------+----------+-----------+
5810
5811
        struct NotKnownLayout<T = ()> {
5812
            _t: T,
5813
        }
5814
5815
        #[derive(KnownLayout)]
5816
        #[repr(C)]
5817
        struct AlignSize<const ALIGN: usize, const SIZE: usize>
5818
        where
5819
            elain::Align<ALIGN>: elain::Alignment,
5820
        {
5821
            _align: elain::Align<ALIGN>,
5822
            size: [u8; SIZE],
5823
        }
5824
5825
        type AU16 = AlignSize<2, 2>;
5826
        type AU32 = AlignSize<4, 4>;
5827
5828
        fn _assert_kl<T: ?Sized + KnownLayout>(_: &T) {}
5829
5830
        let sized_layout = |align, size| DstLayout {
5831
            align: NonZeroUsize::new(align).unwrap(),
5832
            size_info: SizeInfo::Sized { size },
5833
            statically_shallow_unpadded: false,
5834
        };
5835
5836
        let unsized_layout = |align, elem_size, offset, statically_shallow_unpadded| DstLayout {
5837
            align: NonZeroUsize::new(align).unwrap(),
5838
            size_info: SizeInfo::SliceDst(TrailingSliceLayout { offset, elem_size }),
5839
            statically_shallow_unpadded,
5840
        };
5841
5842
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5843
        // |          N |        N |              N |        Y |      KL01 |
5844
        #[allow(dead_code)]
5845
        #[derive(KnownLayout)]
5846
        struct KL01(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5847
5848
        let expected = DstLayout::for_type::<KL01>();
5849
5850
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, expected);
5851
        assert_eq!(<KL01 as KnownLayout>::LAYOUT, sized_layout(4, 8));
5852
5853
        // ...with `align(N)`:
5854
        #[allow(dead_code)]
5855
        #[derive(KnownLayout)]
5856
        #[repr(align(64))]
5857
        struct KL01Align(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5858
5859
        let expected = DstLayout::for_type::<KL01Align>();
5860
5861
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, expected);
5862
        assert_eq!(<KL01Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5863
5864
        // ...with `packed`:
5865
        #[allow(dead_code)]
5866
        #[derive(KnownLayout)]
5867
        #[repr(packed)]
5868
        struct KL01Packed(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5869
5870
        let expected = DstLayout::for_type::<KL01Packed>();
5871
5872
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, expected);
5873
        assert_eq!(<KL01Packed as KnownLayout>::LAYOUT, sized_layout(1, 6));
5874
5875
        // ...with `packed(N)`:
5876
        #[allow(dead_code)]
5877
        #[derive(KnownLayout)]
5878
        #[repr(packed(2))]
5879
        struct KL01PackedN(NotKnownLayout<AU32>, NotKnownLayout<AU16>);
5880
5881
        assert_impl_all!(KL01PackedN: KnownLayout);
5882
5883
        let expected = DstLayout::for_type::<KL01PackedN>();
5884
5885
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, expected);
5886
        assert_eq!(<KL01PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5887
5888
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5889
        // |          N |        N |              Y |        Y |      KL03 |
5890
        #[allow(dead_code)]
5891
        #[derive(KnownLayout)]
5892
        struct KL03(NotKnownLayout, u8);
5893
5894
        let expected = DstLayout::for_type::<KL03>();
5895
5896
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, expected);
5897
        assert_eq!(<KL03 as KnownLayout>::LAYOUT, sized_layout(1, 1));
5898
5899
        // ... with `align(N)`
5900
        #[allow(dead_code)]
5901
        #[derive(KnownLayout)]
5902
        #[repr(align(64))]
5903
        struct KL03Align(NotKnownLayout<AU32>, u8);
5904
5905
        let expected = DstLayout::for_type::<KL03Align>();
5906
5907
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, expected);
5908
        assert_eq!(<KL03Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
5909
5910
        // ... with `packed`:
5911
        #[allow(dead_code)]
5912
        #[derive(KnownLayout)]
5913
        #[repr(packed)]
5914
        struct KL03Packed(NotKnownLayout<AU32>, u8);
5915
5916
        let expected = DstLayout::for_type::<KL03Packed>();
5917
5918
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, expected);
5919
        assert_eq!(<KL03Packed as KnownLayout>::LAYOUT, sized_layout(1, 5));
5920
5921
        // ... with `packed(N)`
5922
        #[allow(dead_code)]
5923
        #[derive(KnownLayout)]
5924
        #[repr(packed(2))]
5925
        struct KL03PackedN(NotKnownLayout<AU32>, u8);
5926
5927
        assert_impl_all!(KL03PackedN: KnownLayout);
5928
5929
        let expected = DstLayout::for_type::<KL03PackedN>();
5930
5931
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, expected);
5932
        assert_eq!(<KL03PackedN as KnownLayout>::LAYOUT, sized_layout(2, 6));
5933
5934
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5935
        // |          N |        Y |              N |        Y |      KL05 |
5936
        #[allow(dead_code)]
5937
        #[derive(KnownLayout)]
5938
        struct KL05<T>(u8, T);
5939
5940
        fn _test_kl05<T>(t: T) -> impl KnownLayout {
5941
            KL05(0u8, t)
5942
        }
5943
5944
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5945
        // |          N |        Y |              Y |        Y |      KL07 |
5946
        #[allow(dead_code)]
5947
        #[derive(KnownLayout)]
5948
        struct KL07<T: KnownLayout>(u8, T);
5949
5950
        fn _test_kl07<T: KnownLayout>(t: T) -> impl KnownLayout {
5951
            let _ = KL07(0u8, t);
5952
        }
5953
5954
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
5955
        // |          Y |        N |              Y |        N |      KL10 |
5956
        #[allow(dead_code)]
5957
        #[derive(KnownLayout)]
5958
        #[repr(C)]
5959
        struct KL10(NotKnownLayout<AU32>, [u8]);
5960
5961
        let expected = DstLayout::new_zst(None)
5962
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5963
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5964
            .pad_to_align();
5965
5966
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, expected);
5967
        assert_eq!(<KL10 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 4, false));
5968
5969
        // ...with `align(N)`:
5970
        #[allow(dead_code)]
5971
        #[derive(KnownLayout)]
5972
        #[repr(C, align(64))]
5973
        struct KL10Align(NotKnownLayout<AU32>, [u8]);
5974
5975
        let repr_align = NonZeroUsize::new(64);
5976
5977
        let expected = DstLayout::new_zst(repr_align)
5978
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), None)
5979
            .extend(<[u8] as KnownLayout>::LAYOUT, None)
5980
            .pad_to_align();
5981
5982
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, expected);
5983
        assert_eq!(<KL10Align as KnownLayout>::LAYOUT, unsized_layout(64, 1, 4, false));
5984
5985
        // ...with `packed`:
5986
        #[allow(dead_code)]
5987
        #[derive(KnownLayout)]
5988
        #[repr(C, packed)]
5989
        struct KL10Packed(NotKnownLayout<AU32>, [u8]);
5990
5991
        let repr_packed = NonZeroUsize::new(1);
5992
5993
        let expected = DstLayout::new_zst(None)
5994
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
5995
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
5996
            .pad_to_align();
5997
5998
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, expected);
5999
        assert_eq!(<KL10Packed as KnownLayout>::LAYOUT, unsized_layout(1, 1, 4, false));
6000
6001
        // ...with `packed(N)`:
6002
        #[allow(dead_code)]
6003
        #[derive(KnownLayout)]
6004
        #[repr(C, packed(2))]
6005
        struct KL10PackedN(NotKnownLayout<AU32>, [u8]);
6006
6007
        let repr_packed = NonZeroUsize::new(2);
6008
6009
        let expected = DstLayout::new_zst(None)
6010
            .extend(DstLayout::for_type::<NotKnownLayout<AU32>>(), repr_packed)
6011
            .extend(<[u8] as KnownLayout>::LAYOUT, repr_packed)
6012
            .pad_to_align();
6013
6014
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, expected);
6015
        assert_eq!(<KL10PackedN as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4, false));
6016
6017
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
        // |          Y |        N |              Y |        Y |      KL11 |
        #[allow(dead_code)]
        #[derive(KnownLayout)]
        #[repr(C)]
        struct KL11(NotKnownLayout<AU64>, u8);

        let expected = DstLayout::new_zst(None)
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
            .extend(<u8 as KnownLayout>::LAYOUT, None)
            .pad_to_align();

        assert_eq!(<KL11 as KnownLayout>::LAYOUT, expected);
        assert_eq!(<KL11 as KnownLayout>::LAYOUT, sized_layout(8, 16));

        // ...with `align(N)`:
        #[allow(dead_code)]
        #[derive(KnownLayout)]
        #[repr(C, align(64))]
        struct KL11Align(NotKnownLayout<AU64>, u8);

        let repr_align = NonZeroUsize::new(64);

        let expected = DstLayout::new_zst(repr_align)
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), None)
            .extend(<u8 as KnownLayout>::LAYOUT, None)
            .pad_to_align();

        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, expected);
        assert_eq!(<KL11Align as KnownLayout>::LAYOUT, sized_layout(64, 64));
        // ...with `packed`:
        #[allow(dead_code)]
        #[derive(KnownLayout)]
        #[repr(C, packed)]
        struct KL11Packed(NotKnownLayout<AU64>, u8);

        let repr_packed = NonZeroUsize::new(1);

        let expected = DstLayout::new_zst(None)
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
            .pad_to_align();

        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, expected);
        assert_eq!(<KL11Packed as KnownLayout>::LAYOUT, sized_layout(1, 9));

        // ...with `packed(N)`:
        #[allow(dead_code)]
        #[derive(KnownLayout)]
        #[repr(C, packed(2))]
        struct KL11PackedN(NotKnownLayout<AU64>, u8);

        let repr_packed = NonZeroUsize::new(2);

        let expected = DstLayout::new_zst(None)
            .extend(DstLayout::for_type::<NotKnownLayout<AU64>>(), repr_packed)
            .extend(<u8 as KnownLayout>::LAYOUT, repr_packed)
            .pad_to_align();

        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, expected);
        assert_eq!(<KL11PackedN as KnownLayout>::LAYOUT, sized_layout(2, 10));
        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
        // |          Y |        Y |              Y |        N |      KL14 |
        #[allow(dead_code)]
        #[derive(KnownLayout)]
        #[repr(C)]
        struct KL14<T: ?Sized + KnownLayout>(u8, T);

        fn _test_kl14<T: ?Sized + KnownLayout>(kl: &KL14<T>) {
            _assert_kl(kl)
        }

        // | `repr(C)`? | generic? | `KnownLayout`? | `Sized`? | Type Name |
        // |          Y |        Y |              Y |        Y |      KL15 |
        #[allow(dead_code)]
        #[derive(KnownLayout)]
        #[repr(C)]
        struct KL15<T: KnownLayout>(u8, T);

        fn _test_kl15<T: KnownLayout>(t: T) -> impl KnownLayout {
            let _ = KL15(0u8, t);
        }
        // Test a variety of combinations of field types:
        //  - ()
        //  - u8
        //  - AU16
        //  - [()]
        //  - [u8]
        //  - [AU16]

        #[allow(clippy::upper_case_acronyms, dead_code)]
        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLTU<T, U: ?Sized>(T, U);

        assert_eq!(<KLTU<(), ()> as KnownLayout>::LAYOUT, sized_layout(1, 0));

        assert_eq!(<KLTU<(), u8> as KnownLayout>::LAYOUT, sized_layout(1, 1));

        assert_eq!(<KLTU<(), AU16> as KnownLayout>::LAYOUT, sized_layout(2, 2));

        assert_eq!(<KLTU<(), [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 0, false));

        assert_eq!(<KLTU<(), [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0, false));

        assert_eq!(<KLTU<(), [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 0, false));

        assert_eq!(<KLTU<u8, ()> as KnownLayout>::LAYOUT, sized_layout(1, 1));

        assert_eq!(<KLTU<u8, u8> as KnownLayout>::LAYOUT, sized_layout(1, 2));

        assert_eq!(<KLTU<u8, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));

        assert_eq!(<KLTU<u8, [()]> as KnownLayout>::LAYOUT, unsized_layout(1, 0, 1, false));

        assert_eq!(<KLTU<u8, [u8]> as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1, false));

        assert_eq!(<KLTU<u8, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2, false));

        assert_eq!(<KLTU<AU16, ()> as KnownLayout>::LAYOUT, sized_layout(2, 2));

        assert_eq!(<KLTU<AU16, u8> as KnownLayout>::LAYOUT, sized_layout(2, 4));

        assert_eq!(<KLTU<AU16, AU16> as KnownLayout>::LAYOUT, sized_layout(2, 4));

        assert_eq!(<KLTU<AU16, [()]> as KnownLayout>::LAYOUT, unsized_layout(2, 0, 2, false));

        assert_eq!(<KLTU<AU16, [u8]> as KnownLayout>::LAYOUT, unsized_layout(2, 1, 2, false));

        assert_eq!(<KLTU<AU16, [AU16]> as KnownLayout>::LAYOUT, unsized_layout(2, 2, 2, false));
        // Test a variety of field counts.

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF0;

        assert_eq!(<KLF0 as KnownLayout>::LAYOUT, sized_layout(1, 0));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF1([u8]);

        assert_eq!(<KLF1 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 0, true));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF2(NotKnownLayout<u8>, [u8]);

        assert_eq!(<KLF2 as KnownLayout>::LAYOUT, unsized_layout(1, 1, 1, false));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF3(NotKnownLayout<u8>, NotKnownLayout<AU16>, [u8]);

        assert_eq!(<KLF3 as KnownLayout>::LAYOUT, unsized_layout(2, 1, 4, false));

        #[derive(KnownLayout)]
        #[repr(C)]
        struct KLF4(NotKnownLayout<u8>, NotKnownLayout<AU16>, NotKnownLayout<AU32>, [u8]);

        assert_eq!(<KLF4 as KnownLayout>::LAYOUT, unsized_layout(4, 1, 8, false));
    }
    #[test]
    fn test_object_safety() {
        fn _takes_no_cell(_: &dyn Immutable) {}
        fn _takes_unaligned(_: &dyn Unaligned) {}
    }
    #[test]
    fn test_from_zeros_only() {
        // Test types that implement `FromZeros` but not `FromBytes`.

        assert!(!bool::new_zeroed());
        assert_eq!(char::new_zeroed(), '\0');

        #[cfg(feature = "alloc")]
        {
            assert_eq!(bool::new_box_zeroed(), Ok(Box::new(false)));
            assert_eq!(char::new_box_zeroed(), Ok(Box::new('\0')));

            assert_eq!(
                <[bool]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
                [false, false, false]
            );
            assert_eq!(
                <[char]>::new_box_zeroed_with_elems(3).unwrap().as_ref(),
                ['\0', '\0', '\0']
            );

            assert_eq!(bool::new_vec_zeroed(3).unwrap().as_ref(), [false, false, false]);
            assert_eq!(char::new_vec_zeroed(3).unwrap().as_ref(), ['\0', '\0', '\0']);
        }

        let mut string = "hello".to_string();
        let s: &mut str = string.as_mut();
        assert_eq!(s, "hello");
        s.zero();
        assert_eq!(s, "\0\0\0\0\0");
    }
    #[test]
    fn test_zst_count_preserved() {
        // Test that, when an explicit count is provided for a type with a
        // ZST trailing slice element, that count is preserved. This is
        // important since, for such types, all element counts result in objects
        // of the same size, and so the correct behavior is ambiguous. However,
        // preserving the count as requested by the user is the behavior that we
        // document publicly.

        // FromZeros methods
        #[cfg(feature = "alloc")]
        assert_eq!(<[()]>::new_box_zeroed_with_elems(3).unwrap().len(), 3);
        #[cfg(feature = "alloc")]
        assert_eq!(<()>::new_vec_zeroed(3).unwrap().len(), 3);

        // FromBytes methods
        assert_eq!(<[()]>::ref_from_bytes_with_elems(&[][..], 3).unwrap().len(), 3);
        assert_eq!(<[()]>::ref_from_prefix_with_elems(&[][..], 3).unwrap().0.len(), 3);
        assert_eq!(<[()]>::ref_from_suffix_with_elems(&[][..], 3).unwrap().1.len(), 3);
        assert_eq!(<[()]>::mut_from_bytes_with_elems(&mut [][..], 3).unwrap().len(), 3);
        assert_eq!(<[()]>::mut_from_prefix_with_elems(&mut [][..], 3).unwrap().0.len(), 3);
        assert_eq!(<[()]>::mut_from_suffix_with_elems(&mut [][..], 3).unwrap().1.len(), 3);
    }
    #[test]
    fn test_read_write() {
        const VAL: u64 = 0x12345678;
        #[cfg(target_endian = "big")]
        const VAL_BYTES: [u8; 8] = VAL.to_be_bytes();
        #[cfg(target_endian = "little")]
        const VAL_BYTES: [u8; 8] = VAL.to_le_bytes();
        const ZEROS: [u8; 8] = [0u8; 8];

        // Test `FromBytes::{read_from_bytes, read_from_prefix, read_from_suffix}`.

        assert_eq!(u64::read_from_bytes(&VAL_BYTES[..]), Ok(VAL));
        // The first 8 bytes are from `VAL_BYTES` and the second 8 bytes are all
        // zeros.
        let bytes_with_prefix: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
        assert_eq!(u64::read_from_prefix(&bytes_with_prefix[..]), Ok((VAL, &ZEROS[..])));
        assert_eq!(u64::read_from_suffix(&bytes_with_prefix[..]), Ok((&VAL_BYTES[..], 0)));
        // The first 8 bytes are all zeros and the second 8 bytes are from
        // `VAL_BYTES`.
        let bytes_with_suffix: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
        assert_eq!(u64::read_from_prefix(&bytes_with_suffix[..]), Ok((0, &VAL_BYTES[..])));
        assert_eq!(u64::read_from_suffix(&bytes_with_suffix[..]), Ok((&ZEROS[..], VAL)));

        // Test `IntoBytes::{write_to, write_to_prefix, write_to_suffix}`.

        let mut bytes = [0u8; 8];
        assert_eq!(VAL.write_to(&mut bytes[..]), Ok(()));
        assert_eq!(bytes, VAL_BYTES);
        let mut bytes = [0u8; 16];
        assert_eq!(VAL.write_to_prefix(&mut bytes[..]), Ok(()));
        let want: [u8; 16] = transmute!([VAL_BYTES, [0; 8]]);
        assert_eq!(bytes, want);
        let mut bytes = [0u8; 16];
        assert_eq!(VAL.write_to_suffix(&mut bytes[..]), Ok(()));
        let want: [u8; 16] = transmute!([[0; 8], VAL_BYTES]);
        assert_eq!(bytes, want);
    }
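As an aside, here is an illustrative, std-only sketch (not part of the zerocopy suite) of the endianness convention the `VAL_BYTES` constants above rely on: `to_ne_bytes` agrees with `to_le_bytes` on little-endian targets and with `to_be_bytes` on big-endian targets, and the round trip through `from_ne_bytes` always recovers the original value.

```rust
// Illustrative only: native-endian round-tripping with the standard library,
// mirroring the endian-dependent `VAL_BYTES` constants in `test_read_write`.
fn main() {
    let val: u64 = 0x12345678;
    let bytes = val.to_ne_bytes();
    // `to_ne_bytes` matches the target's native byte order.
    if cfg!(target_endian = "little") {
        assert_eq!(bytes, val.to_le_bytes());
    } else {
        assert_eq!(bytes, val.to_be_bytes());
    }
    // The round trip recovers the original value regardless of endianness.
    assert_eq!(u64::from_ne_bytes(bytes), val);
}
```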
    #[test]
    #[cfg(feature = "std")]
    fn test_read_io_with_padding_soundness() {
        // This test is designed to exhibit potential UB in
        // `FromBytes::read_from_io` (see #2319, #2320).

        // On most platforms (where `align_of::<u16>() == 2`), `WithPadding`
        // will have inter-field padding between `x` and `y`.
        #[derive(FromBytes)]
        #[repr(C)]
        struct WithPadding {
            x: u8,
            y: u16,
        }
        struct ReadsInRead;
        impl std::io::Read for ReadsInRead {
            fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
                // This body branches on every byte of `buf`, ensuring that it
                // exhibits UB if any byte of `buf` is uninitialized.
                if buf.iter().all(|&x| x == 0) {
                    Ok(buf.len())
                } else {
                    buf.iter_mut().for_each(|x| *x = 0);
                    Ok(buf.len())
                }
            }
        }
        assert!(matches!(WithPadding::read_from_io(ReadsInRead), Ok(WithPadding { x: 0, y: 0 })));
    }
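A brief, std-only sketch (illustrative, not part of the suite) of why `WithPadding` has inter-field padding in the first place: under `repr(C)`, fields are laid out in order at offsets rounded up to their alignments, so on targets where `align_of::<u16>() == 2` one padding byte sits between the `u8` and the `u16`.

```rust
// Illustrative only: sizing a repr(C) struct like `WithPadding` with std.
use std::mem::{align_of, size_of};

#[repr(C)]
struct WithPadding {
    x: u8,
    y: u16,
}

fn main() {
    // repr(C) struct alignment is the maximum alignment of its fields.
    assert_eq!(align_of::<WithPadding>(), align_of::<u16>());
    if align_of::<u16>() == 2 {
        // 1 byte for `x`, 1 padding byte, 2 bytes for `y` — 4 total, not 3.
        assert_eq!(size_of::<WithPadding>(), 4);
    }
}
```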
    #[test]
    #[cfg(feature = "std")]
    fn test_read_write_io() {
        let mut long_buffer = [0, 0, 0, 0];
        assert!(matches!(u16::MAX.write_to_io(&mut long_buffer[..]), Ok(())));
        assert_eq!(long_buffer, [255, 255, 0, 0]);
        assert!(matches!(u16::read_from_io(&long_buffer[..]), Ok(u16::MAX)));

        let mut short_buffer = [0, 0];
        assert!(u32::MAX.write_to_io(&mut short_buffer[..]).is_err());
        assert_eq!(short_buffer, [255, 255]);
        assert!(u32::read_from_io(&short_buffer[..]).is_err());
    }
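A std-only sketch (illustrative; an assumption about the I/O plumbing rather than zerocopy's implementation) of why byte slices can serve as the readers and writers above: the standard library implements `Read` for `&[u8]` and `Write` for `&mut [u8]`, each consuming the slice from the front.

```rust
// Illustrative only: byte slices as std I/O endpoints, as used by the
// `write_to_io`/`read_from_io` calls in `test_read_write_io`.
use std::io::{Read, Write};

fn main() {
    let mut buf = [0u8; 4];
    {
        // `&mut [u8]` is a writer that fills the slice from the front;
        // writing past its end is an error, mirroring the short-buffer case.
        let mut w: &mut [u8] = &mut buf[..];
        w.write_all(&[255, 255]).unwrap();
    }
    assert_eq!(buf, [255, 255, 0, 0]);

    // `&[u8]` is a reader that yields its bytes from the front.
    let mut r: &[u8] = &buf[..];
    let mut out = [0u8; 2];
    r.read_exact(&mut out).unwrap();
    assert_eq!(out, [255, 255]);
}
```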
    #[test]
    fn test_try_from_bytes_try_read_from() {
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[0]), Ok(false));
        assert_eq!(<bool as TryFromBytes>::try_read_from_bytes(&[1]), Ok(true));

        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[0, 2]), Ok((false, &[2][..])));
        assert_eq!(<bool as TryFromBytes>::try_read_from_prefix(&[1, 2]), Ok((true, &[2][..])));

        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 0]), Ok((&[2][..], false)));
        assert_eq!(<bool as TryFromBytes>::try_read_from_suffix(&[2, 1]), Ok((&[2][..], true)));

        // If we don't pass enough bytes, it fails.
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_bytes(&[]),
            Err(TryReadError::Size(_))
        ));
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_prefix(&[]),
            Err(TryReadError::Size(_))
        ));
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_suffix(&[]),
            Err(TryReadError::Size(_))
        ));

        // If we pass too many bytes, it fails.
        assert!(matches!(
            <u8 as TryFromBytes>::try_read_from_bytes(&[0, 0]),
            Err(TryReadError::Size(_))
        ));

        // If we pass an invalid value, it fails.
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_bytes(&[2]),
            Err(TryReadError::Validity(_))
        ));
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_prefix(&[2, 0]),
            Err(TryReadError::Validity(_))
        ));
        assert!(matches!(
            <bool as TryFromBytes>::try_read_from_suffix(&[0, 2]),
            Err(TryReadError::Validity(_))
        ));
        // Reading from a misaligned buffer should still succeed. Since `AU64`'s
        // alignment is 8, and since we read from two adjacent addresses one
        // byte apart, it is guaranteed that at least one of them (though
        // possibly both) will be misaligned.
        let bytes: [u8; 9] = [0, 0, 0, 0, 0, 0, 0, 0, 0];
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[..8]), Ok(AU64(0)));
        assert_eq!(<AU64 as TryFromBytes>::try_read_from_bytes(&bytes[1..9]), Ok(AU64(0)));

        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[..8]),
            Ok((AU64(0), &[][..]))
        );
        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_prefix(&bytes[1..9]),
            Ok((AU64(0), &[][..]))
        );

        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[..8]),
            Ok((&[][..], AU64(0)))
        );
        assert_eq!(
            <AU64 as TryFromBytes>::try_read_from_suffix(&bytes[1..9]),
            Ok((&[][..], AU64(0)))
        );
    }
    #[test]
    fn test_ref_from_mut_from() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` success cases.
        // Exhaustive coverage for these methods is provided by the `Ref` tests
        // above, which these helper methods defer to.

        let mut buf =
            Align::<[u8; 16], AU64>::new([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);

        assert_eq!(
            AU64::ref_from_bytes(&buf.t[8..]).unwrap().0.to_ne_bytes(),
            [8, 9, 10, 11, 12, 13, 14, 15]
        );
        let suffix = AU64::mut_from_bytes(&mut buf.t[8..]).unwrap();
        suffix.0 = 0x0101010101010101;
        // The `[u8; 9]` is a non-half size of the full buffer, which would catch
        // `from_prefix` having the same implementation as `from_suffix` (issues #506, #511).
        assert_eq!(
            <[u8; 9]>::ref_from_suffix(&buf.t[..]).unwrap(),
            (&[0, 1, 2, 3, 4, 5, 6][..], &[7u8, 1, 1, 1, 1, 1, 1, 1, 1])
        );
        let (prefix, suffix) = AU64::mut_from_suffix(&mut buf.t[1..]).unwrap();
        assert_eq!(prefix, &mut [1u8, 2, 3, 4, 5, 6, 7][..]);
        suffix.0 = 0x0202020202020202;
        let (prefix, suffix) = <[u8; 10]>::mut_from_suffix(&mut buf.t[..]).unwrap();
        assert_eq!(prefix, &mut [0u8, 1, 2, 3, 4, 5][..]);
        suffix[0] = 42;
        assert_eq!(
            <[u8; 9]>::ref_from_prefix(&buf.t[..]).unwrap(),
            (&[0u8, 1, 2, 3, 4, 5, 42, 7, 2], &[2u8, 2, 2, 2, 2, 2, 2][..])
        );
        <[u8; 2]>::mut_from_prefix(&mut buf.t[..]).unwrap().0[1] = 30;
        assert_eq!(buf.t, [0, 30, 2, 3, 4, 5, 42, 7, 2, 2, 2, 2, 2, 2, 2, 2]);
    }
    #[test]
    fn test_ref_from_mut_from_error() {
        // Test `FromBytes::{ref_from, mut_from}{,_prefix,_suffix}` error cases.

        // Fail because the buffer is too large.
        let mut buf = Align::<[u8; 16], AU64>::default();
        // `buf.t` should be aligned to 8, so only the length check should fail.
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());

        // Fail because the buffer is too small.
        let mut buf = Align::<[u8; 4], AU64>::default();
        assert!(AU64::ref_from_bytes(&buf.t[..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_bytes(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_bytes(&mut buf.t[..]).is_err());
        assert!(AU64::ref_from_prefix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_prefix(&mut buf.t[..]).is_err());
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_prefix(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_prefix(&mut buf.t[..]).is_err());
        assert!(<[u8; 8]>::ref_from_suffix(&buf.t[..]).is_err());
        assert!(<[u8; 8]>::mut_from_suffix(&mut buf.t[..]).is_err());

        // Fail because the alignment is insufficient.
        let mut buf = Align::<[u8; 13], AU64>::default();
        assert!(AU64::ref_from_bytes(&buf.t[1..]).is_err());
        assert!(AU64::mut_from_bytes(&mut buf.t[1..]).is_err());
        assert!(AU64::ref_from_prefix(&buf.t[1..]).is_err());
        assert!(AU64::mut_from_prefix(&mut buf.t[1..]).is_err());
        assert!(AU64::ref_from_suffix(&buf.t[..]).is_err());
        assert!(AU64::mut_from_suffix(&mut buf.t[..]).is_err());
    }
    #[test]
    fn test_to_methods() {
        /// Run a series of tests by calling `IntoBytes` methods on `t`.
        ///
        /// `bytes` is the expected byte sequence returned from `t.as_bytes()`
        /// before `t` has been modified. `post_mutation` is the expected
        /// sequence returned from `t.as_bytes()` after `t.as_mut_bytes()[0]`
        /// has had its bits flipped (by applying `^= 0xFF`).
        ///
        /// `N` is the size of `t` in bytes.
        fn test<T: FromBytes + IntoBytes + Immutable + Debug + Eq + ?Sized, const N: usize>(
            t: &mut T,
            bytes: &[u8],
            post_mutation: &T,
        ) {
            // Test that we can access the underlying bytes, and that we get the
            // right bytes and the right number of bytes.
            assert_eq!(t.as_bytes(), bytes);

            // Test that changes to the underlying byte slices are reflected in
            // the original object.
            t.as_mut_bytes()[0] ^= 0xFF;
            assert_eq!(t, post_mutation);
            t.as_mut_bytes()[0] ^= 0xFF;

            // `write_to` rejects slices that are too small or too large.
            assert!(t.write_to(&mut vec![0; N - 1][..]).is_err());
            assert!(t.write_to(&mut vec![0; N + 1][..]).is_err());
            // `write_to` works as expected.
            let mut bytes = [0; N];
            assert_eq!(t.write_to(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_prefix` rejects slices that are too small.
            assert!(t.write_to_prefix(&mut vec![0; N - 1][..]).is_err());

            // `write_to_prefix` works with exact-sized slices.
            let mut bytes = [0; N];
            assert_eq!(t.write_to_prefix(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_prefix` works with too-large slices, and any bytes past
            // the prefix aren't modified.
            let mut too_many_bytes = vec![0; N + 1];
            too_many_bytes[N] = 123;
            assert_eq!(t.write_to_prefix(&mut too_many_bytes[..]), Ok(()));
            assert_eq!(&too_many_bytes[..N], t.as_bytes());
            assert_eq!(too_many_bytes[N], 123);

            // `write_to_suffix` rejects slices that are too small.
            assert!(t.write_to_suffix(&mut vec![0; N - 1][..]).is_err());

            // `write_to_suffix` works with exact-sized slices.
            let mut bytes = [0; N];
            assert_eq!(t.write_to_suffix(&mut bytes[..]), Ok(()));
            assert_eq!(bytes, t.as_bytes());

            // `write_to_suffix` works with too-large slices, and any bytes
            // before the suffix aren't modified.
            let mut too_many_bytes = vec![0; N + 1];
            too_many_bytes[0] = 123;
            assert_eq!(t.write_to_suffix(&mut too_many_bytes[..]), Ok(()));
            assert_eq!(&too_many_bytes[1..], t.as_bytes());
            assert_eq!(too_many_bytes[0], 123);
        }
        #[derive(Debug, Eq, PartialEq, FromBytes, IntoBytes, Immutable)]
        #[repr(C)]
        struct Foo {
            a: u32,
            b: Wrapping<u32>,
            c: Option<NonZeroU32>,
        }

        let expected_bytes: Vec<u8> = if cfg!(target_endian = "little") {
            vec![1, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]
        } else {
            vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 0]
        };
        let post_mutation_expected_a =
            if cfg!(target_endian = "little") { 0x00_00_00_FE } else { 0xFF_00_00_01 };
        test::<_, 12>(
            &mut Foo { a: 1, b: Wrapping(2), c: None },
            expected_bytes.as_bytes(),
            &Foo { a: post_mutation_expected_a, b: Wrapping(2), c: None },
        );
        test::<_, 3>(
            Unsized::from_mut_slice(&mut [1, 2, 3]),
            &[1, 2, 3],
            Unsized::from_mut_slice(&mut [0xFE, 2, 3]),
        );
    }
    #[test]
    fn test_array() {
        #[derive(FromBytes, IntoBytes, Immutable)]
        #[repr(C)]
        struct Foo {
            a: [u16; 33],
        }

        let foo = Foo { a: [0xFFFF; 33] };
        let expected = [0xFFu8; 66];
        assert_eq!(foo.as_bytes(), &expected[..]);
    }
    #[test]
    fn test_new_zeroed() {
        assert!(!bool::new_zeroed());
        assert_eq!(u64::new_zeroed(), 0);
        // This test exists in order to exercise unsafe code, especially when
        // running under Miri.
        #[allow(clippy::unit_cmp)]
        {
            assert_eq!(<()>::new_zeroed(), ());
        }
    }
    #[test]
    fn test_transparent_packed_generic_struct() {
        #[derive(IntoBytes, FromBytes, Unaligned)]
        #[repr(transparent)]
        #[allow(dead_code)] // We never construct this type
        struct Foo<T> {
            _t: T,
            _phantom: PhantomData<()>,
        }

        assert_impl_all!(Foo<u32>: FromZeros, FromBytes, IntoBytes);
        assert_impl_all!(Foo<u8>: Unaligned);

        #[derive(IntoBytes, FromBytes, Unaligned)]
        #[repr(C, packed)]
        #[allow(dead_code)] // We never construct this type
        struct Bar<T, U> {
            _t: T,
            _u: U,
        }

        assert_impl_all!(Bar<u8, AU64>: FromZeros, FromBytes, IntoBytes, Unaligned);
    }
    #[cfg(feature = "alloc")]
    mod alloc {
        use super::*;

        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_extend_vec_zeroed() {
            // Test extending when there is an existing allocation.
            let mut v = vec![100u16, 200, 300];
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 6);
            assert_eq!(&*v, &[100, 200, 300, 0, 0, 0]);
            drop(v);

            // Test extending when there is no existing allocation.
            let mut v: Vec<u64> = Vec::new();
            FromZeros::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 3);
            assert_eq!(&*v, &[0, 0, 0]);
            drop(v);
        }
        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_extend_vec_zeroed_zst() {
            // Test extending when there is an existing (fake) allocation.
            let mut v = vec![(), (), ()];
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(v.len(), 6);
            assert_eq!(&*v, &[(), (), (), (), (), ()]);
            drop(v);

            // Test extending when there is no existing (fake) allocation.
            let mut v: Vec<()> = Vec::new();
            <()>::extend_vec_zeroed(&mut v, 3).unwrap();
            assert_eq!(&*v, &[(), (), ()]);
            drop(v);
        }
        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_insert_vec_zeroed() {
            // Insert at start (no existing allocation).
            let mut v: Vec<u64> = Vec::new();
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 2);
            assert_eq!(&*v, &[0, 0]);
            drop(v);

            // Insert at start.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 5);
            assert_eq!(&*v, &[0, 0, 100, 200, 300]);
            drop(v);

            // Insert at middle.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 1, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[100, 0, 200, 300]);
            drop(v);

            // Insert at end.
            let mut v = vec![100u64, 200, 300];
            u64::insert_vec_zeroed(&mut v, 3, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[100, 200, 300, 0]);
            drop(v);
        }
        #[cfg(zerocopy_panic_in_const_and_vec_try_reserve_1_57_0)]
        #[test]
        fn test_insert_vec_zeroed_zst() {
            // Insert at start (no existing fake allocation).
            let mut v: Vec<()> = Vec::new();
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 2);
            assert_eq!(&*v, &[(), ()]);
            drop(v);

            // Insert at start.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 0, 2).unwrap();
            assert_eq!(v.len(), 5);
            assert_eq!(&*v, &[(), (), (), (), ()]);
            drop(v);

            // Insert at middle.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 1, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[(), (), (), ()]);
            drop(v);

            // Insert at end.
            let mut v = vec![(), (), ()];
            <()>::insert_vec_zeroed(&mut v, 3, 1).unwrap();
            assert_eq!(v.len(), 4);
            assert_eq!(&*v, &[(), (), (), ()]);
            drop(v);
        }
        #[test]
        fn test_new_box_zeroed() {
            assert_eq!(u64::new_box_zeroed(), Ok(Box::new(0)));
        }

        #[test]
        fn test_new_box_zeroed_array() {
            drop(<[u32; 0x1000]>::new_box_zeroed());
        }

        #[test]
        fn test_new_box_zeroed_zst() {
            // This test exists in order to exercise unsafe code, especially
            // when running under Miri.
            #[allow(clippy::unit_cmp)]
            {
                assert_eq!(<()>::new_box_zeroed(), Ok(Box::new(())));
            }
        }
        #[test]
        fn test_new_box_zeroed_with_elems() {
            let mut s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(3).unwrap();
            assert_eq!(s.len(), 3);
            assert_eq!(&*s, &[0, 0, 0]);
            s[1] = 3;
            assert_eq!(&*s, &[0, 3, 0]);
        }

        #[test]
        fn test_new_box_zeroed_with_elems_empty() {
            let s: Box<[u64]> = <[u64]>::new_box_zeroed_with_elems(0).unwrap();
            assert_eq!(s.len(), 0);
        }

        #[test]
        fn test_new_box_zeroed_with_elems_zst() {
            let mut s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(3).unwrap();
            assert_eq!(s.len(), 3);
            assert!(s.get(10).is_none());
            // This test exists in order to exercise unsafe code, especially
            // when running under Miri.
            #[allow(clippy::unit_cmp)]
            {
                assert_eq!(s[1], ());
            }
            s[2] = ();
        }
        #[test]
        fn test_new_box_zeroed_with_elems_zst_empty() {
            let s: Box<[()]> = <[()]>::new_box_zeroed_with_elems(0).unwrap();
            assert_eq!(s.len(), 0);
        }

        #[test]
        fn new_box_zeroed_with_elems_errors() {
            assert_eq!(<[u16]>::new_box_zeroed_with_elems(usize::MAX), Err(AllocError));

            let max = <usize as core::convert::TryFrom<_>>::try_from(isize::MAX).unwrap();
            assert_eq!(
                <[u16]>::new_box_zeroed_with_elems((max / mem::size_of::<u16>()) + 1),
                Err(AllocError)
            );
        }
    }
}