wgpu_hal/lib.rs
1//! A cross-platform unsafe graphics abstraction.
2//!
3//! This crate defines a set of traits abstracting over modern graphics APIs,
4//! with implementations ("backends") for Vulkan, Metal, Direct3D, and GL.
5//!
6//! `wgpu-hal` is a spiritual successor to
7//! [gfx-hal](https://github.com/gfx-rs/gfx), but with reduced scope, and
8//! oriented towards WebGPU implementation goals. It has no overhead for
9//! validation or tracking, and the API translation overhead is kept to the bare
10//! minimum by the design of WebGPU. This API can be used for resource-demanding
11//! applications and engines.
12//!
13//! The `wgpu-hal` crate's main design choices:
14//!
15//! - Our traits are meant to be *portable*: proper use
16//! should get equivalent results regardless of the backend.
17//!
18//! - Our traits' contracts are *unsafe*: implementations perform minimal
19//! validation, if any, and incorrect use will often cause undefined behavior.
20//! This allows us to minimize the overhead we impose over the underlying
21//! graphics system. If you need safety, the [`wgpu-core`] crate provides a
22//! safe API for driving `wgpu-hal`, implementing all necessary validation,
23//! resource state tracking, and so on. (Note that `wgpu-core` is designed for
24//! use via FFI; the [`wgpu`] crate provides more idiomatic Rust bindings for
25//! `wgpu-core`.) Or, you can do your own validation.
26//!
27//! - In the same vein, returned errors *only cover cases the user can't
28//! anticipate*, like running out of memory or losing the device. Any errors
29//! that the user could reasonably anticipate are their responsibility to
30//! avoid. For example, `wgpu-hal` returns no error for mapping a buffer that's
31//! not mappable: as the buffer creator, the user should already know if they
32//! can map it.
33//!
34//! - We use *static dispatch*. The traits are not
35//! generally object-safe. You must select a specific backend type
36//! like [`vulkan::Api`] or [`metal::Api`], and then use that
37//! according to the main traits, or call backend-specific methods.
38//!
39//! - We use *idiomatic Rust parameter passing*,
40//! taking objects by reference, returning them by value, and so on,
41//! unlike `wgpu-core`, which refers to objects by ID.
42//!
43//! - We map buffer contents *persistently*. This means that the buffer can
44//! remain mapped on the CPU while the GPU reads or writes to it. You must
45//! explicitly indicate when data might need to be transferred between CPU and
46//! GPU, if [`Device::map_buffer`] indicates that this is necessary.
47//!
48//! - You must record *explicit barriers* between different usages of a
49//! resource. For example, if a buffer is written to by a compute
50//! shader, and then used as an index buffer for a draw call, you
51//! must use [`CommandEncoder::transition_buffers`] between those two
52//! operations (see the sketch after this list).
53//!
54//! - Pipeline layouts are *explicitly specified* when setting bind groups.
55//! Incompatible layouts disturb groups bound at higher indices.
56//!
57//! - The API *accepts collections as iterators*, to avoid forcing the user to
58//! store data in particular containers. The implementation doesn't guarantee
59//! that any of the iterators are drained, unless stated otherwise by the
60//! function documentation. For this reason, we recommend that iterators don't
61//! do any mutating work.
62//!
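//! As a concrete illustration of the explicit-barrier rule above, here is a
//! minimal sketch. It assumes the caller has already constructed a
//! [`BufferBarrier`] describing the transition from the compute shader's
//! usage of the buffer to its use as an index buffer; the dispatch and draw
//! calls themselves are elided.
//!
//! ```ignore
//! use wgpu_hal as hal;
//! use hal::CommandEncoder as _;
//!
//! unsafe fn compute_then_draw<A: hal::Api>(
//!     encoder: &mut A::CommandEncoder,
//!     barrier: hal::BufferBarrier<'_, A::Buffer>,
//! ) {
//!     unsafe {
//!         // ... dispatch the compute shader that writes the buffer ...
//!         encoder.transition_buffers(std::iter::once(barrier));
//!         // ... bind the buffer as the index buffer and record the draw ...
//!     }
//! }
//! ```
//!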
63//! Unfortunately, `wgpu-hal`'s safety requirements are not fully documented.
64//! Ideally, all trait methods would have doc comments setting out the
65//! requirements users must meet to ensure correct and portable behavior. If you
66//! are aware of a specific requirement that a backend imposes that is not
67//! ensured by the traits' documented rules, please file an issue. Or, if you are
68//! a capable technical writer, please file a pull request!
69//!
70//! [`wgpu-core`]: https://crates.io/crates/wgpu-core
71//! [`wgpu`]: https://crates.io/crates/wgpu
72//! [`vulkan::Api`]: vulkan/struct.Api.html
73//! [`metal::Api`]: metal/struct.Api.html
74//!
75//! ## Primary backends
76//!
77//! The `wgpu-hal` crate has full-featured backends implemented on the following
78//! platform graphics APIs:
79//!
80//! - Vulkan, available on Linux, Android, and Windows, using the [`ash`] crate's
81//! Vulkan bindings. It's also available on macOS, if you install [MoltenVK].
82//!
83//! - Metal on macOS, using the [`metal`] crate's bindings.
84//!
85//! - Direct3D 12 on Windows, using the [`d3d12`] crate's bindings.
86//!
87//! [`ash`]: https://crates.io/crates/ash
88//! [MoltenVK]: https://github.com/KhronosGroup/MoltenVK
89//! [`metal`]: https://crates.io/crates/metal
90//! [`d3d12`]: https://crates.io/crates/d3d12
91//!
92//! ## Secondary backends
93//!
94//! The `wgpu-hal` crate has a partial implementation based on the following
95//! platform graphics API:
96//!
97//! - The GL backend is available anywhere OpenGL, OpenGL ES, or WebGL are
98//! available. See the [`gles`] module documentation for details.
99//!
100//! [`gles`]: gles/index.html
101//!
102//! You can see what capabilities an adapter is missing by checking the
103//! [`DownlevelCapabilities`][tdc] in [`ExposedAdapter::capabilities`], available
104//! from [`Instance::enumerate_adapters`].
105//!
106//! The API is generally designed to fit the primary backends better than the
107//! secondary backends, so the latter may impose more overhead.
108//!
109//! [tdc]: wgt::DownlevelCapabilities
110//!
111//! ## Traits
112//!
113//! The `wgpu-hal` crate defines a handful of traits that together
114//! represent a cross-platform abstraction for modern GPU APIs.
115//!
116//! - The [`Api`] trait represents a `wgpu-hal` backend. It has no methods of its
117//! own, only a collection of associated types.
118//!
119//! - [`Api::Instance`] implements the [`Instance`] trait. [`Instance::init`]
120//! creates an instance value, which you can use to enumerate the adapters
121//! available on the system. For example, [`vulkan::Api::Instance::init`][Ii]
122//! returns an instance that can enumerate the Vulkan physical devices on your
123//! system.
124//!
125//! - [`Api::Adapter`] implements the [`Adapter`] trait, representing a
126//! particular device from a particular backend. For example, a Vulkan instance
127//! might have a Lavapipe software adapter and a GPU-based adapter.
128//!
129//! - [`Api::Device`] implements the [`Device`] trait, representing an active
130//! link to a device. You get a device value by calling [`Adapter::open`], and
131//! then use it to create buffers, textures, shader modules, and so on.
132//!
133//! - [`Api::Queue`] implements the [`Queue`] trait, which you use to submit
134//! command buffers to a given device.
135//!
136//! - [`Api::CommandEncoder`] implements the [`CommandEncoder`] trait, which you
137//! use to build buffers of commands to submit to a queue. This has all the
138//! methods for drawing and running compute shaders, which is presumably what
139//! you're here for.
140//!
141//! - [`Api::Surface`] implements the [`Surface`] trait, which represents a
142//! swapchain for presenting images on the screen, via interaction with the
143//! system's window manager.
144//!
145//! The [`Api`] trait has various other associated types like [`Api::Buffer`] and
146//! [`Api::Texture`] that represent resources the rest of the interface can
147//! operate on, but these generally do not have their own traits.
148//!
149//! [Ii]: Instance::init
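//!
//! The following sketch, generic over the backend, shows how these traits fit
//! together: create an instance, pick the first exposed adapter, and open a
//! device and queue. It is for illustration only; in particular, it assumes
//! that [`ExposedAdapter`] carries the adapter itself in an `adapter` field,
//! and error handling is reduced to `Option`.
//!
//! ```ignore
//! use wgpu_hal as hal;
//! use wgpu_types as wgt;
//! use hal::{Adapter as _, Instance as _};
//!
//! unsafe fn open_first<A: hal::Api>(
//!     desc: &hal::InstanceDescriptor,
//!     features: wgt::Features,
//!     limits: &wgt::Limits,
//!     memory_hints: &wgt::MemoryHints,
//! ) -> Option<hal::OpenDevice<A>> {
//!     // Initialize the backend's instance.
//!     let instance = unsafe { <A::Instance as hal::Instance>::init(desc) }.ok()?;
//!     // The surface hint is only needed by the GLES backend on WebGL2.
//!     let exposed = unsafe { instance.enumerate_adapters(None) }.into_iter().next()?;
//!     // Open a logical device and its queue on that adapter.
//!     // `ExposedAdapter` is assumed to carry the adapter in its `adapter` field.
//!     unsafe { exposed.adapter.open(features, limits, memory_hints) }.ok()
//! }
//! ```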
150//!
151//! ## Validation is the calling code's responsibility, not `wgpu-hal`'s
152//!
153//! As much as possible, `wgpu-hal` traits place the burden of validation,
154//! resource tracking, and state tracking on the caller, not on the trait
155//! implementations themselves. Anything which can reasonably be handled in
156//! backend-independent code should be. A `wgpu_hal` backend's sole obligation is
157//! to provide portable behavior, and report conditions that the calling code
158//! can't reasonably anticipate, like device loss or running out of memory.
159//!
160//! The `wgpu` crate collection is intended for use in security-sensitive
161//! applications, like web browsers, where the API is available to untrusted
162//! code. This means that `wgpu-core`'s validation is not simply a service to
163//! developers, to be provided opportunistically when the performance costs are
164//! acceptable and the necessary data is ready at hand. Rather, `wgpu-core`'s
165//! validation must be exhaustive, to ensure that even malicious content cannot
166//! provoke and exploit undefined behavior in the platform's graphics API.
167//!
168//! Because graphics APIs' requirements are complex, the only practical way for
169//! `wgpu` to provide exhaustive validation is to comprehensively track the
170//! lifetime and state of all the resources in the system. Implementing this
171//! separately for each backend is infeasible; effort would be better spent
172//! making the cross-platform validation in `wgpu-core` legible and trustworthy.
173//! Fortunately, the requirements are largely similar across the various
174//! platforms, so cross-platform validation is practical.
175//!
176//! Some backends have specific requirements that aren't practical to foist off
177//! on the `wgpu-hal` user. For example, properly managing macOS Objective-C or
178//! Microsoft COM reference counts is best handled by using appropriate pointer
179//! types within the backend.
180//!
181//! A desire for "defense in depth" may suggest performing additional validation
182//! in `wgpu-hal` when the opportunity arises, but this must be done with
183//! caution. Even experienced contributors infer the expectations their changes
184//! must meet by considering not just requirements made explicit in types, tests,
185//! assertions, and comments, but also those implicit in the surrounding code.
186//! When one sees validation or state-tracking code in `wgpu-hal`, it is tempting
187//! to conclude, "Oh, `wgpu-hal` checks for this, so `wgpu-core` needn't worry
188//! about it - that would be redundant!" The responsibility for exhaustive
189//! validation always rests with `wgpu-core`, regardless of what may or may not
190//! be checked in `wgpu-hal`.
191//!
192//! To this end, any "defense in depth" validation that does appear in `wgpu-hal`
193//! for requirements that `wgpu-core` should have enforced should report failure
194//! via the `unreachable!` macro, because problems detected at this stage always
195//! indicate a bug in `wgpu-core`.
196//!
197//! ## Debugging
198//!
199//! Most of the information on the wiki [Debugging wgpu Applications][wiki-debug]
200//! page still applies to this API, with the exception of API tracing/replay
201//! functionality, which is only available in `wgpu-core`.
202//!
203//! [wiki-debug]: https://github.com/gfx-rs/wgpu/wiki/Debugging-wgpu-Applications
204
205#![cfg_attr(docsrs, feature(doc_cfg, doc_auto_cfg))]
206#![allow(
207 // this happens on the GL backend, where it is both thread safe and non-thread safe in the same code.
208 clippy::arc_with_non_send_sync,
209 // We don't use syntax sugar where it's not necessary.
210 clippy::match_like_matches_macro,
211 // Redundant matching is more explicit.
212 clippy::redundant_pattern_matching,
213 // Explicit lifetimes are often easier to reason about.
214 clippy::needless_lifetimes,
215 // No need for defaults in the internal types.
216 clippy::new_without_default,
217 // Matches are good and extendable, no need to make an exception here.
218 clippy::single_match,
219 // Push commands are more regular than macros.
220 clippy::vec_init_then_push,
221 // We unsafe impl `Send` for a reason.
222 clippy::non_send_fields_in_send_ty,
223 // TODO!
224 clippy::missing_safety_doc,
225 // It gets in the way a lot and does not prevent bugs in practice.
226 clippy::pattern_type_mismatch,
227)]
228#![warn(
229 clippy::ptr_as_ptr,
230 trivial_casts,
231 trivial_numeric_casts,
232 unsafe_op_in_unsafe_fn,
233 unused_extern_crates,
234 unused_qualifications
235)]
236
237/// DirectX12 API internals.
238#[cfg(dx12)]
239pub mod dx12;
240/// A dummy API implementation.
241pub mod empty;
242/// GLES API internals.
243#[cfg(gles)]
244pub mod gles;
245/// Metal API internals.
246#[cfg(metal)]
247pub mod metal;
248/// Vulkan API internals.
249#[cfg(vulkan)]
250pub mod vulkan;
251
252pub mod auxil;
253pub mod api {
254 #[cfg(dx12)]
255 pub use super::dx12::Api as Dx12;
256 pub use super::empty::Api as Empty;
257 #[cfg(gles)]
258 pub use super::gles::Api as Gles;
259 #[cfg(metal)]
260 pub use super::metal::Api as Metal;
261 #[cfg(vulkan)]
262 pub use super::vulkan::Api as Vulkan;
263}
264
265mod dynamic;
266
267pub(crate) use dynamic::impl_dyn_resource;
268pub use dynamic::{
269 DynAccelerationStructure, DynAcquiredSurfaceTexture, DynAdapter, DynBindGroup,
270 DynBindGroupLayout, DynBuffer, DynCommandBuffer, DynCommandEncoder, DynComputePipeline,
271 DynDevice, DynExposedAdapter, DynFence, DynInstance, DynOpenDevice, DynPipelineCache,
272 DynPipelineLayout, DynQuerySet, DynQueue, DynRenderPipeline, DynResource, DynSampler,
273 DynShaderModule, DynSurface, DynSurfaceTexture, DynTexture, DynTextureView,
274};
275
276use std::{
277 borrow::{Borrow, Cow},
278 fmt,
279 num::NonZeroU32,
280 ops::{Range, RangeInclusive},
281 ptr::NonNull,
282 sync::Arc,
283};
284
285use bitflags::bitflags;
286use parking_lot::Mutex;
287use thiserror::Error;
288use wgt::WasmNotSendSync;
289
290// - Vertex + Fragment
291// - Compute
292pub const MAX_CONCURRENT_SHADER_STAGES: usize = 2;
293pub const MAX_ANISOTROPY: u8 = 16;
294pub const MAX_BIND_GROUPS: usize = 8;
295pub const MAX_VERTEX_BUFFERS: usize = 16;
296pub const MAX_COLOR_ATTACHMENTS: usize = 8;
297pub const MAX_MIP_LEVELS: u32 = 16;
298/// Size of a single occlusion/timestamp query, when copied into a buffer, in bytes.
299pub const QUERY_SIZE: wgt::BufferAddress = 8;
300
301pub type Label<'a> = Option<&'a str>;
302pub type MemoryRange = Range<wgt::BufferAddress>;
303pub type FenceValue = u64;
304pub type AtomicFenceValue = std::sync::atomic::AtomicU64;
305
306/// A callback to signal that wgpu is no longer using a resource.
307#[cfg(any(gles, vulkan))]
308pub type DropCallback = Box<dyn FnMut() + Send + Sync + 'static>;
309
310#[cfg(any(gles, vulkan))]
311pub struct DropGuard {
312 callback: DropCallback,
313}
314
315#[cfg(all(any(gles, vulkan), any(native, Emscripten)))]
316impl DropGuard {
317 fn from_option(callback: Option<DropCallback>) -> Option<Self> {
318 callback.map(|callback| Self { callback })
319 }
320}
321
322#[cfg(any(gles, vulkan))]
323impl Drop for DropGuard {
324 fn drop(&mut self) {
325 (self.callback)();
326 }
327}
328
329#[cfg(any(gles, vulkan))]
330impl fmt::Debug for DropGuard {
331 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
332 f.debug_struct("DropGuard").finish()
333 }
334}
335
336#[derive(Clone, Debug, PartialEq, Eq, Error)]
337pub enum DeviceError {
338 #[error("Out of memory")]
339 OutOfMemory,
340 #[error("Device is lost")]
341 Lost,
342 #[error("Creation of a resource failed for a reason other than running out of memory.")]
343 ResourceCreationFailed,
344 #[error("Unexpected error variant (driver implementation is at fault)")]
345 Unexpected,
346}
347
348#[allow(dead_code)] // may be unused on some platforms
349#[cold]
350fn hal_usage_error<T: fmt::Display>(txt: T) -> ! {
351 panic!("wgpu-hal invariant was violated (usage error): {txt}")
352}
353
354#[allow(dead_code)] // may be unused on some platforms
355#[cold]
356fn hal_internal_error<T: fmt::Display>(txt: T) -> ! {
357 panic!("wgpu-hal ran into a preventable internal error: {txt}")
358}
359
360#[derive(Clone, Debug, Eq, PartialEq, Error)]
361pub enum ShaderError {
362 #[error("Compilation failed: {0:?}")]
363 Compilation(String),
364 #[error(transparent)]
365 Device(#[from] DeviceError),
366}
367
368#[derive(Clone, Debug, Eq, PartialEq, Error)]
369pub enum PipelineError {
370 #[error("Linkage failed for stage {0:?}: {1}")]
371 Linkage(wgt::ShaderStages, String),
372 #[error("Entry point for stage {0:?} is invalid")]
373 EntryPoint(naga::ShaderStage),
374 #[error(transparent)]
375 Device(#[from] DeviceError),
376 #[error("Pipeline constant error for stage {0:?}: {1}")]
377 PipelineConstants(wgt::ShaderStages, String),
378}
379
380#[derive(Clone, Debug, Eq, PartialEq, Error)]
381pub enum PipelineCacheError {
382 #[error(transparent)]
383 Device(#[from] DeviceError),
384}
385
386#[derive(Clone, Debug, Eq, PartialEq, Error)]
387pub enum SurfaceError {
388 #[error("Surface is lost")]
389 Lost,
390 #[error("Surface is outdated, needs to be re-created")]
391 Outdated,
392 #[error(transparent)]
393 Device(#[from] DeviceError),
394 #[error("Other reason: {0}")]
395 Other(&'static str),
396}
397
398/// Error occurring while trying to create an instance, or create a surface from an instance;
399/// typically relating to the state of the underlying graphics API or hardware.
400#[derive(Clone, Debug, Error)]
401#[error("{message}")]
402pub struct InstanceError {
403 /// These errors are very platform specific, so do not attempt to encode them as an enum.
404 ///
405 /// This message should describe the problem in sufficient detail to be useful for a
406 /// user-to-developer “why won't this work on my machine” bug report, and otherwise follow
407 /// <https://rust-lang.github.io/api-guidelines/interoperability.html#error-types-are-meaningful-and-well-behaved-c-good-err>.
408 message: String,
409
410 /// Underlying error value, if any is available.
411 #[source]
412 source: Option<Arc<dyn std::error::Error + Send + Sync + 'static>>,
413}
414
415impl InstanceError {
416 #[allow(dead_code)] // may be unused on some platforms
417 pub(crate) fn new(message: String) -> Self {
418 Self {
419 message,
420 source: None,
421 }
422 }
423 #[allow(dead_code)] // may be unused on some platforms
424 pub(crate) fn with_source(
425 message: String,
426 source: impl std::error::Error + Send + Sync + 'static,
427 ) -> Self {
428 Self {
429 message,
430 source: Some(Arc::new(source)),
431 }
432 }
433}
434
435pub trait Api: Clone + fmt::Debug + Sized {
436 type Instance: DynInstance + Instance<A = Self>;
437 type Surface: DynSurface + Surface<A = Self>;
438 type Adapter: DynAdapter + Adapter<A = Self>;
439 type Device: DynDevice + Device<A = Self>;
440
441 type Queue: DynQueue + Queue<A = Self>;
442 type CommandEncoder: DynCommandEncoder + CommandEncoder<A = Self>;
443
444 /// This API's command buffer type.
445 ///
446 /// The only thing you can do with `CommandBuffer`s is build them
447 /// with a [`CommandEncoder`] and then pass them to
448 /// [`Queue::submit`] for execution, or destroy them by passing
449 /// them to [`CommandEncoder::reset_all`].
450 ///
451 /// [`CommandEncoder`]: Api::CommandEncoder
452 type CommandBuffer: DynCommandBuffer;
453
454 type Buffer: DynBuffer;
455 type Texture: DynTexture;
456 type SurfaceTexture: DynSurfaceTexture + Borrow<Self::Texture>;
457 type TextureView: DynTextureView;
458 type Sampler: DynSampler;
459 type QuerySet: DynQuerySet;
460
461 /// A value you can block on to wait for something to finish.
462 ///
463 /// A `Fence` holds a monotonically increasing [`FenceValue`]. You can call
464 /// [`Device::wait`] to block until a fence reaches or passes a value you
465 /// choose. [`Queue::submit`] can take a `Fence` and a [`FenceValue`] to
466 /// store in it when the submitted work is complete.
467 ///
468 /// Attempting to set a fence to a value less than its current value has no
469 /// effect.
470 ///
471 /// Waiting on a fence returns as soon as the fence reaches *or passes* the
472 /// requested value. This implies that, in order to reliably determine when
473 /// an operation has completed, operations must finish in order of
474 /// increasing fence values: if a higher-valued operation were to finish
475 /// before a lower-valued operation, then waiting for the fence to reach the
476 /// lower value could return before the lower-valued operation has actually
477 /// finished.
478 type Fence: DynFence;
479
480 type BindGroupLayout: DynBindGroupLayout;
481 type BindGroup: DynBindGroup;
482 type PipelineLayout: DynPipelineLayout;
483 type ShaderModule: DynShaderModule;
484 type RenderPipeline: DynRenderPipeline;
485 type ComputePipeline: DynComputePipeline;
486 type PipelineCache: DynPipelineCache;
487
488 type AccelerationStructure: DynAccelerationStructure + 'static;
489}
490
491pub trait Instance: Sized + WasmNotSendSync {
492 type A: Api;
493
494 unsafe fn init(desc: &InstanceDescriptor) -> Result<Self, InstanceError>;
495 unsafe fn create_surface(
496 &self,
497 display_handle: raw_window_handle::RawDisplayHandle,
498 window_handle: raw_window_handle::RawWindowHandle,
499 ) -> Result<<Self::A as Api>::Surface, InstanceError>;
500 /// `surface_hint` is only used by the GLES backend targeting WebGL2
501 unsafe fn enumerate_adapters(
502 &self,
503 surface_hint: Option<&<Self::A as Api>::Surface>,
504 ) -> Vec<ExposedAdapter<Self::A>>;
505}
506
507pub trait Surface: WasmNotSendSync {
508 type A: Api;
509
510 /// Configure `self` to use `device`.
511 ///
512 /// # Safety
513 ///
514 /// - All GPU work using `self` must have been completed.
515 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
516 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
517 /// - The surface `self` must not currently be configured to use any other [`Device`].
518 unsafe fn configure(
519 &self,
520 device: &<Self::A as Api>::Device,
521 config: &SurfaceConfiguration,
522 ) -> Result<(), SurfaceError>;
523
524 /// Unconfigure `self` on `device`.
525 ///
526 /// # Safety
527 ///
528 /// - All GPU work that uses `self` must have been completed.
529 /// - All [`AcquiredSurfaceTexture`]s must have been destroyed.
530 /// - All [`Api::TextureView`]s derived from the [`AcquiredSurfaceTexture`]s must have been destroyed.
531 /// - The surface `self` must have been configured on `device`.
532 unsafe fn unconfigure(&self, device: &<Self::A as Api>::Device);
533
534 /// Return the next texture to be presented by `self`, for the caller to draw on.
535 ///
536 /// On success, return an [`AcquiredSurfaceTexture`] representing the
537 /// texture into which the caller should draw the image to be displayed on
538 /// `self`.
539 ///
540 /// If `timeout` elapses before `self` has a texture ready to be acquired,
541 /// return `Ok(None)`. If `timeout` is `None`, wait indefinitely, with no
542 /// timeout.
543 ///
544 /// # Using an [`AcquiredSurfaceTexture`]
545 ///
546 /// On success, this function returns an [`AcquiredSurfaceTexture`] whose
547 /// [`texture`] field is a [`SurfaceTexture`] from which the caller can
548 /// [`borrow`] a [`Texture`] to draw on. The [`AcquiredSurfaceTexture`] also
549 /// carries some metadata about that [`SurfaceTexture`].
550 ///
551 /// All calls to [`Queue::submit`] that draw on that [`Texture`] must also
552 /// include the [`SurfaceTexture`] in the `surface_textures` argument.
553 ///
554 /// When you are done drawing on the texture, you can display it on `self`
555 /// by passing the [`SurfaceTexture`] and `self` to [`Queue::present`].
556 ///
557 /// If you do not wish to display the texture, you must pass the
558 /// [`SurfaceTexture`] to [`self.discard_texture`], so that it can be reused
559 /// by future acquisitions.
560 ///
561 /// # Portability
562 ///
563 /// Some backends can't support a timeout when acquiring a texture. On these
564 /// backends, `timeout` is ignored.
565 ///
566 /// # Safety
567 ///
568 /// - The surface `self` must currently be configured on some [`Device`].
569 ///
570 /// - The `fence` argument must be the same [`Fence`] passed to all calls to
571 /// [`Queue::submit`] that used [`Texture`]s acquired from this surface.
572 ///
573 /// - You may only have one texture acquired from `self` at a time. When
574 /// `acquire_texture` returns `Ok(Some(ast))`, you must pass the returned
575 /// [`SurfaceTexture`] `ast.texture` to either [`Queue::present`] or
576 /// [`Surface::discard_texture`] before calling `acquire_texture` again.
577 ///
578 /// [`texture`]: AcquiredSurfaceTexture::texture
579 /// [`SurfaceTexture`]: Api::SurfaceTexture
580 /// [`borrow`]: std::borrow::Borrow::borrow
581 /// [`Texture`]: Api::Texture
582 /// [`Fence`]: Api::Fence
583 /// [`self.discard_texture`]: Surface::discard_texture
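    ///
    /// # Example
    ///
    /// A minimal sketch of one frame, for illustration only: acquire with a
    /// 100 ms timeout, draw (elided), then present. The command submissions
    /// drawing to the acquired texture are assumed to use the same `fence`
    /// and to list `ast.texture` in their `surface_textures` argument.
    ///
    /// ```ignore
    /// use wgpu_hal::{Queue as _, Surface as _};
    ///
    /// unsafe fn frame<A: wgpu_hal::Api>(
    ///     surface: &A::Surface,
    ///     queue: &A::Queue,
    ///     fence: &A::Fence,
    /// ) -> Result<(), wgpu_hal::SurfaceError> {
    ///     let timeout = Some(std::time::Duration::from_millis(100));
    ///     let Some(ast) = (unsafe { surface.acquire_texture(timeout, fence) })? else {
    ///         return Ok(()); // timed out; try again next frame
    ///     };
    ///     // ... record and submit command buffers drawing to `ast.texture`,
    ///     // passing `&ast.texture` in `surface_textures` ...
    ///     unsafe { queue.present(surface, ast.texture) }
    /// }
    /// ```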
584 unsafe fn acquire_texture(
585 &self,
586 timeout: Option<std::time::Duration>,
587 fence: &<Self::A as Api>::Fence,
588 ) -> Result<Option<AcquiredSurfaceTexture<Self::A>>, SurfaceError>;
589
590 /// Relinquish an acquired texture without presenting it.
591 ///
592 /// After this call, the texture underlying [`SurfaceTexture`] may be
593 /// returned by subsequent calls to [`self.acquire_texture`].
594 ///
595 /// # Safety
596 ///
597 /// - The surface `self` must currently be configured on some [`Device`].
598 ///
599 /// - `texture` must be a [`SurfaceTexture`] returned by a call to
600 /// [`self.acquire_texture`] that has not yet been passed to
601 /// [`Queue::present`].
602 ///
603 /// [`SurfaceTexture`]: Api::SurfaceTexture
604 /// [`self.acquire_texture`]: Surface::acquire_texture
605 unsafe fn discard_texture(&self, texture: <Self::A as Api>::SurfaceTexture);
606}
607
608pub trait Adapter: WasmNotSendSync {
609 type A: Api;
610
611 unsafe fn open(
612 &self,
613 features: wgt::Features,
614 limits: &wgt::Limits,
615 memory_hints: &wgt::MemoryHints,
616 ) -> Result<OpenDevice<Self::A>, DeviceError>;
617
618 /// Return the set of supported capabilities for a texture format.
619 unsafe fn texture_format_capabilities(
620 &self,
621 format: wgt::TextureFormat,
622 ) -> TextureFormatCapabilities;
623
624 /// Returns the capabilities of working with a specified surface.
625 ///
626 /// `None` means presentation is not supported for it.
627 unsafe fn surface_capabilities(
628 &self,
629 surface: &<Self::A as Api>::Surface,
630 ) -> Option<SurfaceCapabilities>;
631
632 /// Creates a [`PresentationTimestamp`] using the adapter's WSI.
633 ///
634 /// [`PresentationTimestamp`]: wgt::PresentationTimestamp
635 unsafe fn get_presentation_timestamp(&self) -> wgt::PresentationTimestamp;
636}
637
638/// A connection to a GPU and a pool of resources to use with it.
639///
640/// A `wgpu-hal` `Device` represents an open connection to a specific graphics
641/// processor, controlled via the backend [`Device::A`]. A `Device` is mostly
642/// used for creating resources. Each `Device` has an associated [`Queue`] used
643/// for command submission.
644///
645/// On Vulkan a `Device` corresponds to a logical device ([`VkDevice`]). Other
646/// backends don't have an exact analog: for example, [`ID3D12Device`]s and
647/// [`MTLDevice`]s are owned by the backends' [`wgpu_hal::Adapter`]
648/// implementations, and shared by all [`wgpu_hal::Device`]s created from that
649/// `Adapter`.
650///
651/// A `Device`'s life cycle is generally:
652///
653/// 1) Obtain a `Device` and its associated [`Queue`] by calling
654/// [`Adapter::open`].
655///
656/// Alternatively, the backend-specific types that implement [`Adapter`] often
657/// have methods for creating a `wgpu-hal` `Device` from a platform-specific
658/// handle. For example, [`vulkan::Adapter::device_from_raw`] can create a
659/// [`vulkan::Device`] from an [`ash::Device`].
660///
661/// 1) Create resources to use on the device by calling methods like
662/// [`Device::create_texture`] or [`Device::create_shader_module`].
663///
664/// 1) Call [`Device::create_command_encoder`] to obtain a [`CommandEncoder`],
665/// which you can use to build [`CommandBuffer`]s holding commands to be
666/// executed on the GPU.
667///
668/// 1) Call [`Queue::submit`] on the `Device`'s associated [`Queue`] to submit
669/// [`CommandBuffer`]s for execution on the GPU. If needed, call
670/// [`Device::wait`] to wait for them to finish execution.
671///
672/// 1) Free resources with methods like [`Device::destroy_texture`] or
673/// [`Device::destroy_shader_module`].
674///
675/// 1) Shut down the device by calling [`Device::exit`].
676///
677/// [`VkDevice`]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VkDevice
678/// [`ID3D12Device`]: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nn-d3d12-id3d12device
679/// [`MTLDevice`]: https://developer.apple.com/documentation/metal/mtldevice
680/// [`wgpu_hal::Adapter`]: Adapter
681/// [`wgpu_hal::Device`]: Device
682/// [`vulkan::Adapter::device_from_raw`]: vulkan/struct.Adapter.html#method.device_from_raw
683/// [`vulkan::Device`]: vulkan/struct.Device.html
684/// [`ash::Device`]: https://docs.rs/ash/latest/ash/struct.Device.html
685/// [`CommandBuffer`]: Api::CommandBuffer
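///
/// # Example
///
/// A condensed sketch of the life cycle above, for illustration only.
/// Resource creation and the recorded commands are elided; `OpenDevice` is
/// assumed to expose `device` and `queue` fields, and `desc` is a prepared
/// [`CommandEncoderDescriptor`].
///
/// ```ignore
/// use wgpu_hal as hal;
/// use hal::{CommandEncoder as _, Device as _, Queue as _};
///
/// unsafe fn device_life_cycle<A: hal::Api>(
///     od: hal::OpenDevice<A>,
///     desc: &hal::CommandEncoderDescriptor<A::Queue>,
/// ) -> Result<(), hal::DeviceError> {
///     // Field names of `OpenDevice` are assumed to be `device` and `queue`.
///     let hal::OpenDevice { device, queue } = od;
///     unsafe {
///         // 2) Create resources (elided) and a fence for synchronization.
///         let mut fence = device.create_fence()?;
///         // 3) Build a command buffer.
///         let mut encoder = device.create_command_encoder(desc)?;
///         encoder.begin_encoding(Some("init"))?;
///         // ... record commands ...
///         let cmd_buf = encoder.end_encoding()?;
///         // 4) Submit it and wait for it to finish.
///         queue.submit(&[&cmd_buf], &[], (&mut fence, 1))?;
///         device.wait(&fence, 1, !0)?;
///         // 5) Free resources.
///         encoder.reset_all(std::iter::once(cmd_buf));
///         device.destroy_command_encoder(encoder);
///         device.destroy_fence(fence);
///         // 6) Shut down the device.
///         device.exit(queue);
///     }
///     Ok(())
/// }
/// ```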
686///
687/// # Safety
688///
689/// As with other `wgpu-hal` APIs, [validation] is the caller's
690/// responsibility. Here are the general requirements for all `Device`
691/// methods:
692///
693/// - Any resource passed to a `Device` method must have been created by that
694/// `Device`. For example, a [`Texture`] passed to [`Device::destroy_texture`] must
695/// have been created with the `Device` passed as `self`.
696///
697/// - Resources may not be destroyed if they are used by any submitted command
698/// buffers that have not yet finished execution.
699///
700/// [validation]: index.html#validation-is-the-calling-codes-responsibility-not-wgpu-hals
701/// [`Texture`]: Api::Texture
702pub trait Device: WasmNotSendSync {
703 type A: Api;
704
705 /// Exit connection to this logical device.
706 unsafe fn exit(self, queue: <Self::A as Api>::Queue);
707 /// Creates a new buffer.
708 ///
709 /// The initial usage is `BufferUses::empty()`.
710 unsafe fn create_buffer(
711 &self,
712 desc: &BufferDescriptor,
713 ) -> Result<<Self::A as Api>::Buffer, DeviceError>;
714
715 /// Free `buffer` and any GPU resources it owns.
716 ///
717 /// Note that backends are allowed to allocate GPU memory for buffers from
718 /// allocation pools, and this call is permitted to simply return `buffer`'s
719 /// storage to that pool, without making it available to other applications.
720 ///
721 /// # Safety
722 ///
723 /// - The given `buffer` must not currently be mapped.
724 unsafe fn destroy_buffer(&self, buffer: <Self::A as Api>::Buffer);
725
726 /// A hook for when a wgpu-core buffer is created from a raw wgpu-hal buffer.
727 unsafe fn add_raw_buffer(&self, buffer: &<Self::A as Api>::Buffer);
728
729 /// Return a pointer to CPU memory mapping the contents of `buffer`.
730 ///
731 /// Buffer mappings are persistent: the buffer may remain mapped on the CPU
732 /// while the GPU reads or writes to it. (Note that `wgpu_core` does not use
733 /// this feature: when a `wgpu_core::Buffer` is unmapped, the underlying
734 /// `wgpu_hal` buffer is also unmapped.)
735 ///
736 /// If this function returns `Ok(mapping)`, then:
737 ///
738 /// - `mapping.ptr` is the CPU address of the start of the mapped memory.
739 ///
740 /// - If `mapping.is_coherent` is `true`, then CPU writes to the mapped
741 /// memory are immediately visible on the GPU, and vice versa.
742 ///
743 /// # Safety
744 ///
745 /// - The given `buffer` must have been created with the [`MAP_READ`] or
746 /// [`MAP_WRITE`] flags set in [`BufferDescriptor::usage`].
747 ///
748 /// - The given `range` must fall within the size of `buffer`.
749 ///
750 /// - The caller must avoid data races between the CPU and the GPU. A data
751 /// race is any pair of accesses to a particular byte, one of which is a
752 /// write, that are not ordered with respect to each other by some sort of
753 /// synchronization operation.
754 ///
755 /// - If this function returns `Ok(mapping)` and `mapping.is_coherent` is
756 /// `false`, then:
757 ///
758 /// - Every CPU write to a mapped byte followed by a GPU read of that byte
759 /// must have at least one call to [`Device::flush_mapped_ranges`]
760 /// covering that byte that occurs between those two accesses.
761 ///
762 /// - Every GPU write to a mapped byte followed by a CPU read of that byte
763 /// must have at least one call to [`Device::invalidate_mapped_ranges`]
764 /// covering that byte that occurs between those two accesses.
765 ///
766 /// Note that the data race rule above requires that all such access pairs
767 /// be ordered, so it is meaningful to talk about what must occur
768 /// "between" them.
769 ///
770 /// - Zero-sized mappings are not allowed.
771 ///
772 /// - The returned [`BufferMapping::ptr`] must not be used after a call to
773 /// [`Device::unmap_buffer`].
774 ///
775 /// [`MAP_READ`]: BufferUses::MAP_READ
776 /// [`MAP_WRITE`]: BufferUses::MAP_WRITE
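    ///
    /// # Example
    ///
    /// A minimal sketch, for illustration only: copy `data` into the start of
    /// a buffer created with [`MAP_WRITE`], flushing only when the mapping is
    /// not coherent. [`BufferMapping`] is assumed to expose the `ptr` and
    /// `is_coherent` fields described above.
    ///
    /// ```ignore
    /// use wgpu_hal::Device as _;
    ///
    /// unsafe fn upload<A: wgpu_hal::Api>(
    ///     device: &A::Device,
    ///     buffer: &A::Buffer,
    ///     data: &[u8],
    /// ) -> Result<(), wgpu_hal::DeviceError> {
    ///     let range = 0..data.len() as u64;
    ///     unsafe {
    ///         let mapping = device.map_buffer(buffer, range.clone())?;
    ///         std::ptr::copy_nonoverlapping(data.as_ptr(), mapping.ptr.as_ptr(), data.len());
    ///         if !mapping.is_coherent {
    ///             // Make the CPU writes visible to the GPU.
    ///             device.flush_mapped_ranges(buffer, std::iter::once(range));
    ///         }
    ///         device.unmap_buffer(buffer);
    ///     }
    ///     Ok(())
    /// }
    /// ```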
777 unsafe fn map_buffer(
778 &self,
779 buffer: &<Self::A as Api>::Buffer,
780 range: MemoryRange,
781 ) -> Result<BufferMapping, DeviceError>;
782
783 /// Remove the mapping established by the last call to [`Device::map_buffer`].
784 ///
785 /// # Safety
786 ///
787 /// - The given `buffer` must be currently mapped.
788 unsafe fn unmap_buffer(&self, buffer: &<Self::A as Api>::Buffer);
789
790 /// Indicate that CPU writes to mapped buffer memory should be made visible to the GPU.
791 ///
792 /// # Safety
793 ///
794 /// - The given `buffer` must be currently mapped.
795 ///
796 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
797 unsafe fn flush_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
798 where
799 I: Iterator<Item = MemoryRange>;
800
801 /// Indicate that GPU writes to mapped buffer memory should be made visible to the CPU.
802 ///
803 /// # Safety
804 ///
805 /// - The given `buffer` must be currently mapped.
806 ///
807 /// - All ranges produced by `ranges` must fall within `buffer`'s size.
808 unsafe fn invalidate_mapped_ranges<I>(&self, buffer: &<Self::A as Api>::Buffer, ranges: I)
809 where
810 I: Iterator<Item = MemoryRange>;
811
812 /// Creates a new texture.
813 ///
814 /// The initial usage for all subresources is `TextureUses::UNINITIALIZED`.
815 unsafe fn create_texture(
816 &self,
817 desc: &TextureDescriptor,
818 ) -> Result<<Self::A as Api>::Texture, DeviceError>;
819 unsafe fn destroy_texture(&self, texture: <Self::A as Api>::Texture);
820
821 /// A hook for when a wgpu-core texture is created from a raw wgpu-hal texture.
822 unsafe fn add_raw_texture(&self, texture: &<Self::A as Api>::Texture);
823
824 unsafe fn create_texture_view(
825 &self,
826 texture: &<Self::A as Api>::Texture,
827 desc: &TextureViewDescriptor,
828 ) -> Result<<Self::A as Api>::TextureView, DeviceError>;
829 unsafe fn destroy_texture_view(&self, view: <Self::A as Api>::TextureView);
830 unsafe fn create_sampler(
831 &self,
832 desc: &SamplerDescriptor,
833 ) -> Result<<Self::A as Api>::Sampler, DeviceError>;
834 unsafe fn destroy_sampler(&self, sampler: <Self::A as Api>::Sampler);
835
836 /// Create a fresh [`CommandEncoder`].
837 ///
838 /// The new `CommandEncoder` is in the "closed" state.
839 unsafe fn create_command_encoder(
840 &self,
841 desc: &CommandEncoderDescriptor<<Self::A as Api>::Queue>,
842 ) -> Result<<Self::A as Api>::CommandEncoder, DeviceError>;
843 unsafe fn destroy_command_encoder(&self, pool: <Self::A as Api>::CommandEncoder);
844
845 /// Creates a bind group layout.
846 unsafe fn create_bind_group_layout(
847 &self,
848 desc: &BindGroupLayoutDescriptor,
849 ) -> Result<<Self::A as Api>::BindGroupLayout, DeviceError>;
850 unsafe fn destroy_bind_group_layout(&self, bg_layout: <Self::A as Api>::BindGroupLayout);
851 unsafe fn create_pipeline_layout(
852 &self,
853 desc: &PipelineLayoutDescriptor<<Self::A as Api>::BindGroupLayout>,
854 ) -> Result<<Self::A as Api>::PipelineLayout, DeviceError>;
855 unsafe fn destroy_pipeline_layout(&self, pipeline_layout: <Self::A as Api>::PipelineLayout);
856
857 #[allow(clippy::type_complexity)]
858 unsafe fn create_bind_group(
859 &self,
860 desc: &BindGroupDescriptor<
861 <Self::A as Api>::BindGroupLayout,
862 <Self::A as Api>::Buffer,
863 <Self::A as Api>::Sampler,
864 <Self::A as Api>::TextureView,
865 <Self::A as Api>::AccelerationStructure,
866 >,
867 ) -> Result<<Self::A as Api>::BindGroup, DeviceError>;
868 unsafe fn destroy_bind_group(&self, group: <Self::A as Api>::BindGroup);
869
870 unsafe fn create_shader_module(
871 &self,
872 desc: &ShaderModuleDescriptor,
873 shader: ShaderInput,
874 ) -> Result<<Self::A as Api>::ShaderModule, ShaderError>;
875 unsafe fn destroy_shader_module(&self, module: <Self::A as Api>::ShaderModule);
876
877 #[allow(clippy::type_complexity)]
878 unsafe fn create_render_pipeline(
879 &self,
880 desc: &RenderPipelineDescriptor<
881 <Self::A as Api>::PipelineLayout,
882 <Self::A as Api>::ShaderModule,
883 <Self::A as Api>::PipelineCache,
884 >,
885 ) -> Result<<Self::A as Api>::RenderPipeline, PipelineError>;
886 unsafe fn destroy_render_pipeline(&self, pipeline: <Self::A as Api>::RenderPipeline);
887
888 #[allow(clippy::type_complexity)]
889 unsafe fn create_compute_pipeline(
890 &self,
891 desc: &ComputePipelineDescriptor<
892 <Self::A as Api>::PipelineLayout,
893 <Self::A as Api>::ShaderModule,
894 <Self::A as Api>::PipelineCache,
895 >,
896 ) -> Result<<Self::A as Api>::ComputePipeline, PipelineError>;
897 unsafe fn destroy_compute_pipeline(&self, pipeline: <Self::A as Api>::ComputePipeline);
898
899 unsafe fn create_pipeline_cache(
900 &self,
901 desc: &PipelineCacheDescriptor<'_>,
902 ) -> Result<<Self::A as Api>::PipelineCache, PipelineCacheError>;
903 fn pipeline_cache_validation_key(&self) -> Option<[u8; 16]> {
904 None
905 }
906 unsafe fn destroy_pipeline_cache(&self, cache: <Self::A as Api>::PipelineCache);
907
908 unsafe fn create_query_set(
909 &self,
910 desc: &wgt::QuerySetDescriptor<Label>,
911 ) -> Result<<Self::A as Api>::QuerySet, DeviceError>;
912 unsafe fn destroy_query_set(&self, set: <Self::A as Api>::QuerySet);
913 unsafe fn create_fence(&self) -> Result<<Self::A as Api>::Fence, DeviceError>;
914 unsafe fn destroy_fence(&self, fence: <Self::A as Api>::Fence);
915 unsafe fn get_fence_value(
916 &self,
917 fence: &<Self::A as Api>::Fence,
918 ) -> Result<FenceValue, DeviceError>;
919
920 /// Wait for `fence` to reach `value`.
921 ///
922 /// Operations like [`Queue::submit`] can accept a [`Fence`] and a
923 /// [`FenceValue`] to store in it, so you can use this `wait` function
924 /// to wait for a given queue submission to finish execution.
925 ///
926 /// The `value` argument must be a value that some actual operation you have
927 /// already presented to the device is going to store in `fence`. You cannot
928 /// wait for values yet to be submitted. (This restriction accommodates
929 /// implementations like the `vulkan` backend's [`FencePool`] that must
930 /// allocate a distinct synchronization object for each fence value one is
931 /// able to wait for.)
932 ///
933 /// Calling `wait` with a lower [`FenceValue`] than `fence`'s current value
934 /// returns immediately.
935 ///
936 /// [`Fence`]: Api::Fence
937 /// [`FencePool`]: vulkan/enum.Fence.html#variant.FencePool
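    ///
    /// # Example
    ///
    /// A minimal sketch, for illustration only: return once `fence` has
    /// reached `value`, first checking whether it already has. The boolean
    /// returned by `wait` is assumed to be `true` when the value was reached
    /// before the timeout expired.
    ///
    /// ```ignore
    /// use wgpu_hal::Device as _;
    ///
    /// unsafe fn block_on<A: wgpu_hal::Api>(
    ///     device: &A::Device,
    ///     fence: &A::Fence,
    ///     value: wgpu_hal::FenceValue,
    /// ) -> Result<(), wgpu_hal::DeviceError> {
    ///     unsafe {
    ///         if device.get_fence_value(fence)? >= value {
    ///             return Ok(()); // already signalled; no need to block
    ///         }
    ///         // Wait in one-second slices until the value is reached
    ///         // (the returned flag is assumed to mean "value reached").
    ///         while !device.wait(fence, value, 1000)? {}
    ///     }
    ///     Ok(())
    /// }
    /// ```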
938 unsafe fn wait(
939 &self,
940 fence: &<Self::A as Api>::Fence,
941 value: FenceValue,
942 timeout_ms: u32,
943 ) -> Result<bool, DeviceError>;
944
945 unsafe fn start_capture(&self) -> bool;
946 unsafe fn stop_capture(&self);
947
948 #[allow(unused_variables)]
949 unsafe fn pipeline_cache_get_data(
950 &self,
951 cache: &<Self::A as Api>::PipelineCache,
952 ) -> Option<Vec<u8>> {
953 None
954 }
955
956 unsafe fn create_acceleration_structure(
957 &self,
958 desc: &AccelerationStructureDescriptor,
959 ) -> Result<<Self::A as Api>::AccelerationStructure, DeviceError>;
960 unsafe fn get_acceleration_structure_build_sizes(
961 &self,
962 desc: &GetAccelerationStructureBuildSizesDescriptor<<Self::A as Api>::Buffer>,
963 ) -> AccelerationStructureBuildSizes;
964 unsafe fn get_acceleration_structure_device_address(
965 &self,
966 acceleration_structure: &<Self::A as Api>::AccelerationStructure,
967 ) -> wgt::BufferAddress;
968 unsafe fn destroy_acceleration_structure(
969 &self,
970 acceleration_structure: <Self::A as Api>::AccelerationStructure,
971 );
972
973 fn get_internal_counters(&self) -> wgt::HalCounters;
974
975 fn generate_allocator_report(&self) -> Option<wgt::AllocatorReport> {
976 None
977 }
978}
979
980pub trait Queue: WasmNotSendSync {
981 type A: Api;
982
983 /// Submit `command_buffers` for execution on the GPU.
984 ///
985 /// Update `fence` to `value` when the operation is complete. See
986 /// [`Fence`] for details.
987 ///
988 /// A `wgpu_hal` queue is "single threaded": all command buffers are
989 /// executed in the order they're submitted, with each buffer able to see
990 /// previous buffers' results. Specifically:
991 ///
992 /// - If two calls to `submit` on a single `Queue` occur in a particular
993 /// order (that is, they happen on the same thread, or on two threads that
994 /// have synchronized to establish an ordering), then the first
995 /// submission's commands all complete execution before any of the second
996 /// submission's commands begin. All results produced by one submission
997 /// are visible to the next.
998 ///
999 /// - Within a submission, command buffers execute in the order in which they
1000 /// appear in `command_buffers`. All results produced by one buffer are
1001 /// visible to the next.
1002 ///
1003 /// If two calls to `submit` on a single `Queue` from different threads are
1004 /// not synchronized to occur in a particular order, they must pass distinct
1005 /// [`Fence`]s. As explained in the [`Fence`] documentation, waiting for
1006 /// operations to complete is only trustworthy when operations finish in
1007 /// order of increasing fence value, but submissions from different threads
1008 /// cannot determine how to order the fence values if the submissions
1009 /// themselves are unordered. If each thread uses a separate [`Fence`], this
1010 /// problem does not arise.
1011 ///
1012 /// # Safety
1013 ///
1014 /// - Each [`CommandBuffer`][cb] in `command_buffers` must have been created
1015 /// from a [`CommandEncoder`][ce] that was constructed from the
1016 /// [`Device`][d] associated with this [`Queue`].
1017 ///
1018 /// - Each [`CommandBuffer`][cb] must remain alive until the submitted
1019 /// commands have finished execution. Since command buffers must not
1020 /// outlive their encoders, this implies that the encoders must remain
1021 /// alive as well.
1022 ///
1023 /// - All resources used by a submitted [`CommandBuffer`][cb]
1024 /// ([`Texture`][t]s, [`BindGroup`][bg]s, [`RenderPipeline`][rp]s, and so
1025 /// on) must remain alive until the command buffer finishes execution.
1026 ///
1027 /// - Every [`SurfaceTexture`][st] that any command in `command_buffers`
1028 /// writes to must appear in the `surface_textures` argument.
1029 ///
1030 /// - No [`SurfaceTexture`][st] may appear in the `surface_textures`
1031 /// argument more than once.
1032 ///
1033 /// - Each [`SurfaceTexture`][st] in `surface_textures` must be configured
1034 /// for use with the [`Device`][d] associated with this [`Queue`],
1035 /// typically by calling [`Surface::configure`].
1036 ///
1037 /// - All calls to this function that include a given [`SurfaceTexture`][st]
1038 /// in `surface_textures` must use the same [`Fence`].
1039 ///
1040 /// - The [`Fence`] passed as `signal_fence.0` must remain alive until
1041 /// all submissions that will signal it have completed.
1042 ///
1043 /// [`Fence`]: Api::Fence
1044 /// [cb]: Api::CommandBuffer
1045 /// [ce]: Api::CommandEncoder
1046 /// [d]: Api::Device
1047 /// [t]: Api::Texture
1048 /// [bg]: Api::BindGroup
1049 /// [rp]: Api::RenderPipeline
1050 /// [st]: Api::SurfaceTexture
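    ///
    /// # Example
    ///
    /// A minimal sketch, for illustration only: two ordered submissions on
    /// one thread signal the same fence with increasing values, so a later
    /// [`Device::wait`] on either value is meaningful. Neither command buffer
    /// is assumed to write to a surface texture.
    ///
    /// ```ignore
    /// use wgpu_hal::Queue as _;
    ///
    /// unsafe fn submit_two<A: wgpu_hal::Api>(
    ///     queue: &A::Queue,
    ///     fence: &mut A::Fence,
    ///     first: &A::CommandBuffer,
    ///     second: &A::CommandBuffer,
    /// ) -> Result<(), wgpu_hal::DeviceError> {
    ///     unsafe {
    ///         // The second submission sees all results of the first.
    ///         queue.submit(&[first], &[], (&mut *fence, 1))?;
    ///         queue.submit(&[second], &[], (&mut *fence, 2))?;
    ///     }
    ///     Ok(())
    /// }
    /// ```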
1051 unsafe fn submit(
1052 &self,
1053 command_buffers: &[&<Self::A as Api>::CommandBuffer],
1054 surface_textures: &[&<Self::A as Api>::SurfaceTexture],
1055 signal_fence: (&mut <Self::A as Api>::Fence, FenceValue),
1056 ) -> Result<(), DeviceError>;
1057 unsafe fn present(
1058 &self,
1059 surface: &<Self::A as Api>::Surface,
1060 texture: <Self::A as Api>::SurfaceTexture,
1061 ) -> Result<(), SurfaceError>;
1062 unsafe fn get_timestamp_period(&self) -> f32;
1063}
1064
1065/// Encoder and allocation pool for `CommandBuffer`s.
1066///
1067/// A `CommandEncoder` not only constructs `CommandBuffer`s but also
1068/// acts as the allocation pool that owns the buffers' underlying
1069/// storage. Thus, `CommandBuffer`s must not outlive the
1070/// `CommandEncoder` that created them.
1071///
1072/// The life cycle of a `CommandBuffer` is as follows:
1073///
1074/// - Call [`Device::create_command_encoder`] to create a new
1075/// `CommandEncoder`, in the "closed" state.
1076///
1077/// - Call `begin_encoding` on a closed `CommandEncoder` to begin
1078/// recording commands. This puts the `CommandEncoder` in the
1079/// "recording" state.
1080///
1081/// - Call methods like `copy_buffer_to_buffer`, `begin_render_pass`,
1082/// etc. on a "recording" `CommandEncoder` to add commands to the
1083/// list. (If an error occurs, you must call `discard_encoding`; see
1084/// below.)
1085///
1086/// - Call `end_encoding` on a recording `CommandEncoder` to close the
1087/// encoder and construct a fresh `CommandBuffer` consisting of the
1088/// list of commands recorded up to that point.
1089///
1090/// - Call `discard_encoding` on a recording `CommandEncoder` to drop
1091/// the commands recorded thus far and close the encoder. This is
1092/// the only safe thing to do on a `CommandEncoder` if an error has
1093/// occurred while recording commands.
1094///
1095/// - Call `reset_all` on a closed `CommandEncoder`, passing all the
1096/// live `CommandBuffers` built from it. All the `CommandBuffer`s
1097/// are destroyed, and their resources are freed.
1098///
1099/// # Safety
1100///
1101/// - The `CommandEncoder` must be in the states described above to
1102/// make the given calls.
1103///
1104/// - A `CommandBuffer` that has been submitted for execution on the
1105/// GPU must live until its execution is complete.
1106///
1107/// - A `CommandBuffer` must not outlive the `CommandEncoder` that
1108/// built it.
1109///
1110/// - A `CommandEncoder` must not outlive its `Device`.
1111///
1112/// It is the user's responsibility to meet these requirements. This
1113/// allows `CommandEncoder` implementations to keep their state
1114/// tracking to a minimum.
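///
/// # Example
///
/// A minimal sketch of the state machine above, for illustration only: begin
/// recording, run a caller-provided closure that records commands, and either
/// close the encoder into a `CommandBuffer` or discard the work if recording
/// failed.
///
/// ```ignore
/// use wgpu_hal::CommandEncoder as _;
///
/// unsafe fn encode<A: wgpu_hal::Api>(
///     encoder: &mut A::CommandEncoder,
///     record: impl FnOnce(&mut A::CommandEncoder) -> Result<(), wgpu_hal::DeviceError>,
/// ) -> Result<A::CommandBuffer, wgpu_hal::DeviceError> {
///     unsafe {
///         encoder.begin_encoding(Some("example"))?; // "closed" -> "recording"
///         if let Err(e) = record(&mut *encoder) {
///             encoder.discard_encoding(); // drop the commands, back to "closed"
///             return Err(e);
///         }
///         encoder.end_encoding() // back to "closed", yielding the buffer
///     }
/// }
/// ```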
1115pub trait CommandEncoder: WasmNotSendSync + fmt::Debug {
1116 type A: Api;
1117
1118 /// Begin encoding a new command buffer.
1119 ///
1120 /// This puts this `CommandEncoder` in the "recording" state.
1121 ///
1122 /// # Safety
1123 ///
1124 /// This `CommandEncoder` must be in the "closed" state.
1125 unsafe fn begin_encoding(&mut self, label: Label) -> Result<(), DeviceError>;
1126
1127 /// Discard the command list under construction.
1128 ///
1129 /// If an error has occurred while recording commands, this
1130 /// is the only safe thing to do with the encoder.
1131 ///
1132 /// This puts this `CommandEncoder` in the "closed" state.
1133 ///
1134 /// # Safety
1135 ///
1136 /// This `CommandEncoder` must be in the "recording" state.
1137 ///
1138 /// Callers must not assume that implementations of this
1139 /// function are idempotent, and thus should not call it
1140 /// multiple times in a row.
1141 unsafe fn discard_encoding(&mut self);
1142
1143 /// Return a fresh [`CommandBuffer`] holding the recorded commands.
1144 ///
1145 /// The returned [`CommandBuffer`] holds all the commands recorded
1146 /// on this `CommandEncoder` since the last call to
1147 /// [`begin_encoding`].
1148 ///
1149 /// This puts this `CommandEncoder` in the "closed" state.
1150 ///
1151 /// # Safety
1152 ///
1153 /// This `CommandEncoder` must be in the "recording" state.
1154 ///
1155 /// The returned [`CommandBuffer`] must not outlive this
1156 /// `CommandEncoder`. Implementations are allowed to build
1157 /// `CommandBuffer`s that depend on storage owned by this
1158 /// `CommandEncoder`.
1159 ///
1160 /// [`CommandBuffer`]: Api::CommandBuffer
1161 /// [`begin_encoding`]: CommandEncoder::begin_encoding
1162 unsafe fn end_encoding(&mut self) -> Result<<Self::A as Api>::CommandBuffer, DeviceError>;
1163
1164 /// Reclaim all resources belonging to this `CommandEncoder`.
1165 ///
1166 /// # Safety
1167 ///
1168 /// This `CommandEncoder` must be in the "closed" state.
1169 ///
1170 /// The `command_buffers` iterator must produce all the live
1171 /// [`CommandBuffer`]s built using this `CommandEncoder` --- that
1172 /// is, every extant `CommandBuffer` returned from `end_encoding`.
1173 ///
1174 /// [`CommandBuffer`]: Api::CommandBuffer
1175 unsafe fn reset_all<I>(&mut self, command_buffers: I)
1176 where
1177 I: Iterator<Item = <Self::A as Api>::CommandBuffer>;
1178
1179 unsafe fn transition_buffers<'a, T>(&mut self, barriers: T)
1180 where
1181 T: Iterator<Item = BufferBarrier<'a, <Self::A as Api>::Buffer>>;
1182
1183 unsafe fn transition_textures<'a, T>(&mut self, barriers: T)
1184 where
1185 T: Iterator<Item = TextureBarrier<'a, <Self::A as Api>::Texture>>;
1186
1187 // copy operations
1188
1189 unsafe fn clear_buffer(&mut self, buffer: &<Self::A as Api>::Buffer, range: MemoryRange);
1190
1191 unsafe fn copy_buffer_to_buffer<T>(
1192 &mut self,
1193 src: &<Self::A as Api>::Buffer,
1194 dst: &<Self::A as Api>::Buffer,
1195 regions: T,
1196 ) where
1197 T: Iterator<Item = BufferCopy>;
1198
1199 /// Copy from an external image to an internal texture.
1200 /// Works with a single array layer.
1201 /// Note: `dst`'s current usage has to be `TextureUses::COPY_DST`.
1202 /// Note: the copy extent is in physical size (rounded to the block size)
1203 #[cfg(webgl)]
1204 unsafe fn copy_external_image_to_texture<T>(
1205 &mut self,
1206 src: &wgt::ImageCopyExternalImage,
1207 dst: &<Self::A as Api>::Texture,
1208 dst_premultiplication: bool,
1209 regions: T,
1210 ) where
1211 T: Iterator<Item = TextureCopy>;
1212
1213 /// Copy from one texture to another.
1214 /// Works with a single array layer.
1215 /// Note: `dst`'s current usage has to be `TextureUses::COPY_DST`.
1216 /// Note: the copy extent is in physical size (rounded to the block size)
1217 unsafe fn copy_texture_to_texture<T>(
1218 &mut self,
1219 src: &<Self::A as Api>::Texture,
1220 src_usage: TextureUses,
1221 dst: &<Self::A as Api>::Texture,
1222 regions: T,
1223 ) where
1224 T: Iterator<Item = TextureCopy>;
1225
1226 /// Copy from buffer to texture.
1227 /// Works with a single array layer.
1228 /// Note: `dst`'s current usage has to be `TextureUses::COPY_DST`.
1229 /// Note: the copy extent is in physical size (rounded to the block size)
1230 unsafe fn copy_buffer_to_texture<T>(
1231 &mut self,
1232 src: &<Self::A as Api>::Buffer,
1233 dst: &<Self::A as Api>::Texture,
1234 regions: T,
1235 ) where
1236 T: Iterator<Item = BufferTextureCopy>;
1237
1238 /// Copy from texture to buffer.
1239 /// Works with a single array layer.
1240 /// Note: the copy extent is in physical size (rounded to the block size)
1241 unsafe fn copy_texture_to_buffer<T>(
1242 &mut self,
1243 src: &<Self::A as Api>::Texture,
1244 src_usage: TextureUses,
1245 dst: &<Self::A as Api>::Buffer,
1246 regions: T,
1247 ) where
1248 T: Iterator<Item = BufferTextureCopy>;
1249
1250 // pass common
1251
1252 /// Sets the bind group at `index` to `group`.
1253 ///
1254 /// If this is not the first call to `set_bind_group` within the current
1255 /// render or compute pass:
1256 ///
1257 /// - If `layout` contains `n` bind group layouts, then any previously set
1258 /// bind groups at indices `n` or higher are cleared.
1259 ///
1260 /// - If the first `m` bind group layouts of `layout` are equal to those of
1261 /// the previously passed layout, but no more, then any previously set
1262 /// bind groups at indices `m` or higher are cleared.
1263 ///
1264 /// It follows from the above that passing the same layout as before doesn't
1265 /// clear any bind groups.
1266 ///
1267 /// # Safety
1268 ///
1269 /// - This [`CommandEncoder`] must be within a render or compute pass.
1270 ///
1271 /// - `index` must be the valid index of some bind group layout in `layout`.
1272 /// Call this the "relevant bind group layout".
1273 ///
1274 /// - The layout of `group` must be equal to the relevant bind group layout.
1275 ///
1276 /// - The length of `dynamic_offsets` must match the number of buffer
1277 /// bindings [with dynamic offsets][hdo] in the relevant bind group
1278 /// layout.
1279 ///
1280 /// - If those buffer bindings are ordered by increasing [`binding` number]
1281 /// and paired with elements from `dynamic_offsets`, then each offset must
1282 /// be a valid offset for the binding's corresponding buffer in `group`.
1283 ///
1284 /// [hdo]: wgt::BindingType::Buffer::has_dynamic_offset
1285 /// [`binding` number]: wgt::BindGroupLayoutEntry::binding
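    ///
    /// # Example
    ///
    /// A minimal sketch, for illustration only: bind two groups inside a
    /// pass. `layout` is assumed to contain bind group layouts for indices 0
    /// and 1 that match the layouts of `per_frame` and `per_draw`, and
    /// neither group uses dynamic offsets.
    ///
    /// ```ignore
    /// use wgpu_hal::CommandEncoder as _;
    ///
    /// unsafe fn bind_groups<A: wgpu_hal::Api>(
    ///     encoder: &mut A::CommandEncoder,
    ///     layout: &A::PipelineLayout,
    ///     per_frame: &A::BindGroup,
    ///     per_draw: &A::BindGroup,
    /// ) {
    ///     unsafe {
    ///         // No dynamic offsets, so the offset slices are empty.
    ///         encoder.set_bind_group(layout, 0, per_frame, &[]);
    ///         encoder.set_bind_group(layout, 1, per_draw, &[]);
    ///     }
    /// }
    /// ```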
1286 unsafe fn set_bind_group(
1287 &mut self,
1288 layout: &<Self::A as Api>::PipelineLayout,
1289 index: u32,
1290 group: &<Self::A as Api>::BindGroup,
1291 dynamic_offsets: &[wgt::DynamicOffset],
1292 );
1293
1294 /// Sets a range in push constant data.
1295 ///
1296 /// IMPORTANT: while the data is passed as words, the offset is in bytes!
1297 ///
1298 /// # Safety
1299 ///
1300 /// - `offset_bytes` must be a multiple of 4.
1301 /// - The range of push constants written must be valid for the pipeline layout at draw time.
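    ///
    /// # Example
    ///
    /// A minimal sketch, for illustration only: write four 32-bit words at a
    /// byte offset of 16. The pipeline layout is assumed to declare a vertex
    /// push constant range covering bytes 16..32.
    ///
    /// ```ignore
    /// use wgpu_hal::CommandEncoder as _;
    /// use wgpu_types as wgt;
    ///
    /// unsafe fn set_constants<A: wgpu_hal::Api>(
    ///     encoder: &mut A::CommandEncoder,
    ///     layout: &A::PipelineLayout,
    /// ) {
    ///     let words: [u32; 4] = [0, 1, 2, 3];
    ///     unsafe {
    ///         encoder.set_push_constants(
    ///             layout,
    ///             wgt::ShaderStages::VERTEX,
    ///             16,      // offset in *bytes*, a multiple of 4
    ///             &words,  // data in 32-bit words
    ///         );
    ///     }
    /// }
    /// ```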
1302 unsafe fn set_push_constants(
1303 &mut self,
1304 layout: &<Self::A as Api>::PipelineLayout,
1305 stages: wgt::ShaderStages,
1306 offset_bytes: u32,
1307 data: &[u32],
1308 );
1309
1310 unsafe fn insert_debug_marker(&mut self, label: &str);
1311 unsafe fn begin_debug_marker(&mut self, group_label: &str);
1312 unsafe fn end_debug_marker(&mut self);
1313
1314 // queries
1315
1316 /// # Safety:
1317 ///
1318 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1319 unsafe fn begin_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1320 /// # Safety:
1321 ///
1322 /// - If `set` is an occlusion query set, it must be the same one as used in the [`RenderPassDescriptor::occlusion_query_set`] parameter.
1323 unsafe fn end_query(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1324 unsafe fn write_timestamp(&mut self, set: &<Self::A as Api>::QuerySet, index: u32);
1325 unsafe fn reset_queries(&mut self, set: &<Self::A as Api>::QuerySet, range: Range<u32>);
1326 unsafe fn copy_query_results(
1327 &mut self,
1328 set: &<Self::A as Api>::QuerySet,
1329 range: Range<u32>,
1330 buffer: &<Self::A as Api>::Buffer,
1331 offset: wgt::BufferAddress,
1332 stride: wgt::BufferSize,
1333 );
1334
1335 // render passes
1336
1337 /// Begin a new render pass, clearing all active bindings.
1338 ///
1339 /// This clears any bindings established by the following calls:
1340 ///
1341 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1342 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1343 /// - [`begin_query`](CommandEncoder::begin_query)
1344 /// - [`set_render_pipeline`](CommandEncoder::set_render_pipeline)
1345 /// - [`set_index_buffer`](CommandEncoder::set_index_buffer)
1346 /// - [`set_vertex_buffer`](CommandEncoder::set_vertex_buffer)
1347 ///
1348 /// # Safety
1349 ///
1350 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1351 /// by a call to [`end_render_pass`].
1352 ///
1353 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1354 /// by a call to [`end_compute_pass`].
1355 ///
1356 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1357 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1358 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1359 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1360 unsafe fn begin_render_pass(
1361 &mut self,
1362 desc: &RenderPassDescriptor<<Self::A as Api>::QuerySet, <Self::A as Api>::TextureView>,
1363 );
1364
1365 /// End the current render pass.
1366 ///
1367 /// # Safety
1368 ///
1369 /// - There must have been a prior call to [`begin_render_pass`] on this [`CommandEncoder`]
1370 /// that has not been followed by a call to [`end_render_pass`].
1371 ///
1372 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1373 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1374 unsafe fn end_render_pass(&mut self);
1375
1376 unsafe fn set_render_pipeline(&mut self, pipeline: &<Self::A as Api>::RenderPipeline);
1377
1378 unsafe fn set_index_buffer<'a>(
1379 &mut self,
1380 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1381 format: wgt::IndexFormat,
1382 );
1383 unsafe fn set_vertex_buffer<'a>(
1384 &mut self,
1385 index: u32,
1386 binding: BufferBinding<'a, <Self::A as Api>::Buffer>,
1387 );
1388 unsafe fn set_viewport(&mut self, rect: &Rect<f32>, depth_range: Range<f32>);
1389 unsafe fn set_scissor_rect(&mut self, rect: &Rect<u32>);
1390 unsafe fn set_stencil_reference(&mut self, value: u32);
1391 unsafe fn set_blend_constants(&mut self, color: &[f32; 4]);
1392
1393 unsafe fn draw(
1394 &mut self,
1395 first_vertex: u32,
1396 vertex_count: u32,
1397 first_instance: u32,
1398 instance_count: u32,
1399 );
1400 unsafe fn draw_indexed(
1401 &mut self,
1402 first_index: u32,
1403 index_count: u32,
1404 base_vertex: i32,
1405 first_instance: u32,
1406 instance_count: u32,
1407 );
1408 unsafe fn draw_indirect(
1409 &mut self,
1410 buffer: &<Self::A as Api>::Buffer,
1411 offset: wgt::BufferAddress,
1412 draw_count: u32,
1413 );
1414 unsafe fn draw_indexed_indirect(
1415 &mut self,
1416 buffer: &<Self::A as Api>::Buffer,
1417 offset: wgt::BufferAddress,
1418 draw_count: u32,
1419 );
1420 unsafe fn draw_indirect_count(
1421 &mut self,
1422 buffer: &<Self::A as Api>::Buffer,
1423 offset: wgt::BufferAddress,
1424 count_buffer: &<Self::A as Api>::Buffer,
1425 count_offset: wgt::BufferAddress,
1426 max_count: u32,
1427 );
1428 unsafe fn draw_indexed_indirect_count(
1429 &mut self,
1430 buffer: &<Self::A as Api>::Buffer,
1431 offset: wgt::BufferAddress,
1432 count_buffer: &<Self::A as Api>::Buffer,
1433 count_offset: wgt::BufferAddress,
1434 max_count: u32,
1435 );
1436
1437 // compute passes
1438
1439 /// Begin a new compute pass, clearing all active bindings.
1440 ///
1441 /// This clears any bindings established by the following calls:
1442 ///
1443 /// - [`set_bind_group`](CommandEncoder::set_bind_group)
1444 /// - [`set_push_constants`](CommandEncoder::set_push_constants)
1445 /// - [`begin_query`](CommandEncoder::begin_query)
1446 /// - [`set_compute_pipeline`](CommandEncoder::set_compute_pipeline)
1447 ///
1448 /// # Safety
1449 ///
1450 /// - All prior calls to [`begin_render_pass`] on this [`CommandEncoder`] must have been followed
1451 /// by a call to [`end_render_pass`].
1452 ///
1453 /// - All prior calls to [`begin_compute_pass`] on this [`CommandEncoder`] must have been followed
1454 /// by a call to [`end_compute_pass`].
1455 ///
1456 /// [`begin_render_pass`]: CommandEncoder::begin_render_pass
1457 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1458 /// [`end_render_pass`]: CommandEncoder::end_render_pass
1459 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1460 unsafe fn begin_compute_pass(
1461 &mut self,
1462 desc: &ComputePassDescriptor<<Self::A as Api>::QuerySet>,
1463 );
1464
1465 /// End the current compute pass.
1466 ///
1467 /// # Safety
1468 ///
1469 /// - There must have been a prior call to [`begin_compute_pass`] on this [`CommandEncoder`]
1470 /// that has not been followed by a call to [`end_compute_pass`].
1471 ///
1472 /// [`begin_compute_pass`]: CommandEncoder::begin_compute_pass
1473 /// [`end_compute_pass`]: CommandEncoder::end_compute_pass
1474 unsafe fn end_compute_pass(&mut self);
1475
1476 unsafe fn set_compute_pipeline(&mut self, pipeline: &<Self::A as Api>::ComputePipeline);
1477
1478 unsafe fn dispatch(&mut self, count: [u32; 3]);
1479 unsafe fn dispatch_indirect(
1480 &mut self,
1481 buffer: &<Self::A as Api>::Buffer,
1482 offset: wgt::BufferAddress,
1483 );
1484
1485 /// To get the required sizes for the buffer allocations, use `get_acceleration_structure_build_sizes` per descriptor.
1486 /// All buffers must be synchronized externally.
1487 /// Buffer regions that are written to may only be passed once per function call,
1488 /// with the exception of updates in the same descriptor.
1489 /// Consequences of this limitation:
1490 /// - scratch buffers need to be unique
1491 /// - a TLAS can't be built in the same call as a BLAS it contains
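    ///
    /// For example, building a BLAS and then a TLAS that contains it requires
    /// two separate calls, each with its own scratch buffer (a sketch only;
    /// every variable here is hypothetical):
    ///
    /// ```ignore
    /// unsafe {
    ///     // First call: build the bottom-level structure.
    ///     encoder.build_acceleration_structures(1, [BuildAccelerationStructureDescriptor {
    ///         entries: &blas_entries,
    ///         mode: AccelerationStructureBuildMode::Build,
    ///         flags: AccelerationStructureBuildFlags::empty(),
    ///         source_acceleration_structure: None,
    ///         destination_acceleration_structure: &blas,
    ///         scratch_buffer: &blas_scratch,
    ///         scratch_buffer_offset: 0,
    ///     }]);
    ///     // Make the BLAS visible as build input, then build the TLAS in a
    ///     // second call with a different scratch buffer.
    ///     encoder.place_acceleration_structure_barrier(AccelerationStructureBarrier {
    ///         usage: AccelerationStructureUses::BUILD_OUTPUT..AccelerationStructureUses::BUILD_INPUT,
    ///     });
    ///     encoder.build_acceleration_structures(1, [BuildAccelerationStructureDescriptor {
    ///         entries: &tlas_entries,
    ///         mode: AccelerationStructureBuildMode::Build,
    ///         flags: AccelerationStructureBuildFlags::empty(),
    ///         source_acceleration_structure: None,
    ///         destination_acceleration_structure: &tlas,
    ///         scratch_buffer: &tlas_scratch,
    ///         scratch_buffer_offset: 0,
    ///     }]);
    /// }
    /// ```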
1492 unsafe fn build_acceleration_structures<'a, T>(
1493 &mut self,
1494 descriptor_count: u32,
1495 descriptors: T,
1496 ) where
1497 Self::A: 'a,
1498 T: IntoIterator<
1499 Item = BuildAccelerationStructureDescriptor<
1500 'a,
1501 <Self::A as Api>::Buffer,
1502 <Self::A as Api>::AccelerationStructure,
1503 >,
1504 >;
1505
1506 unsafe fn place_acceleration_structure_barrier(
1507 &mut self,
1508 barrier: AccelerationStructureBarrier,
1509 );
1510}
1511
1512bitflags!(
1513 /// Pipeline layout creation flags.
1514 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1515 pub struct PipelineLayoutFlags: u32 {
1516 /// Include support for `first_vertex` / `first_instance` drawing.
1517 const FIRST_VERTEX_INSTANCE = 1 << 0;
1518 /// Include support for the `num_workgroups` builtin.
1519 const NUM_WORK_GROUPS = 1 << 1;
1520 }
1521);
1522
1523bitflags!(
1524 /// Bind group layout creation flags.
1525 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1526 pub struct BindGroupLayoutFlags: u32 {
1527 /// Allows for bind group binding arrays to be shorter than the array in the BGL.
1528 const PARTIALLY_BOUND = 1 << 0;
1529 }
1530);
1531
1532bitflags!(
1533 /// Texture format capability flags.
1534 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1535 pub struct TextureFormatCapabilities: u32 {
1536 /// Format can be sampled.
1537 const SAMPLED = 1 << 0;
1538 /// Format can be sampled with a linear sampler.
1539 const SAMPLED_LINEAR = 1 << 1;
1540 /// Format can be sampled with a min/max reduction sampler.
1541 const SAMPLED_MINMAX = 1 << 2;
1542
1543 /// Format can be used as storage with write-only access.
1544 const STORAGE = 1 << 3;
1545 /// Format can be used as storage with read and read/write access.
1546 const STORAGE_READ_WRITE = 1 << 4;
1547 /// Format can be used as storage with atomics.
1548 const STORAGE_ATOMIC = 1 << 5;
1549
1550 /// Format can be used as color and input attachment.
1551 const COLOR_ATTACHMENT = 1 << 6;
1552 /// Format can be used as color (with blending) and input attachment.
1553 const COLOR_ATTACHMENT_BLEND = 1 << 7;
1554 /// Format can be used as depth-stencil and input attachment.
1555 const DEPTH_STENCIL_ATTACHMENT = 1 << 8;
1556
1557 /// Format can be multisampled by x2.
1558 const MULTISAMPLE_X2 = 1 << 9;
1559 /// Format can be multisampled by x4.
1560 const MULTISAMPLE_X4 = 1 << 10;
1561 /// Format can be multisampled by x8.
1562 const MULTISAMPLE_X8 = 1 << 11;
1563 /// Format can be multisampled by x16.
1564 const MULTISAMPLE_X16 = 1 << 12;
1565
1566 /// Format can be used for render pass resolve targets.
1567 const MULTISAMPLE_RESOLVE = 1 << 13;
1568
1569 /// Format can be copied from.
1570 const COPY_SRC = 1 << 14;
1571 /// Format can be copied to.
1572 const COPY_DST = 1 << 15;
1573 }
1574);
1575
1576bitflags!(
1577 /// Texture format aspect flags.
1578 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1579 pub struct FormatAspects: u8 {
1580 const COLOR = 1 << 0;
1581 const DEPTH = 1 << 1;
1582 const STENCIL = 1 << 2;
1583 const PLANE_0 = 1 << 3;
1584 const PLANE_1 = 1 << 4;
1585 const PLANE_2 = 1 << 5;
1586
1587 const DEPTH_STENCIL = Self::DEPTH.bits() | Self::STENCIL.bits();
1588 }
1589);
1590
1591impl FormatAspects {
1592 pub fn new(format: wgt::TextureFormat, aspect: wgt::TextureAspect) -> Self {
1593 let aspect_mask = match aspect {
1594 wgt::TextureAspect::All => Self::all(),
1595 wgt::TextureAspect::DepthOnly => Self::DEPTH,
1596 wgt::TextureAspect::StencilOnly => Self::STENCIL,
1597 wgt::TextureAspect::Plane0 => Self::PLANE_0,
1598 wgt::TextureAspect::Plane1 => Self::PLANE_1,
1599 wgt::TextureAspect::Plane2 => Self::PLANE_2,
1600 };
1601 Self::from(format) & aspect_mask
1602 }
1603
1604 /// Returns `true` if only one flag is set
1605 pub fn is_one(&self) -> bool {
1606 self.bits().count_ones() == 1
1607 }
1608
1609 pub fn map(&self) -> wgt::TextureAspect {
1610 match *self {
1611 Self::COLOR => wgt::TextureAspect::All,
1612 Self::DEPTH => wgt::TextureAspect::DepthOnly,
1613 Self::STENCIL => wgt::TextureAspect::StencilOnly,
1614 Self::PLANE_0 => wgt::TextureAspect::Plane0,
1615 Self::PLANE_1 => wgt::TextureAspect::Plane1,
1616 Self::PLANE_2 => wgt::TextureAspect::Plane2,
1617 _ => unreachable!(),
1618 }
1619 }
1620}
1621
1622impl From<wgt::TextureFormat> for FormatAspects {
1623 fn from(format: wgt::TextureFormat) -> Self {
1624 match format {
1625 wgt::TextureFormat::Stencil8 => Self::STENCIL,
1626 wgt::TextureFormat::Depth16Unorm
1627 | wgt::TextureFormat::Depth32Float
1628 | wgt::TextureFormat::Depth24Plus => Self::DEPTH,
1629 wgt::TextureFormat::Depth32FloatStencil8 | wgt::TextureFormat::Depth24PlusStencil8 => {
1630 Self::DEPTH_STENCIL
1631 }
1632 wgt::TextureFormat::NV12 => Self::PLANE_0 | Self::PLANE_1,
1633 _ => Self::COLOR,
1634 }
1635 }
1636}
1637
1638bitflags!(
1639 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1640 pub struct MemoryFlags: u32 {
1641 const TRANSIENT = 1 << 0;
1642 const PREFER_COHERENT = 1 << 1;
1643 }
1644);
1645
1646// TODO: it's not intuitive for the backends to consider `LOAD` to be optional.
1647
1648bitflags!(
1649 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1650 pub struct AttachmentOps: u8 {
1651 const LOAD = 1 << 0;
1652 const STORE = 1 << 1;
1653 }
1654);
1655
1656bitflags::bitflags! {
1657 /// Similar to `wgt::BufferUsages` but for internal use.
1658 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1659 pub struct BufferUses: u16 {
1660 /// The argument to a read-only mapping.
1661 const MAP_READ = 1 << 0;
1662 /// The argument to a write-only mapping.
1663 const MAP_WRITE = 1 << 1;
1664 /// The source of a hardware copy.
1665 const COPY_SRC = 1 << 2;
1666 /// The destination of a hardware copy.
1667 const COPY_DST = 1 << 3;
1668 /// The index buffer used for drawing.
1669 const INDEX = 1 << 4;
1670 /// A vertex buffer used for drawing.
1671 const VERTEX = 1 << 5;
1672 /// A uniform buffer bound in a bind group.
1673 const UNIFORM = 1 << 6;
1674 /// A read-only storage buffer used in a bind group.
1675 const STORAGE_READ = 1 << 7;
1676 /// A read-write or write-only buffer used in a bind group.
1677 const STORAGE_READ_WRITE = 1 << 8;
1678 /// The indirect or count buffer in an indirect draw or dispatch.
1679 const INDIRECT = 1 << 9;
1680 /// A buffer used to store query results.
1681 const QUERY_RESOLVE = 1 << 10;
1682 const ACCELERATION_STRUCTURE_SCRATCH = 1 << 11;
1683 const BOTTOM_LEVEL_ACCELERATION_STRUCTURE_INPUT = 1 << 12;
1684 const TOP_LEVEL_ACCELERATION_STRUCTURE_INPUT = 1 << 13;
1685 /// The combination of states that a buffer may be in _at the same time_.
1686 const INCLUSIVE = Self::MAP_READ.bits() | Self::COPY_SRC.bits() |
1687 Self::INDEX.bits() | Self::VERTEX.bits() | Self::UNIFORM.bits() |
1688 Self::STORAGE_READ.bits() | Self::INDIRECT.bits() | Self::BOTTOM_LEVEL_ACCELERATION_STRUCTURE_INPUT.bits() | Self::TOP_LEVEL_ACCELERATION_STRUCTURE_INPUT.bits();
1689 /// The combination of states that a buffer must exclusively be in.
1690 const EXCLUSIVE = Self::MAP_WRITE.bits() | Self::COPY_DST.bits() | Self::STORAGE_READ_WRITE.bits() | Self::ACCELERATION_STRUCTURE_SCRATCH.bits();
1691 /// The combination of all usages that are guaranteed to be ordered by the hardware.
1692 /// If a usage is ordered, then if the buffer state doesn't change between draw calls, there
1693 /// are no barriers needed for synchronization.
1694 const ORDERED = Self::INCLUSIVE.bits() | Self::MAP_WRITE.bits();
1695 }
1696}
1697
1698bitflags::bitflags! {
1699 /// Similar to `wgt::TextureUsages` but for internal use.
1700 #[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
1701 pub struct TextureUses: u16 {
1702 /// The texture is in an unknown state.
1703 const UNINITIALIZED = 1 << 0;
1704 /// Ready to present image to the surface.
1705 const PRESENT = 1 << 1;
1706 /// The source of a hardware copy.
1707 const COPY_SRC = 1 << 2;
1708 /// The destination of a hardware copy.
1709 const COPY_DST = 1 << 3;
1710 /// Read-only sampled or fetched resource.
1711 const RESOURCE = 1 << 4;
1712 /// The color target of a renderpass.
1713 const COLOR_TARGET = 1 << 5;
1714 /// Read-only depth stencil usage.
1715 const DEPTH_STENCIL_READ = 1 << 6;
1716 /// Read-write depth stencil usage
1717 const DEPTH_STENCIL_WRITE = 1 << 7;
1718 /// Read-only storage buffer usage. Corresponds to a UAV in d3d, so is exclusive, despite being read only.
1719 const STORAGE_READ = 1 << 8;
1720 /// Read-write or write-only storage buffer usage.
1721 const STORAGE_READ_WRITE = 1 << 9;
1722 /// The combination of states that a texture may be in _at the same time_.
1723 const INCLUSIVE = Self::COPY_SRC.bits() | Self::RESOURCE.bits() | Self::DEPTH_STENCIL_READ.bits();
1724 /// The combination of states that a texture must exclusively be in.
1725 const EXCLUSIVE = Self::COPY_DST.bits() | Self::COLOR_TARGET.bits() | Self::DEPTH_STENCIL_WRITE.bits() | Self::STORAGE_READ.bits() | Self::STORAGE_READ_WRITE.bits() | Self::PRESENT.bits();
1726 /// The combination of all usages that are guaranteed to be ordered by the hardware.
1727 /// If a usage is ordered, then if the texture state doesn't change between draw calls, there
1728 /// are no barriers needed for synchronization.
1729 const ORDERED = Self::INCLUSIVE.bits() | Self::COLOR_TARGET.bits() | Self::DEPTH_STENCIL_WRITE.bits() | Self::STORAGE_READ.bits();
1730
1731 /// Flag used by the wgpu-core texture tracker to say a texture is in different states for every sub-resource
1732 const COMPLEX = 1 << 10;
1733 /// Flag used by the wgpu-core texture tracker to say that the tracker does not know the state of the sub-resource.
1734 /// This is different from UNINITIALIZED as that says the tracker does know, but the texture has not been initialized.
1735 const UNKNOWN = 1 << 11;
1736 }
1737}
1738
1739#[derive(Clone, Debug)]
1740pub struct InstanceDescriptor<'a> {
1741 pub name: &'a str,
1742 pub flags: wgt::InstanceFlags,
1743 pub dx12_shader_compiler: wgt::Dx12Compiler,
1744 pub gles_minor_version: wgt::Gles3MinorVersion,
1745}
1746
1747#[derive(Clone, Debug)]
1748pub struct Alignments {
1749 /// The alignment of the start of the buffer used as a GPU copy source.
1750 pub buffer_copy_offset: wgt::BufferSize,
1751
1752 /// The alignment of the row pitch of the texture data stored in a buffer that is
1753 /// used in a GPU copy operation.
1754 pub buffer_copy_pitch: wgt::BufferSize,
1755
1756 /// The finest alignment of bound range checking for uniform buffers.
1757 ///
1758 /// When `wgpu_hal` restricts shader references to the [accessible
1759 /// region][ar] of a [`Uniform`] buffer, the size of the accessible region
1760 /// is the bind group binding's stated [size], rounded up to the next
1761 /// multiple of this value.
1762 ///
1763 /// We don't need an analogous field for storage buffer bindings, because
1764 /// all our backends promise to enforce the size at least to a four-byte
1765 /// alignment, and `wgpu_hal` requires bound range lengths to be a multiple
1766 /// of four anyway.
1767 ///
1768 /// [ar]: struct.BufferBinding.html#accessible-region
1769 /// [`Uniform`]: wgt::BufferBindingType::Uniform
1770 /// [size]: BufferBinding::size
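    ///
    /// For example, with an alignment of 256 and a binding whose stated size
    /// is 68 bytes, the accessible region is 256 bytes. A sketch of the
    /// rounding (the concrete numbers are hypothetical):
    ///
    /// ```ignore
    /// let alignment = 256u64; // `uniform_bounds_check_alignment`
    /// let binding_size = 68u64;
    /// let accessible = (binding_size + alignment - 1) / alignment * alignment;
    /// assert_eq!(accessible, 256);
    /// ```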
1771 pub uniform_bounds_check_alignment: wgt::BufferSize,
1772}
1773
1774#[derive(Clone, Debug)]
1775pub struct Capabilities {
1776 pub limits: wgt::Limits,
1777 pub alignments: Alignments,
1778 pub downlevel: wgt::DownlevelCapabilities,
1779}
1780
1781#[derive(Debug)]
1782pub struct ExposedAdapter<A: Api> {
1783 pub adapter: A::Adapter,
1784 pub info: wgt::AdapterInfo,
1785 pub features: wgt::Features,
1786 pub capabilities: Capabilities,
1787}
1788
1789/// Describes a `Surface`'s presentation capabilities.
1790/// Fetch this with [Adapter::surface_capabilities].
1791#[derive(Debug, Clone)]
1792pub struct SurfaceCapabilities {
1793 /// List of supported texture formats.
1794 ///
1795 /// Must contain at least one format.
1796 pub formats: Vec<wgt::TextureFormat>,
1797
1798 /// Range for the number of queued frames.
1799 ///
1800 /// Depending on the backend, this either sets the swapchain frame count to `value + 1`, calls `SetMaximumFrameLatency` with the given value,
1801 /// or uses a wait-for-present in the acquire method to limit rendering as if the swapchain had `value + 1` frames.
1802 ///
1803 /// - `maximum_frame_latency.start` must be at least 1.
1804 /// - `maximum_frame_latency.end` must be greater than or equal to `maximum_frame_latency.start`.
1805 pub maximum_frame_latency: RangeInclusive<u32>,
1806
1807 /// Current extent of the surface, if known.
1808 pub current_extent: Option<wgt::Extent3d>,
1809
1810 /// Supported texture usage flags.
1811 ///
1812 /// Must have at least `TextureUses::COLOR_TARGET`
1813 pub usage: TextureUses,
1814
1815 /// List of supported V-sync modes.
1816 ///
1817 /// Must contain at least one mode.
1818 pub present_modes: Vec<wgt::PresentMode>,
1819
1820 /// List of supported alpha composition modes.
1821 ///
1822 /// Must contain at least one mode.
1823 pub composite_alpha_modes: Vec<wgt::CompositeAlphaMode>,
1824}
1825
1826#[derive(Debug)]
1827pub struct AcquiredSurfaceTexture<A: Api> {
1828 pub texture: A::SurfaceTexture,
1829 /// True if the presentation configuration no longer matches
1830 /// the surface properties exactly, but can still be used to present
1831 /// to the surface successfully.
1832 pub suboptimal: bool,
1833}
1834
1835#[derive(Debug)]
1836pub struct OpenDevice<A: Api> {
1837 pub device: A::Device,
1838 pub queue: A::Queue,
1839}
1840
1841#[derive(Clone, Debug)]
1842pub struct BufferMapping {
1843 pub ptr: NonNull<u8>,
1844 pub is_coherent: bool,
1845}
1846
1847#[derive(Clone, Debug)]
1848pub struct BufferDescriptor<'a> {
1849 pub label: Label<'a>,
1850 pub size: wgt::BufferAddress,
1851 pub usage: BufferUses,
1852 pub memory_flags: MemoryFlags,
1853}
1854
1855#[derive(Clone, Debug)]
1856pub struct TextureDescriptor<'a> {
1857 pub label: Label<'a>,
1858 pub size: wgt::Extent3d,
1859 pub mip_level_count: u32,
1860 pub sample_count: u32,
1861 pub dimension: wgt::TextureDimension,
1862 pub format: wgt::TextureFormat,
1863 pub usage: TextureUses,
1864 pub memory_flags: MemoryFlags,
1865 /// Allows views of this texture to have a different format
1866 /// than the texture does.
1867 pub view_formats: Vec<wgt::TextureFormat>,
1868}
1869
1870impl TextureDescriptor<'_> {
1871 pub fn copy_extent(&self) -> CopyExtent {
1872 CopyExtent::map_extent_to_copy_size(&self.size, self.dimension)
1873 }
1874
1875 pub fn is_cube_compatible(&self) -> bool {
1876 self.dimension == wgt::TextureDimension::D2
1877 && self.size.depth_or_array_layers % 6 == 0
1878 && self.sample_count == 1
1879 && self.size.width == self.size.height
1880 }
1881
1882 pub fn array_layer_count(&self) -> u32 {
1883 match self.dimension {
1884 wgt::TextureDimension::D1 | wgt::TextureDimension::D3 => 1,
1885 wgt::TextureDimension::D2 => self.size.depth_or_array_layers,
1886 }
1887 }
1888}
1889
1890/// TextureView descriptor.
1891///
1892/// Valid usage:
1893/// - `format` has to be the same as `TextureDescriptor::format`
1894/// - `dimension` has to be compatible with `TextureDescriptor::dimension`
1895/// - `usage` has to be a subset of `TextureDescriptor::usage`
1896/// - `range` has to be a subset of the parent texture
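///
/// For example, a simple 2D color view over the whole texture might look like
/// this (a sketch; the format is hypothetical):
///
/// ```ignore
/// let view_desc = TextureViewDescriptor {
///     label: None,
///     format: wgt::TextureFormat::Rgba8Unorm,
///     dimension: wgt::TextureViewDimension::D2,
///     usage: TextureUses::RESOURCE,
///     range: wgt::ImageSubresourceRange::default(),
/// };
/// ```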
1897#[derive(Clone, Debug)]
1898pub struct TextureViewDescriptor<'a> {
1899 pub label: Label<'a>,
1900 pub format: wgt::TextureFormat,
1901 pub dimension: wgt::TextureViewDimension,
1902 pub usage: TextureUses,
1903 pub range: wgt::ImageSubresourceRange,
1904}
1905
1906#[derive(Clone, Debug)]
1907pub struct SamplerDescriptor<'a> {
1908 pub label: Label<'a>,
1909 pub address_modes: [wgt::AddressMode; 3],
1910 pub mag_filter: wgt::FilterMode,
1911 pub min_filter: wgt::FilterMode,
1912 pub mipmap_filter: wgt::FilterMode,
1913 pub lod_clamp: Range<f32>,
1914 pub compare: Option<wgt::CompareFunction>,
1915 // Must be in the range [1, 16].
1916 //
1917 // Anisotropic filtering must be supported if this is not 1.
1918 pub anisotropy_clamp: u16,
1919 pub border_color: Option<wgt::SamplerBorderColor>,
1920}
1921
1922/// BindGroupLayout descriptor.
1923///
1924/// Valid usage:
1925/// - `entries` are sorted by ascending `wgt::BindGroupLayoutEntry::binding`
1926#[derive(Clone, Debug)]
1927pub struct BindGroupLayoutDescriptor<'a> {
1928 pub label: Label<'a>,
1929 pub flags: BindGroupLayoutFlags,
1930 pub entries: &'a [wgt::BindGroupLayoutEntry],
1931}
1932
1933#[derive(Clone, Debug)]
1934pub struct PipelineLayoutDescriptor<'a, B: DynBindGroupLayout + ?Sized> {
1935 pub label: Label<'a>,
1936 pub flags: PipelineLayoutFlags,
1937 pub bind_group_layouts: &'a [&'a B],
1938 pub push_constant_ranges: &'a [wgt::PushConstantRange],
1939}
1940
1941/// A region of a buffer made visible to shaders via a [`BindGroup`].
1942///
1943/// [`BindGroup`]: Api::BindGroup
1944///
1945/// ## Accessible region
1946///
1947/// `wgpu_hal` guarantees that shaders compiled with
1948/// [`ShaderModuleDescriptor::runtime_checks`] set to `true` cannot read or
1949/// write data via this binding outside the *accessible region* of [`buffer`]:
1950///
1951/// - The accessible region starts at [`offset`].
1952///
1953/// - For [`Storage`] bindings, the size of the accessible region is [`size`],
1954/// which must be a multiple of 4.
1955///
1956/// - For [`Uniform`] bindings, the size of the accessible region is [`size`]
1957/// rounded up to the next multiple of
1958/// [`Alignments::uniform_bounds_check_alignment`].
1959///
1960/// Note that this guarantee is stricter than WGSL's requirements for
1961/// [out-of-bounds accesses][woob], as WGSL allows them to return values from
1962/// elsewhere in the buffer. But this guarantee is necessary anyway, to permit
1963/// `wgpu-core` to avoid clearing uninitialized regions of buffers that will
1964/// never be read by the application before they are overwritten. This
1965/// optimization consults bind group buffer binding regions to determine which
1966/// parts of which buffers shaders might observe. This optimization is only
1967/// sound if shader access is bounds-checked.
1968///
1969/// [`buffer`]: BufferBinding::buffer
1970/// [`offset`]: BufferBinding::offset
1971/// [`size`]: BufferBinding::size
1972/// [`Storage`]: wgt::BufferBindingType::Storage
1973/// [`Uniform`]: wgt::BufferBindingType::Uniform
1974/// [woob]: https://gpuweb.github.io/gpuweb/wgsl/#out-of-bounds-access-sec
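///
/// For example, the following binding exposes bytes `256..512` of `buffer` to
/// the shader when used as a [`Storage`] binding (a sketch; `buffer` is
/// hypothetical):
///
/// ```ignore
/// let binding = BufferBinding {
///     buffer: &buffer,
///     offset: 256,
///     size: wgt::BufferSize::new(256), // `None` would extend to the end of the buffer
/// };
/// ```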
1975#[derive(Debug)]
1976pub struct BufferBinding<'a, B: DynBuffer + ?Sized> {
1977 /// The buffer being bound.
1978 pub buffer: &'a B,
1979
1980 /// The offset at which the bound region starts.
1981 ///
1982 /// This must be less than the size of the buffer. Some back ends
1983 /// cannot tolerate zero-length regions; for example, see
1984 /// [VUID-VkDescriptorBufferInfo-offset-00340][340] and
1985 /// [VUID-VkDescriptorBufferInfo-range-00341][341], or the
1986 /// documentation for GLES's [glBindBufferRange][bbr].
1987 ///
1988 /// [340]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-offset-00340
1989 /// [341]: https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#VUID-VkDescriptorBufferInfo-range-00341
1990 /// [bbr]: https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glBindBufferRange.xhtml
1991 pub offset: wgt::BufferAddress,
1992
1993 /// The size of the region bound, in bytes.
1994 ///
1995 /// If `None`, the region extends from `offset` to the end of the
1996 /// buffer. Given the restrictions on `offset`, this means that
1997 /// the size is always greater than zero.
1998 pub size: Option<wgt::BufferSize>,
1999}
2000
2001impl<'a, T: DynBuffer + ?Sized> Clone for BufferBinding<'a, T> {
2002 fn clone(&self) -> Self {
2003 BufferBinding {
2004 buffer: self.buffer,
2005 offset: self.offset,
2006 size: self.size,
2007 }
2008 }
2009}
2010
2011#[derive(Debug)]
2012pub struct TextureBinding<'a, T: DynTextureView + ?Sized> {
2013 pub view: &'a T,
2014 pub usage: TextureUses,
2015}
2016
2017impl<'a, T: DynTextureView + ?Sized> Clone for TextureBinding<'a, T> {
2018 fn clone(&self) -> Self {
2019 TextureBinding {
2020 view: self.view,
2021 usage: self.usage,
2022 }
2023 }
2024}
2025
2026#[derive(Clone, Debug)]
2027pub struct BindGroupEntry {
2028 pub binding: u32,
2029 pub resource_index: u32,
2030 pub count: u32,
2031}
2032
2033/// BindGroup descriptor.
2034///
2035/// Valid usage:
2036/// - `entries` has to be sorted by ascending `BindGroupEntry::binding`
2037/// - `entries` has to have the same set of `BindGroupEntry::binding` as `layout`
2038/// - each entry has to be compatible with the `layout`
2039/// - each entry's `BindGroupEntry::resource_index` is within range
2040/// of the corresponding resource array, selected by the relevant
2041/// `BindGroupLayoutEntry`.
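///
/// For example, a bind group with a uniform buffer at binding 0 and a sampled
/// texture at binding 1 might be described as follows (a sketch of the
/// intended shape; `bgl`, `uniforms`, and `view` are hypothetical):
///
/// ```ignore
/// let desc = BindGroupDescriptor {
///     label: None,
///     layout: &bgl,
///     buffers: &[BufferBinding { buffer: &uniforms, offset: 0, size: None }],
///     samplers: &[],
///     textures: &[TextureBinding { view: &view, usage: TextureUses::RESOURCE }],
///     entries: &[
///         BindGroupEntry { binding: 0, resource_index: 0, count: 1 },
///         BindGroupEntry { binding: 1, resource_index: 0, count: 1 },
///     ],
///     acceleration_structures: &[],
/// };
/// ```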
2042#[derive(Clone, Debug)]
2043pub struct BindGroupDescriptor<
2044 'a,
2045 Bgl: DynBindGroupLayout + ?Sized,
2046 B: DynBuffer + ?Sized,
2047 S: DynSampler + ?Sized,
2048 T: DynTextureView + ?Sized,
2049 A: DynAccelerationStructure + ?Sized,
2050> {
2051 pub label: Label<'a>,
2052 pub layout: &'a Bgl,
2053 pub buffers: &'a [BufferBinding<'a, B>],
2054 pub samplers: &'a [&'a S],
2055 pub textures: &'a [TextureBinding<'a, T>],
2056 pub entries: &'a [BindGroupEntry],
2057 pub acceleration_structures: &'a [&'a A],
2058}
2059
2060#[derive(Clone, Debug)]
2061pub struct CommandEncoderDescriptor<'a, Q: DynQueue + ?Sized> {
2062 pub label: Label<'a>,
2063 pub queue: &'a Q,
2064}
2065
2066/// Naga shader module.
2067pub struct NagaShader {
2068 /// Shader module IR.
2069 pub module: Cow<'static, naga::Module>,
2070 /// Analysis information of the module.
2071 pub info: naga::valid::ModuleInfo,
2072 /// Source code for debugging.
2073 pub debug_source: Option<DebugSource>,
2074}
2075
2076// Custom implementation avoids the need to generate Debug impl code
2077// for the whole Naga module and info.
2078impl fmt::Debug for NagaShader {
2079 fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
2080 write!(formatter, "Naga shader")
2081 }
2082}
2083
2084/// Shader input.
2085#[allow(clippy::large_enum_variant)]
2086pub enum ShaderInput<'a> {
2087 Naga(NagaShader),
2088 SpirV(&'a [u32]),
2089}
2090
2091pub struct ShaderModuleDescriptor<'a> {
2092 pub label: Label<'a>,
2093
2094 /// Enforce bounds checks in shaders, even if the underlying driver doesn't
2095 /// support doing so natively.
2096 ///
2097 /// When this is `true`, `wgpu_hal` promises that shaders can only read or
2098 /// write the [accessible region][ar] of a bindgroup's buffer bindings. If
2099 /// the underlying graphics platform cannot implement these bounds checks
2100 /// itself, `wgpu_hal` will inject bounds checks before presenting the
2101 /// shader to the platform.
2102 ///
2103 /// When this is `false`, `wgpu_hal` only enforces such bounds checks if the
2104 /// underlying platform provides a way to do so itself. `wgpu_hal` does not
2105 /// itself add any bounds checks to generated shader code.
2106 ///
2107 /// Note that `wgpu_hal` users may try to initialize only those portions of
2108 /// buffers that they anticipate might be read from. Passing `false` here
2109 /// may allow shaders to see wider regions of the buffers than expected,
2110 /// making such deferred initialization visible to the application.
2111 ///
2112 /// [ar]: struct.BufferBinding.html#accessible-region
2113 pub runtime_checks: bool,
2114}
2115
2116#[derive(Debug, Clone)]
2117pub struct DebugSource {
2118 pub file_name: Cow<'static, str>,
2119 pub source_code: Cow<'static, str>,
2120}
2121
2122/// Describes a programmable pipeline stage.
2123#[derive(Debug)]
2124pub struct ProgrammableStage<'a, M: DynShaderModule + ?Sized> {
2125 /// The compiled shader module for this stage.
2126 pub module: &'a M,
2127 /// The name of the entry point in the compiled shader. There must be a function with this name
2128 /// in the shader.
2129 pub entry_point: &'a str,
2130 /// Pipeline constants
2131 pub constants: &'a naga::back::PipelineConstants,
2132 /// Whether workgroup scoped memory will be initialized with zero values for this stage.
2133 ///
2134 /// This is required by the WebGPU spec, but may have overhead which can be avoided
2135 /// for cross-platform applications
2136 pub zero_initialize_workgroup_memory: bool,
2137}
2138
2139impl<M: DynShaderModule + ?Sized> Clone for ProgrammableStage<'_, M> {
2140 fn clone(&self) -> Self {
2141 Self {
2142 module: self.module,
2143 entry_point: self.entry_point,
2144 constants: self.constants,
2145 zero_initialize_workgroup_memory: self.zero_initialize_workgroup_memory,
2146 }
2147 }
2148}
2149
2150/// Describes a compute pipeline.
2151#[derive(Clone, Debug)]
2152pub struct ComputePipelineDescriptor<
2153 'a,
2154 Pl: DynPipelineLayout + ?Sized,
2155 M: DynShaderModule + ?Sized,
2156 Pc: DynPipelineCache + ?Sized,
2157> {
2158 pub label: Label<'a>,
2159 /// The layout of bind groups for this pipeline.
2160 pub layout: &'a Pl,
2161 /// The compiled compute stage and its entry point.
2162 pub stage: ProgrammableStage<'a, M>,
2163 /// The cache which will be used and filled when compiling this pipeline
2164 pub cache: Option<&'a Pc>,
2165}
2166
2167pub struct PipelineCacheDescriptor<'a> {
2168 pub label: Label<'a>,
2169 pub data: Option<&'a [u8]>,
2170}
2171
2172/// Describes how the vertex buffer is interpreted.
2173#[derive(Clone, Debug)]
2174pub struct VertexBufferLayout<'a> {
2175 /// The stride, in bytes, between elements of this buffer.
2176 pub array_stride: wgt::BufferAddress,
2177 /// How often this vertex buffer is "stepped" forward.
2178 pub step_mode: wgt::VertexStepMode,
2179 /// The list of attributes which comprise a single vertex.
2180 pub attributes: &'a [wgt::VertexAttribute],
2181}
2182
2183/// Describes a render (graphics) pipeline.
2184#[derive(Clone, Debug)]
2185pub struct RenderPipelineDescriptor<
2186 'a,
2187 Pl: DynPipelineLayout + ?Sized,
2188 M: DynShaderModule + ?Sized,
2189 Pc: DynPipelineCache + ?Sized,
2190> {
2191 pub label: Label<'a>,
2192 /// The layout of bind groups for this pipeline.
2193 pub layout: &'a Pl,
2194 /// The format of any vertex buffers used with this pipeline.
2195 pub vertex_buffers: &'a [VertexBufferLayout<'a>],
2196 /// The vertex stage for this pipeline.
2197 pub vertex_stage: ProgrammableStage<'a, M>,
2198 /// The properties of the pipeline at the primitive assembly and rasterization level.
2199 pub primitive: wgt::PrimitiveState,
2200 /// The effect of draw calls on the depth and stencil aspects of the output target, if any.
2201 pub depth_stencil: Option<wgt::DepthStencilState>,
2202 /// The multi-sampling properties of the pipeline.
2203 pub multisample: wgt::MultisampleState,
2204 /// The fragment stage for this pipeline.
2205 pub fragment_stage: Option<ProgrammableStage<'a, M>>,
2206 /// The effect of draw calls on the color aspect of the output target.
2207 pub color_targets: &'a [Option<wgt::ColorTargetState>],
2208 /// If the pipeline will be used with a multiview render pass, this indicates how many array
2209 /// layers the attachments will have.
2210 pub multiview: Option<NonZeroU32>,
2211 /// The cache which will be used and filled when compiling this pipeline
2212 pub cache: Option<&'a Pc>,
2213}
2214
2215#[derive(Debug, Clone)]
2216pub struct SurfaceConfiguration {
2217 /// Maximum number of queued frames. Must be in
2218 /// `SurfaceCapabilities::maximum_frame_latency` range.
2219 pub maximum_frame_latency: u32,
2220 /// Vertical synchronization mode.
2221 pub present_mode: wgt::PresentMode,
2222 /// Alpha composition mode.
2223 pub composite_alpha_mode: wgt::CompositeAlphaMode,
2224 /// Format of the surface textures.
2225 pub format: wgt::TextureFormat,
2226 /// Requested texture extent. Must be compatible with
2227 /// `SurfaceCapabilities::current_extent`, if one is reported.
2228 pub extent: wgt::Extent3d,
2229 /// Allowed usage of surface textures.
2230 pub usage: TextureUses,
2231 /// Allows views of the swapchain texture to have a different format
2232 /// than the texture does.
2233 pub view_formats: Vec<wgt::TextureFormat>,
2234}
2235
2236#[derive(Debug, Clone)]
2237pub struct Rect<T> {
2238 pub x: T,
2239 pub y: T,
2240 pub w: T,
2241 pub h: T,
2242}
2243
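/// A buffer state transition, expressed as the `Range` of [`BufferUses`] the
/// buffer moves between. For example, a sketch of transitioning a buffer that
/// was just filled by a copy so it can be read as a vertex buffer (`buffer`
/// is hypothetical):
///
/// ```ignore
/// let barrier = BufferBarrier {
///     buffer: &buffer,
///     usage: BufferUses::COPY_DST..BufferUses::VERTEX,
/// };
/// ```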
2244#[derive(Debug, Clone)]
2245pub struct BufferBarrier<'a, B: DynBuffer + ?Sized> {
2246 pub buffer: &'a B,
2247 pub usage: Range<BufferUses>,
2248}
2249
2250#[derive(Debug, Clone)]
2251pub struct TextureBarrier<'a, T: DynTexture + ?Sized> {
2252 pub texture: &'a T,
2253 pub range: wgt::ImageSubresourceRange,
2254 pub usage: Range<TextureUses>,
2255}
2256
2257#[derive(Clone, Copy, Debug)]
2258pub struct BufferCopy {
2259 pub src_offset: wgt::BufferAddress,
2260 pub dst_offset: wgt::BufferAddress,
2261 pub size: wgt::BufferSize,
2262}
2263
2264#[derive(Clone, Debug)]
2265pub struct TextureCopyBase {
2266 pub mip_level: u32,
2267 pub array_layer: u32,
2268 /// Origin within a texture.
2269 /// Note: for 1D and 2D textures, Z must be 0.
2270 pub origin: wgt::Origin3d,
2271 pub aspect: FormatAspects,
2272}
2273
2274#[derive(Clone, Copy, Debug)]
2275pub struct CopyExtent {
2276 pub width: u32,
2277 pub height: u32,
2278 pub depth: u32,
2279}
2280
2281#[derive(Clone, Debug)]
2282pub struct TextureCopy {
2283 pub src_base: TextureCopyBase,
2284 pub dst_base: TextureCopyBase,
2285 pub size: CopyExtent,
2286}
2287
2288#[derive(Clone, Debug)]
2289pub struct BufferTextureCopy {
2290 pub buffer_layout: wgt::ImageDataLayout,
2291 pub texture_base: TextureCopyBase,
2292 pub size: CopyExtent,
2293}
2294
2295#[derive(Clone, Debug)]
2296pub struct Attachment<'a, T: DynTextureView + ?Sized> {
2297 pub view: &'a T,
2298 /// Contains either a single mutating usage as a target,
2299 /// or a valid combination of read-only usages.
2300 pub usage: TextureUses,
2301}
2302
2303#[derive(Clone, Debug)]
2304pub struct ColorAttachment<'a, T: DynTextureView + ?Sized> {
2305 pub target: Attachment<'a, T>,
2306 pub resolve_target: Option<Attachment<'a, T>>,
2307 pub ops: AttachmentOps,
2308 pub clear_value: wgt::Color,
2309}
2310
2311#[derive(Clone, Debug)]
2312pub struct DepthStencilAttachment<'a, T: DynTextureView + ?Sized> {
2313 pub target: Attachment<'a, T>,
2314 pub depth_ops: AttachmentOps,
2315 pub stencil_ops: AttachmentOps,
2316 pub clear_value: (f32, u32),
2317}
2318
2319#[derive(Clone, Debug)]
2320pub struct PassTimestampWrites<'a, Q: DynQuerySet + ?Sized> {
2321 pub query_set: &'a Q,
2322 pub beginning_of_pass_write_index: Option<u32>,
2323 pub end_of_pass_write_index: Option<u32>,
2324}
2325
2326#[derive(Clone, Debug)]
2327pub struct RenderPassDescriptor<'a, Q: DynQuerySet + ?Sized, T: DynTextureView + ?Sized> {
2328 pub label: Label<'a>,
2329 pub extent: wgt::Extent3d,
2330 pub sample_count: u32,
2331 pub color_attachments: &'a [Option<ColorAttachment<'a, T>>],
2332 pub depth_stencil_attachment: Option<DepthStencilAttachment<'a, T>>,
2333 pub multiview: Option<NonZeroU32>,
2334 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2335 pub occlusion_query_set: Option<&'a Q>,
2336}
2337
2338#[derive(Clone, Debug)]
2339pub struct ComputePassDescriptor<'a, Q: DynQuerySet + ?Sized> {
2340 pub label: Label<'a>,
2341 pub timestamp_writes: Option<PassTimestampWrites<'a, Q>>,
2342}
2343
2344/// Stores the text of any validation errors that have occurred since
2345/// the last call to `get_and_reset`.
2346///
2347/// Each value is a validation error and a message associated with it,
2348/// or `None` if the error has no message from the api.
2349///
2350/// This is used for internal wgpu testing only and _must not_ be used
2351/// as a way to check for errors.
2352///
2353/// This works as a static because `cargo nextest` runs all of our
2354/// tests in separate processes, so each test gets its own canary.
2355///
2356/// This prevents the issue of one validation error terminating the
2357/// entire process.
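///
/// For example, an internal test might drain it like this (a sketch):
///
/// ```ignore
/// let errors = VALIDATION_CANARY.get_and_reset();
/// assert!(errors.is_empty(), "unexpected validation errors: {errors:?}");
/// ```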
2358pub static VALIDATION_CANARY: ValidationCanary = ValidationCanary {
2359 inner: Mutex::new(Vec::new()),
2360};
2361
2362/// Flag for internal testing.
2363pub struct ValidationCanary {
2364 inner: Mutex<Vec<String>>,
2365}
2366
2367impl ValidationCanary {
2368 #[allow(dead_code)] // in some configurations this function is dead
2369 fn add(&self, msg: String) {
2370 self.inner.lock().push(msg);
2371 }
2372
2373 /// Returns any API validation errors that have occurred in this process
2374 /// since the last call to this function.
2375 pub fn get_and_reset(&self) -> Vec<String> {
2376 self.inner.lock().drain(..).collect()
2377 }
2378}
2379
2380#[test]
2381fn test_default_limits() {
2382 let limits = wgt::Limits::default();
2383 assert!(limits.max_bind_groups <= MAX_BIND_GROUPS as u32);
2384}
2385
2386#[derive(Clone, Debug)]
2387pub struct AccelerationStructureDescriptor<'a> {
2388 pub label: Label<'a>,
2389 pub size: wgt::BufferAddress,
2390 pub format: AccelerationStructureFormat,
2391}
2392
2393#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2394pub enum AccelerationStructureFormat {
2395 TopLevel,
2396 BottomLevel,
2397}
2398
2399#[derive(Debug, Clone, Copy, Eq, PartialEq)]
2400pub enum AccelerationStructureBuildMode {
2401 Build,
2402 Update,
2403}
2404
2405/// The sizes required for an acceleration structure built from a given set of entries and flags.
2406#[derive(Copy, Clone, Debug, Default, Eq, PartialEq)]
2407pub struct AccelerationStructureBuildSizes {
2408 pub acceleration_structure_size: wgt::BufferAddress,
2409 pub update_scratch_size: wgt::BufferAddress,
2410 pub build_scratch_size: wgt::BufferAddress,
2411}
2412
2413/// Updates use `source_acceleration_structure` if present; otherwise the update is performed in place.
2414/// For updates, only the data is allowed to change (not the metadata or sizes).
2415#[derive(Clone, Debug)]
2416pub struct BuildAccelerationStructureDescriptor<
2417 'a,
2418 B: DynBuffer + ?Sized,
2419 A: DynAccelerationStructure + ?Sized,
2420> {
2421 pub entries: &'a AccelerationStructureEntries<'a, B>,
2422 pub mode: AccelerationStructureBuildMode,
2423 pub flags: AccelerationStructureBuildFlags,
2424 pub source_acceleration_structure: Option<&'a A>,
2425 pub destination_acceleration_structure: &'a A,
2426 pub scratch_buffer: &'a B,
2427 pub scratch_buffer_offset: wgt::BufferAddress,
2428}
2429
2430/// - All buffers, buffer addresses and offsets will be ignored.
2431/// - The build mode will be ignored.
2432/// - Reducing the number of Instances, Triangle groups, or AABB groups (or the number of Triangles/AABBs in the corresponding groups)
2433/// may result in reduced size requirements.
2434/// - Any other change may result in a bigger or smaller size requirement.
2435#[derive(Clone, Debug)]
2436pub struct GetAccelerationStructureBuildSizesDescriptor<'a, B: DynBuffer + ?Sized> {
2437 pub entries: &'a AccelerationStructureEntries<'a, B>,
2438 pub flags: AccelerationStructureBuildFlags,
2439}
2440
2441/// Entries for a single descriptor
2442/// * `Instances` - Multiple instances for a top-level acceleration structure
2443/// * `Triangles` - Multiple triangle meshes for a bottom-level acceleration structure
2444/// * `AABBs` - List of lists of axis-aligned bounding boxes for a bottom-level acceleration structure
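///
/// For example, a bottom-level structure over a single non-indexed triangle
/// mesh might use (a sketch; `vertex_buffer` and the counts are hypothetical):
///
/// ```ignore
/// let entries = AccelerationStructureEntries::Triangles(vec![
///     AccelerationStructureTriangles {
///         vertex_buffer: Some(&vertex_buffer),
///         vertex_format: wgt::VertexFormat::Float32x3,
///         first_vertex: 0,
///         vertex_count: 3,
///         vertex_stride: 12,
///         indices: None,
///         transform: None,
///         flags: AccelerationStructureGeometryFlags::empty(),
///     },
/// ]);
/// ```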
2445#[derive(Debug)]
2446pub enum AccelerationStructureEntries<'a, B: DynBuffer + ?Sized> {
2447 Instances(AccelerationStructureInstances<'a, B>),
2448 Triangles(Vec<AccelerationStructureTriangles<'a, B>>),
2449 AABBs(Vec<AccelerationStructureAABBs<'a, B>>),
2450}
2451
2452/// * `first_vertex` - offset in the vertex buffer (as number of vertices)
2453/// * `indices` - optional index buffer with attributes
2454/// * `transform` - optional transform
2455#[derive(Clone, Debug)]
2456pub struct AccelerationStructureTriangles<'a, B: DynBuffer + ?Sized> {
2457 pub vertex_buffer: Option<&'a B>,
2458 pub vertex_format: wgt::VertexFormat,
2459 pub first_vertex: u32,
2460 pub vertex_count: u32,
2461 pub vertex_stride: wgt::BufferAddress,
2462 pub indices: Option<AccelerationStructureTriangleIndices<'a, B>>,
2463 pub transform: Option<AccelerationStructureTriangleTransform<'a, B>>,
2464 pub flags: AccelerationStructureGeometryFlags,
2465}
2466
2467/// * `offset` - offset in bytes
2468#[derive(Clone, Debug)]
2469pub struct AccelerationStructureAABBs<'a, B: DynBuffer + ?Sized> {
2470 pub buffer: Option<&'a B>,
2471 pub offset: u32,
2472 pub count: u32,
2473 pub stride: wgt::BufferAddress,
2474 pub flags: AccelerationStructureGeometryFlags,
2475}
2476
2477/// * `offset` - offset in bytes
2478#[derive(Clone, Debug)]
2479pub struct AccelerationStructureInstances<'a, B: DynBuffer + ?Sized> {
2480 pub buffer: Option<&'a B>,
2481 pub offset: u32,
2482 pub count: u32,
2483}
2484
2485/// * `offset` - offset in bytes
2486#[derive(Clone, Debug)]
2487pub struct AccelerationStructureTriangleIndices<'a, B: DynBuffer + ?Sized> {
2488 pub format: wgt::IndexFormat,
2489 pub buffer: Option<&'a B>,
2490 pub offset: u32,
2491 pub count: u32,
2492}
2493
2494/// * `offset` - offset in bytes
2495#[derive(Clone, Debug)]
2496pub struct AccelerationStructureTriangleTransform<'a, B: DynBuffer + ?Sized> {
2497 pub buffer: &'a B,
2498 pub offset: u32,
2499}
2500
2501pub use wgt::AccelerationStructureFlags as AccelerationStructureBuildFlags;
2502pub use wgt::AccelerationStructureGeometryFlags;
2503
2504bitflags::bitflags! {
2505 #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
2506 pub struct AccelerationStructureUses: u8 {
2507 // For a BLAS used as input to a TLAS
2508 const BUILD_INPUT = 1 << 0;
2509 // Target for acceleration structure build
2510 const BUILD_OUTPUT = 1 << 1;
2511 // A TLAS used in a shader
2512 const SHADER_INPUT = 1 << 2;
2513 }
2514}
2515
2516#[derive(Debug, Clone)]
2517pub struct AccelerationStructureBarrier {
2518 pub usage: Range<AccelerationStructureUses>,
2519}