// wgpu/api/pipeline_cache.rs

use crate::*;

/// Handle to a pipeline cache, which is used to accelerate
/// creating [`RenderPipeline`]s and [`ComputePipeline`]s
/// in subsequent executions.
///
/// This reuse is only applicable for the same or similar devices.
/// See [`util::pipeline_cache_key`] for some details.
///
/// # Background
///
/// In most GPU drivers, shader code must be converted into machine code
/// that can be executed on the GPU.
/// Generating this machine code can require a lot of computation.
/// Pipeline caches allow this computation to be reused between executions
/// of the program.
/// This can be very useful for reducing program startup time.
///
/// Note that most desktop GPU drivers will manage their own caches,
/// meaning that little advantage can be gained from this on those platforms.
/// However, on some platforms, especially Android, drivers leave this to the
/// application to implement.
///
/// Unfortunately, drivers do not expose whether they manage their own caches.
/// Some reasonable policies for applications to use are:
/// - Manage their own pipeline cache on all platforms
/// - Only manage pipeline caches on Android
///
/// # Usage
///
/// It is valid to use this resource when creating multiple pipelines, in
/// which case it will likely cache each of those pipelines.
/// It is also valid to create a new cache for each pipeline.
///
/// This resource is most useful when the data produced from it (using
/// [`PipelineCache::get_data`]) is persisted.
/// Care should be taken that pipeline caches are only used for the same device,
/// as pipeline caches from incompatible devices are unlikely to provide any advantage.
/// `util::pipeline_cache_key` can be used as a file/directory name to help ensure that.
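///
/// For example, restoring a cache persisted from a previous run might look
/// roughly like this (a sketch: the file name is a placeholder and error
/// handling is elided):
///
/// ```no_run
/// # fn example(device: &wgpu::Device) {
/// // Load previously persisted cache data, if any.
/// let data: Option<Vec<u8>> = std::fs::read("pipeline_cache.bin").ok();
/// // SAFETY: the data must have come from `PipelineCache::get_data` on a
/// // compatible device; `fallback: true` tolerates unusable data.
/// let cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: Some("pipeline cache"),
///         data: data.as_deref(),
///         fallback: true,
///     })
/// };
/// // Pass `Some(&cache)` as the `cache` field of each
/// // `RenderPipelineDescriptor`/`ComputePipelineDescriptor` built with it.
/// # }
/// ```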
///
/// It is recommended to store pipeline caches atomically. If persisting to disk,
/// this can usually be achieved by creating a temporary file, then moving/[renaming]
/// the temporary file over the existing cache.
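///
/// A sketch of such an atomic update (the file names here are illustrative):
///
/// ```no_run
/// # fn save(data: &[u8]) -> std::io::Result<()> {
/// let tmp = "pipeline_cache.bin.tmp";
/// std::fs::write(tmp, data)?;
/// // `rename` replaces the destination atomically on most platforms, so
/// // readers never observe a partially written cache.
/// std::fs::rename(tmp, "pipeline_cache.bin")?;
/// # Ok(())
/// # }
/// ```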
///
/// # Storage Usage
///
/// There is not currently an API available to reduce the size of a cache.
/// This is due to limitations in the underlying graphics APIs used.
/// This is especially impactful if your application is being updated, as
/// entries for pipelines from previous versions linger in the cache even
/// though they are no longer used.
///
/// One option to work around this is to regenerate the cache.
/// That is, create the pipelines which your program uses with the stored
/// cache data, then recreate the *same* pipelines using a new cache,
/// which your application then stores.
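///
/// A rough sketch of that regeneration flow (names are illustrative and
/// error handling is elided):
///
/// ```no_run
/// # fn regenerate(device: &wgpu::Device, old_data: &[u8]) -> Option<Vec<u8>> {
/// // Warm cache from the previous run; used to build pipelines quickly.
/// let old_cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: Some("old cache"),
///         data: Some(old_data),
///         fallback: true,
///     })
/// };
/// // Fresh, empty cache; only pipelines created this run end up in it.
/// let new_cache = unsafe {
///     device.create_pipeline_cache(&wgpu::PipelineCacheDescriptor {
///         label: Some("new cache"),
///         data: None,
///         fallback: true,
///     })
/// };
/// // Create each pipeline once with `old_cache` (fast) and once with
/// // `new_cache` (populating the replacement), then persist the new data.
/// new_cache.get_data()
/// # }
/// ```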
///
/// # Implementations
///
/// This resource currently only works on the following backends:
///  - Vulkan
///
/// This type is unique to the Rust API of `wgpu`.
///
/// [renaming]: std::fs::rename
#[derive(Debug, Clone)]
pub struct PipelineCache {
    pub(crate) inner: dispatch::DispatchPipelineCache,
}

#[cfg(send_sync)]
static_assertions::assert_impl_all!(PipelineCache: Send, Sync);

crate::cmp::impl_eq_ord_hash_proxy!(PipelineCache => .inner);

impl PipelineCache {
    /// Get the data associated with this pipeline cache.
    /// The data format is an implementation detail of `wgpu`.
    /// The only defined operation on this data is setting it as the `data` field
    /// on [`PipelineCacheDescriptor`], which is then passed to [`Device::create_pipeline_cache`].
    ///
    /// This function is unique to the Rust API of `wgpu`.
    pub fn get_data(&self) -> Option<Vec<u8>> {
        self.inner.get_data()
    }
}