Bevy 0.6

The Lumberyard Bistro scene rendered in the New Bevy Renderer by @mockersf

Thanks to 170 contributors, 623 pull requests, and our generous sponsors, I’m happy to announce the Bevy 0.6 release on crates.io!

For those who don’t know, Bevy is a refreshingly simple data-driven game engine built in Rust. You can check out the Quick Start Guide to get started. Bevy is also free and open source forever! You can grab the full source code on GitHub. Check out Bevy Assets for a collection of community-developed plugins, games, and learning resources.

To update an existing Bevy App or Plugin to Bevy 0.6, check out our 0.5 to 0.6 Migration Guide.


There are a ton of improvements, bug fixes and quality of life tweaks in this release. Here are some of the highlights:

  • A brand new modern renderer that is prettier, faster, and simpler to extend
  • Directional and point light shadows
  • Clustered forward rendering
  • Frustum culling
  • Significantly faster sprite rendering with less boilerplate
  • Native WebGL2 support. You can test this out by running the Bevy Examples in your browser!
  • High level custom Materials
  • More powerful shaders: preprocessors, imports, WGSL support
  • Bevy ECS ergonomics and performance improvements. No more .system()!

Read on for details!

The New Bevy Renderer#

Bevy 0.6 introduces a brand new modern renderer that is:

  • Faster: More parallel, less computation per-entity, more efficient CPU->GPU dataflow, and (soon to be enabled) pipelined rendering
  • Prettier: We’re releasing the new renderer alongside a number of graphical improvements, such as directional and point light shadows, clustered forward rendering (so you can draw more lights in a scene), and spherical area lights. We also have a ton of new features in development (cascaded shadow maps, bloom, particles, shadow filters, and more!)
  • Simpler: Fewer layers of abstraction, simpler data flow, improved low-level, mid-level, and high-level interfaces, direct wgpu access
  • Modular to its core: Standardized 2d and 3d core pipelines, extensible Render Phases and Views, composable entity/component-driven draw functions, shader imports, extensible and repeatable render pipelines via “sub graphs”
  • Industry Proven: We’ve taken inspiration from battle tested renderer architectures, such as Bungie’s pipelined Destiny renderer. We also learned a lot from (and worked closely with) other renderer developers in the Rust space, namely @aclysma (rafx) and @cwfitzgerald (rend3). The New Bevy Renderer wouldn’t be what it is without them, and I highly recommend checking out their projects!

I promise I’ll qualify all of those fluffy buzzwords below. I am confident that the New Bevy Renderer will be a rallying point for the Bevy graphics ecosystem and (hopefully) the Rust graphics ecosystem at large. We still have plenty of work to do, but I’m proud of what we have accomplished so far and I’m excited for the future!

bistro day

Why build a new renderer?#

Before we cover what’s new, it’s worth discussing why we embarked on such a massive effort. The old Bevy Renderer got a number of things right:

  • Modular render logic (via the Render Graph)
  • Multiple backends (both first and third party)
  • High level data-driven API: this made it easy and ergonomic to write custom per-entity render logic

However, it also had a number of significant shortcomings:

  • Complex: The “high-level ease of use” came at the cost of significant implementation complexity, performance overhead, and invented jargon. Users were often overwhelmed when trying to operate at any level but “high-level”. When managing “render resources”, it was easy to do something “wrong” and hard to tell “what went wrong”.
  • Often slow: Features like “sprite rendering” were built on the costly high-level abstractions mentioned above. Performance was … suboptimal when compared to other options in the ecosystem.
  • User-facing internals: It stored a lot of internal render state directly on each entity. This took up space, computing the state was expensive, and it gunked up user-facing APIs with a bunch of “do not touch” render Components. This state (or at least, the component metadata) needed to be written to / read from Scenes, which was also suboptimal and error prone.
  • Repeating render logic was troublesome: Viewports, rendering to multiple textures / windows, and shadow maps were possible, but they required hard-coding, special casing, and boilerplate. This wasn’t aligned with our goals for modularity and clarity.

Why now?#

The shortcomings above were acceptable in Bevy’s early days, but were clearly holding us back as Bevy grew from a one person side project to the most popular Rust game engine on GitHub (and one of the most popular open source game engines … period). A “passable” renderer no longer cuts it when we have hundreds of contributors, a paid full-time developer, thousands of individual users, and a growing number of companies paying people to work on Bevy apps and features. It was time for a change.

For a deeper view into our decision-making and development process (including the alternatives we considered) check out the New Renderer Tracking Issue.

Pipelined Rendering: Extract, Prepare, Queue, Render#

authors: @cart

Pipelined Rendering is a cornerstone of the new renderer. It accomplishes a number of goals:

  • Increased Parallelism: We can now start running the main app logic for the next frame, while rendering the current frame. Given that rendering is often a bottleneck, this can be a huge win when there is also a lot of app work to do.
  • Clearer Dataflow and Structure: Pipelining requires drawing hard lines between “app logic” and “render logic”, with a fixed synchronization point (which we call the “extract” step). This makes it easier to reason about dataflow and ownership. Code can be organized along these lines, which improves clarity.

From a high level, traditional “non-pipelined rendering” looks like this:

non-pipelined rendering

Pipelined rendering looks like this:

pipelined rendering

Much better!

Bevy apps are now split into the Main App, which is where app logic occurs, and the Render App, which has its own separate ECS World and Schedule. The Render App consists of the following ECS stages, which developers add ECS Systems to when they are composing new render features (a short sketch follows the list):

  • Extract: This is the one synchronization point between the Main World and the Render World. Relevant Entities, Components, and Resources are read from the Main World and written to corresponding Entities, Components, and Resources in the Render World. The goal is to keep this step as quick as possible, as it is the one piece of logic that cannot run in parallel. It is a good rule of thumb to extract only the minimum amount of data needed for rendering, such as by only considering “visible” entities and only copying the relevant components.
  • Prepare: Extracted data is then “prepared” by writing it to the GPU. This generally involves writing to GPU Buffers and Textures and creating Bind Groups.
  • Queue: This “queues” render jobs that feed off of “prepared” data.
  • Render: This runs the Render Graph, which produces actual render commands from the results stored in the Render World from the Extract, Prepare, and Queue steps.
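As a rough illustration, here is what wiring systems into those stages can look like. This is a minimal sketch: Glow, extract_glow, and GlowPlugin are hypothetical names, and it assumes Bevy 0.6’s sub-app accessor and stage labels:

use bevy::prelude::*;
use bevy::render::{RenderApp, RenderStage};

// Hypothetical component we want to render.
#[derive(Component, Clone)]
struct Glow {
    strength: f32,
}

// Extract: runs against the Main World, while its Commands are applied to the
// Render World. Copy over only what rendering actually needs.
fn extract_glow(mut commands: Commands, query: Query<(Entity, &Glow)>) {
    for (entity, glow) in query.iter() {
        commands.get_or_spawn(entity).insert(glow.clone());
    }
}

struct GlowPlugin;

impl Plugin for GlowPlugin {
    fn build(&self, app: &mut App) {
        // The Render App is a separate sub-app with its own World and Schedule
        // (in 0.6, sub_app returns the sub-app mutably).
        app.sub_app(RenderApp)
            .add_system_to_stage(RenderStage::Extract, extract_glow);
    }
}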

So pipelined rendering actually looks more like this, with the next app update occurring after the extract step:

pipelined rendering stages

As a quick callout, pipelined rendering doesn’t actually happen in parallel yet. I have a branch with parallel pipelining enabled, but running app logic in a separate thread currently breaks “non send” resources (because the main app is moved to a separate thread, breaking non send guarantees). There will be a fix for this soon, I just wanted to get the new renderer in people’s hands as soon as possible! When we enable parallel pipelining, no user-facing code changes will be required.

Render Graphs and Sub Graphs#

authors: @cart

render graph

The New Bevy Renderer has a Render Graph, much like the old Bevy renderer. Render Graphs are a way to logically model GPU command construction in a modular way. Graph Nodes pass GPU resources like Textures and Buffers (and sometimes Entities) to each other, forming a directed acyclic graph. When a Graph Node runs, it uses its graph inputs and the Render World to construct GPU command lists.

The biggest change to this API is that we now support Sub Graphs, which are basically “namespaced” Render Graphs that can be run from any Node in the graph with arbitrary inputs (see the sketch after this list). This enables us to define things like a “2d” and “3d” sub graph, which users can insert custom logic into. This opens two doors simultaneously:

  • The ability to repeat render logic, but for different views (split screen, mirrors, rendering to a texture, shadow maps).
  • The ability for users to extend this repeated logic.
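For example, a custom Render Graph Node can drive a sub graph once per view. The sketch below assumes a hypothetical “mirror_graph” sub graph and MirrorViews resource; Node, RenderGraphContext, and SlotValue are the real render graph types:

use bevy::ecs::entity::Entity;
use bevy::ecs::world::World;
use bevy::render::render_graph::{Node, NodeRunError, RenderGraphContext, SlotValue};
use bevy::render::renderer::RenderContext;

// Hypothetical resource listing the views we want a "mirror" pass for.
struct MirrorViews {
    views: Vec<Entity>,
}

// A driver node that runs the same sub graph once per view, passing each view
// Entity in as the sub graph's input slot.
struct MirrorPassDriverNode;

impl Node for MirrorPassDriverNode {
    fn run(
        &self,
        graph: &mut RenderGraphContext,
        _render_context: &mut RenderContext,
        world: &World,
    ) -> Result<(), NodeRunError> {
        if let Some(mirror_views) = world.get_resource::<MirrorViews>() {
            for view in mirror_views.views.iter().copied() {
                graph.run_sub_graph("mirror_graph", vec![SlotValue::Entity(view)])?;
            }
        }
        Ok(())
    }
}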

Embracing wgpu#

authors: @cart

Bevy has always used wgpu, a native GPU abstraction layer with support for most graphics backends: Vulkan, Metal, DX12, OpenGL, WebGL2, and WebGPU (and WIP DX11 support). But the old renderer hid it behind our own hardware abstraction layer. In practice, this was largely just a mirror of the wgpu API. It gave us the ability to build our own graphics backends without bothering the wgpu folks, but in practice it created a lot of pain (due to being an imperfect mirror), overhead (due to introducing a dynamic API and requiring global mutex locks over GPU resource collections), and complexity (bevy_render -> wgpu -> Vulkan). In return, we didn’t get many practical benefits … just slightly more autonomy.

The truth of the matter is that wgpu already occupies exactly the space we want it to:

  • Multiple backends, with the goal to support as many platforms as possible
  • A “baseline” feature set that works almost everywhere with a consistent API
  • A “limits” and “features” system that enables opting in to arbitrary (sometimes backend-specific) features and detecting when those features are available. This will be important when we start adding things like raytracing and VR support.
  • A modern GPU API, but without the pain and complexity of raw Vulkan. Perfect for user-facing Bevy renderer extensions.

However, initially there were a couple of reasons not to make it our “public facing API”:

  • Complexity: wgpu used to be built on top of gfx-hal (an older GPU abstraction layer also built and managed by the wgpu team). These multiple layers of abstraction in multiple repos made contributing to and reasoning about the internals difficult. Additionally, I have a rule for “3rd party dependencies publicly exposed in Bevy APIs”: we must feel comfortable forking and maintaining them if we need to (ex: upstream stops being maintained, visions diverge, etc). I wasn’t particularly comfortable with doing that with the old architecture.
  • Licensing: wgpu used to be licensed under the “copyleft” MPL license, which created concerns about integration with proprietary graphics apis (such as consoles like the Switch).
  • WebGL2 Support: wgpu used to not have a WebGL2 backend. Bevy’s old renderer had a custom WebGL2 backend and we weren’t willing to give up support for the Web as a platform.

Almost immediately after we voiced these concerns, @kvark kicked off a relicensing effort that switched wgpu to the Rust-standard dual MIT/Apache-2.0 license. They also removed gfx-hal in favor of a much simpler and flatter architecture. Soon after, @zicklag added a WebGL2 backend. Having resolved all of my remaining hangups, it was clear to me that @kvark’s priorities were aligned with mine and that I could trust them to adjust to community feedback.

The New Bevy Renderer tosses out our old intermediate GPU abstraction layer in favor of using wgpu directly as our “low-level” GPU api. The result is a simpler (and faster) architecture with full and direct access to wgpu. Feedback from Bevy Renderer feature developers so far has been very positive.

Bevy was also updated to use the latest and greatest wgpu version: 0.12.

ECS-Driven Rendering#

authors: @cart

The new renderer is what I like to call “ECS-driven”:

  • As we covered previously, the Render World is populated using data Extracted from the Main World.
  • Scenes are rendered from one or more Views, which are just Entities in the Render World with Components relevant to that View. View Entities can be extended with arbitrary Components, which makes it easy to extend the renderer with custom View data and logic. Cameras aren’t the only type of View. Views can be defined by the Render App for arbitrary concepts, such as “shadow map perspectives”.
  • Views can have zero or more generic RenderPhase<T> Components, where T defines the “type and scope” of thing being rendered in the phase (ex: “transparent 3d entities in the main pass”). At its core, a RenderPhase<T> is a (potentially sorted) list of Entities to be drawn.
  • Entities in a RenderPhase are drawn using DrawFunctions, which read ECS data from the Render World and produce GPU commands.
  • DrawFunctions can (optionally) be composed of modular DrawCommands. These are generally scoped to specific actions like SetStandardMaterialBindGroup, DrawMesh, SetItemPipeline, etc. Bevy provides a number of built-in DrawCommands and users can also define their own.
  • Render Graph Nodes convert a specific View’s RenderPhases into GPU commands by iterating each RenderPhase’s Entities and running the appropriate Draw Functions.

If that seems complicated … don’t worry! These are what I like to call “mid-level” renderer APIs. They provide the necessary tools for experienced render feature developers to build modular render plugins with relative ease. We also provide easy to use high-level APIs like Materials, which cover the majority of “custom shader logic” use cases.
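To make the DrawCommand composition concrete, here is a sketch in the style of Bevy’s built-in PBR draw function, assembled from the DrawCommands named above (treat the exact tuple as illustrative rather than verbatim source):

// A DrawFunction composed of modular DrawCommands. Each element runs in order;
// the const generics are the bind group indices the commands bind to.
type DrawPbr = (
    SetItemPipeline,                 // bind the (specialized) render pipeline
    SetMeshViewBindGroup<0>,         // per-view data: camera, lights
    SetStandardMaterialBindGroup<1>, // material textures and uniforms
    SetMeshBindGroup<2>,             // per-mesh transform
    DrawMesh,                        // issue the actual draw call
);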

Bevy’s Core Pipeline#

authors: @cart, Rob Swain (@superdump), @KirmesBude, @mockersf

The new renderer is very flexible and unopinionated by default. However, too much flexibility isn’t always desirable. We want a rich Bevy renderer plugin ecosystem where developers have enough freedom to implement what they want, while still maximizing compatibility across plugins.

The new bevy_core_pipeline crate is our answer to this problem. It defines a “core” set of Views / Cameras (2d and 3d), Sub Graphs (ClearPass, MainPass2d, MainPass3d), and Render Phases (Transparent2d, Opaque3d, AlphaMask3d, Transparent3d). This provides a “common ground” for render feature developers to build on while still maintaining compatibility with each other. As long as developers operate within these constraints, they should be compatible with the wider ecosystem. Developers are also free to operate outside these constraints, but that also increases the likelihood that they will be incompatible.

Bevy’s built-in render features build on top of the Core Pipeline (ex: bevy_sprite and bevy_pbr). The Core Pipeline will continue to expand with things like a standardized “post-processing” effect stack.

Materials#

authors: @cart

The new renderer structure gives developers fine-grained control over how entities are drawn. Developers can manually define Extract, Prepare, and Queue systems to draw entities using arbitrary render commands in custom or built-in RenderPhases. However, this level of control necessitates understanding the render pipeline internals and involves more boilerplate than most users are willing to tolerate. Sometimes all you want to do is slot your custom material shader into the existing pipelines!

The new Material trait enables users to ignore nitty gritty details in favor of a simpler interface: just implement the Material trait and add a MaterialPlugin for your type. The new shader_material.rs example illustrates this.

app.add_plugin(MaterialPlugin::<CustomMaterial>::default())

impl Material for CustomMaterial {
    fn fragment_shader(asset_server: &AssetServer) -> Option<Handle<Shader>> {
        Some(asset_server.load("shaders/custom_material.wgsl"))
    }

    fn bind_group_layout(render_device: &RenderDevice) -> BindGroupLayout {
        // describe the material's bind group layout (textures, uniforms, etc) here
    }

    fn bind_group(render_asset: &<Self as RenderAsset>::PreparedAsset) -> &BindGroup {
        // return the BindGroup created when this material asset was prepared
    }
}

There is also a SpecializedMaterial variant, which enables “specializing” shaders and pipelines using custom per-entity keys. This extra flexibility isn’t always needed, but when you need it, you will be glad to have it! For example, the built-in StandardMaterial uses specialization to toggle whether or not the Entity should receive lighting in the shader.

We also have big plans to make Materials even better:

  • Bind Group derives: this should cut down on the boilerplate of passing materials to the GPU.
  • Material Instancing: materials enable us to implement high-level mesh instancing as a simple configuration item for both built in and custom materials.

Visibility and Frustum Culling#

authors: Rob Swain (@superdump)

view frustum

Drawing things is expensive! It requires writing data from the CPU to the GPU, constructing draw calls, and running shaders. We can save a lot of time by not drawing things that the camera can’t see. “Frustum culling” is the act of excluding objects that are outside the bounds of the camera’s “view frustum”, to avoid wasting work drawing them. For large scenes, this can be the difference between a crisp 60 frames per second and chugging to a grinding halt.

Bevy 0.6 now automatically does frustum culling for 3d objects using their axis-aligned bounding boxes. We might also enable this for 2d objects in future releases, but the wins there will be less pronounced, as drawing sprites is now much cheaper thanks to the new batched rendering.

Directional Shadows#

authors: Rob Swain (@superdump)

Directional Lights can now cast “directional shadows”, which are “sun-like” shadows cast from a light source infinitely far away. These can be enabled by setting DirectionalLight::shadows_enabled to true.

directional light

Note: directional shadows currently require more manual configuration than necessary (ex: manual configuration of the shadow projection). We will soon make this automatic and better quality over a larger range through cascaded shadow maps.
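A minimal sketch of spawning a shadow-casting directional light (keeping the default shadow projection, which, per the note above, you may want to tune):

use bevy::prelude::*;

fn spawn_sun(mut commands: Commands) {
    commands.spawn_bundle(DirectionalLightBundle {
        directional_light: DirectionalLight {
            // opt this light into casting shadows
            shadows_enabled: true,
            ..Default::default()
        },
        ..Default::default()
    });
}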

Point Light Shadows#

authors: @mtsr, Rob Swain (@superdump), @cart

Point lights can now cast “omnidirectional shadows”, which can be enabled by setting PointLight::shadows_enabled to true:

point light
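A minimal sketch of spawning a shadow-casting point light with the standard PointLightBundle:

use bevy::prelude::*;

fn spawn_lamp(mut commands: Commands) {
    commands.spawn_bundle(PointLightBundle {
        point_light: PointLight {
            shadows_enabled: true,
            ..Default::default()
        },
        transform: Transform::from_xyz(1.0, 4.0, 2.0),
        ..Default::default()
    });
}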

Enabling and Disabling Entity Shadows#

authors: Rob Swain (@superdump)

Mesh entities can opt out of casting shadows by adding the NotShadowCaster component:

commands.entity(entity).insert(NotShadowCaster);

Likewise, they can opt out of receiving shadows by adding the NotShadowReceiver component:

commands.entity(entity).insert(NotShadowReceiver);

Spherical Area Lights#

authors: @Josh015

PointLight Components can now define a radius value, which controls the size of the sphere that emits light. A normal zero-sized “point light” has a radius of zero.

spherical area lights

(Note that lights with a radius don’t normally take up physical space in the world … I added meshes to help illustrate light position and size)
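Configuring this is a single field on PointLight (a minimal sketch; the radius value is arbitrary):

use bevy::prelude::*;

fn spawn_area_light(mut commands: Commands) {
    commands.spawn_bundle(PointLightBundle {
        point_light: PointLight {
            // radius > 0.0 makes this a spherical area light
            radius: 0.25,
            ..Default::default()
        },
        ..Default::default()
    });
}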

Configurable Alpha Blend Modes#

authors: Rob Swain (@superdump)

Bevy’s StandardMaterial now has an alpha_mode field, which can be set to AlphaMode::Opaque, AlphaMode::Mask(f32), or AlphaMode::Blend. This field is properly set when loading GLTF scenes.

alpha blend modes
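For example, a translucent material can be created like this (a minimal sketch; the color and mesh are arbitrary):

use bevy::prelude::*;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.spawn_bundle(PbrBundle {
        mesh: meshes.add(Mesh::from(shape::Cube { size: 1.0 })),
        material: materials.add(StandardMaterial {
            base_color: Color::rgba(0.4, 0.7, 1.0, 0.5),
            // Blend: drawn in the transparent phase with alpha blending
            alpha_mode: AlphaMode::Blend,
            ..Default::default()
        }),
        ..Default::default()
    });
}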

Clustered Forward Rendering#

authors: Rob Swain (@superdump)

Modern scenes often have many point lights. But when rendering scenes, calculating lighting for each light, for each rendered fragment rapidly becomes prohibitively expensive as the number of lights in the scene increases. Clustered Forward Rendering is a popular approach that increases the number of lights you can have in a scene by dividing up the view frustum into “clusters” (a 3d grid of sub-volumes). Each cluster is assigned lights based on whether they can affect that cluster. This is a form of “culling” that enables fragments to ignore lights that aren’t assigned to their cluster.

In practice this can significantly increase the number of lights in the scene:

clustered forward rendering

Clusters are 3d subdivisions of the view frustum. They are cuboids in projected space so for a perspective projection, they are stretched and skewed in view space. When debugging them in screen space, you are looking along a row of clusters and so they look like squares. Different colors within a square represent mesh surfaces being at different depths in the scene and so they belong to different clusters:

clusters

The current implementation is limited to at most 256 lights as we initially prioritized cross-platform compatibility so that everyone could benefit. WebGL2 specifically does not support storage buffers and so the implementation is currently constrained by the maximum uniform buffer size. We can support many more lights on other platforms by using storage buffers, which we will add support for in a future release.

Click here for a video that illustrates Bevy’s clustered forward rendering.

Sprite Batching#

authors: @cart

Sprites are now rendered in batches according to their texture within a z-level. They are also opportunistically batched across z-levels. This yields significant performance wins because it drastically reduces the number of draw calls required. Combine that with the other performance improvements in the new Bevy Renderer and things start to get very interesting! On my machine, the old Bevy renderer generally started dropping below 60fps at around 8,000 sprites in our “bevymark” benchmark. With the new renderer on that same machine I can get about 100,000 sprites!

bevymark

My machine: Nvidia GTX 1070, Intel i7 7700k, 16GB ram, Arch Linux

Sprite Ergonomics#

Sprite entities are now simpler to spawn:

fn spawn_sprite(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn_bundle(SpriteBundle {
        texture: asset_server.load("player.png"),
        ..Default::default()
    });
}

No need to manage sprite materials! Their texture handle is now a direct component, and color can now be set directly on the Sprite component.

To compare, here is the old Bevy 0.5 code:
fn spawn_sprite(
    mut commands: Commands,
    asset_server: Res<AssetServer>,
    mut materials: ResMut<Assets<ColorMaterial>>,
) {
    let texture_handle = asset_server.load("player.png");
    commands.spawn_bundle(SpriteBundle {
        material: materials.add(texture_handle.into()),
        ..Default::default()
    });
}

WGSL Shaders#

Bevy now uses WGSL for our built-in shaders and examples. WGSL is a new shader language being developed for WebGPU (although it is a “cross platform” shader language just like GLSL). Bevy still supports GLSL shaders, but WGSL is nice enough that, for now, we are treating it as our “officially recommended” shader language. WGSL is still being developed and polished, but given how much investment it is receiving I believe it is worth betting on. Consider this the start of the “official Bevy shader language” conversation, not the end of it.

[[group(0), binding(0)]]
var<uniform> view: View;

[[group(1), binding(0)]]
var<uniform> mesh: Mesh;

struct Vertex {
    [[location(0)]] position: vec3<f32>;
};

struct VertexOutput {
    [[builtin(position)]] clip_position: vec4<f32>;
};

[[stage(vertex)]]
fn vertex(vertex: Vertex) -> VertexOutput {
    var out: VertexOutput;
    out.clip_position = view.view_proj * mesh.model * vec4<f32>(vertex.position, 1.0);
    return out;
}

Shader Preprocessor#

authors: @cart, Rob Swain (@superdump), @mockersf

Bevy now has its own custom shader preprocessor. It currently supports #import, #ifdef FOO, #ifndef FOO, #else, and #endif, but we will be expanding it with more features to enable simple, flexible shader code reuse and extension.

Shader preprocessors are often used to conditionally enable shader code:

#ifdef TEXTURE
[[group(1), binding(0)]]
var sprite_texture: texture_2d<f32>;
#endif

This pattern is very useful when defining complicated / configurable shaders (such as Bevy’s PBR shader).

Shader Imports#

authors: @cart

The new preprocessor supports importing other shader files (which pulls in their entire contents). This comes in two forms:

Asset path imports:

#import "shaders/cool_function.wgsl"

[[stage(fragment)]]
fn fragment(input: VertexOutput) -> [[location(0)]] vec4<f32> {
    return cool_function();
}

Plugin-provided imports, which can be registered by Bevy Plugins with arbitrary paths:

#import bevy_pbr::mesh_view_bind_group

[[stage(vertex)]]
fn vertex(vertex: Vertex) -> VertexOutput {
    let world_position = vec4<f32>(vertex.position, 1.0);
    var out: VertexOutput;
    out.clip_position = view.view_proj * world_position;
    return out;
}

We also plan to experiment with using Naga for “partial imports” of specific, named symbols (ex: import a specific function or struct from a file). It’s a ‘far out’ idea, but this could also enable using Naga’s intermediate shader representation as a way of combining pieces of shader code written in different languages into one shader.

Pipeline Specialization#

authors: @cart

When shaders use a preprocessor and have multiple permutations, the associated “render pipeline” needs to be updated to accommodate those permutations (ex: different Vertex Attributes, Bind Groups, etc). To make this process straightforward, we added the SpecializedPipeline trait, which allows defining specializations for a given key:

impl SpecializedPipeline for MyPipeline {
    type Key = MyPipelineKey;

    fn specialize(&self, key: Self::Key) -> RenderPipelineDescriptor {
        // build the RenderPipelineDescriptor for this permutation, e.g. by
        // toggling shader defs and vertex attributes based on `key`
    }
}

Implementors of this trait can then easily and cheaply access specialized pipeline variants (with automatic per-key caching and hot-reloading). If this feels too abstract / advanced, don’t worry! This is a “mid-level power-user tool”, not something most Bevy App developers need to contend with.
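To show where this fits, here is a hedged sketch of a queue-stage system using the MyPipeline / MyPipelineKey types above, assuming Bevy 0.6’s SpecializedPipelines and RenderPipelineCache resources (compute_key_for_entity is a hypothetical helper):

use bevy::prelude::*;
use bevy::render::render_resource::{RenderPipelineCache, SpecializedPipelines};

fn queue_my_items(
    my_pipeline: Res<MyPipeline>,
    mut pipelines: ResMut<SpecializedPipelines<MyPipeline>>,
    mut pipeline_cache: ResMut<RenderPipelineCache>,
) {
    // Hypothetical helper that derives a key from the thing being queued.
    let key = compute_key_for_entity();
    // Cheap on repeat calls: variants are cached per-key behind the scenes.
    let pipeline_id = pipelines.specialize(&mut pipeline_cache, &my_pipeline, key);
    // ... store `pipeline_id` on the phase item so its draw function can bind it
}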

Simpler Shader Stack#

Bevy now uses Naga for all of its shader needs. As a result, we were able to remove all of our complicated non-Rust shader dependencies: glsl_to_spirv, shaderc, and spirv_reflect. glsl_to_spirv was a major producer of platform-specific build dependencies and bugs, so this is a huge win!

Features Ported to the New Renderer#

Render logic for internal Bevy crates had to be rewritten in a number of cases to take advantage of the new renderer. The following people helped with this effort:

  • bevy_sprite: @cart, @StarArawn, @Davier
  • bevy_pbr: Rob Swain (@superdump), @aevyrie, @cart, @zicklag, @jakobhellermann
  • bevy_ui: @Davier
  • bevy_text: @Davier
  • bevy_gltf: Rob Swain (@superdump)

WebGL2 Support#

authors: @zicklag, @mrk-its, @mockersf, Rob Swain (@superdump)

Bevy now has built-in support for deploying to the web using WebGL2 / WASM, thanks to @zicklag adding a native WebGL2 backend to wgpu. There is now no need for the third party bevy_webgl2 plugin. Any Bevy app can be deployed to the web by running the following commands:

cargo build --target wasm32-unknown-unknown
wasm-bindgen --out-dir OUTPUT_DIR --target web TARGET_DIR

The New Bevy Renderer developers prioritized cross-platform compatibility for the initial renderer feature implementation and so had to carefully operate within the limits of WebGL2 (ex: storage buffers and compute shaders aren’t supported in WebGL2), but the results were worth it! Over time, features will be implemented that leverage more modern/advanced features such as compute shaders. But it is important to us that everyone has access to a solid visual experience for their games and applications regardless of their target platform(s).

You can try out Bevy’s WASM support in your browser using our new Bevy Examples page:

wasm bevy examples

Infinite Reverse Z Perspective Projection#

authors: Rob Swain (@superdump)

For improved precision in the “useful range”, the industry has largely adopted “reverse projections” with an “infinite” far plane. The new Bevy renderer was adapted to use the “right-handed infinite reverse z” projection. This Nvidia article does a great job of explaining why this is so worthwhile.

Compute Shaders#

The new renderer makes it possible for users to write compute shaders. Our new “compute shader game of life” example (by @jakobhellermann) illustrates how to write compute shaders in Bevy.

compute game of life

New Multiple Windows Example#

authors: @DJMcNab

The “multiple windows” example has been updated to use the new renderer APIs. Thanks to the new renderer APIs, this example is now much nicer to look at (and will look even nicer when we add high-level Render Targets).

multiple windows

Crevice#

authors: @cart, @mockersf, Rob Swain (@superdump)

Bevy’s old Bytes abstraction has been replaced with a fork of the crevice crate (by @LPGhatguy), which makes it possible to write normal Rust types to GPU-friendly data layouts. Namely std140 (uniform buffers default to this layout) and std430 (storage buffers default to this layout). Bevy exports AsStd140 and AsStd430 derives:

#[derive(AsStd140)]
pub struct MeshUniform {
    pub transform: Mat4,
    pub inverse_transpose_model: Mat4,
}

Coupling an AsStd140 derive with our new UniformVec type makes it easy to write Rust types to shader-ready uniform buffers:

struct Mesh {
    model: mat4x4<f32>;
    inverse_transpose_model: mat4x4<f32>;
};

[[group(2), binding(0)]]
var<uniform> mesh: Mesh;

We (in the short term) forked crevice for a couple of reasons:

  • To merge the Array Support PR by @ElectronicRU, as we need support for arrays in our uniforms.
  • To re-export the crevice derives and provide an “out of the box” experience for Bevy.

Ultimately, we’d like to move back upstream if possible. A big thanks to the crevice developers for building such useful software!

UV Sphere Mesh Shape#

authors: @nside

Bevy now has a built-in “uv sphere” mesh primitive.

Mesh::from(UVSphere {
    radius: 1.0,
    sectors: 16,
    stacks: 32,
})

uv sphere

Flat Normal Computation#

authors: @jakobhellermann

The Mesh type now has a compute_flat_normals() function. Imported GLTF meshes without normals now automatically have flat normals computed, in accordance with the GLTF spec.

flat normals

Faster GLTF Loading#

authors: @DJMcNab, @mockersf

@DJMcNab fixed nasty non-linear loading of GLTF nodes, which made them load much faster. One complicated scene went from 40 seconds to 0.2 seconds. Awesome!

@mockersf made GLTF textures load asynchronously in Bevy’s “IO task pool”, which almost halved GLTF scene load times in some cases.

We are also in the process of adding “compressed texture loading”, which will substantially speed up GLTF scene loading, especially for large scenes!

Bevy ECS#

No more .system()!#

authors: @DJMcNab, @Ratysz

One of our highest priorities for Bevy ECS is “ergonomics”. In the past I have made wild claims that Bevy ECS is the most ergonomic ECS in existence. We’ve spent gratuitous amounts of R&D pioneering new API techniques and I believe the results speak for themselves:

fn main() {
    App::build()
        .add_plugins(DefaultPlugins)
        .add_system(gravity.system())
        .run();
}

fn gravity(time: Res<Time>, mut query: Query<&mut Transform>) {
    for mut transform in query.iter_mut() {
        transform.translation.y += -9.8 * time.delta_seconds();
    }
}

I believe we were already the best in the market by a wide margin (especially if you take into account our automatic parallelization and change detection), but we had one thing holding us back from perfection … that pesky .system()! We’ve tried removing it a number of times, but due to rustc limitations and safety issues, it always eluded us. Finally, @DJMcNab found a solution. As a result, in Bevy 0.6 you can now register the system above like this:

App::new()
    .add_plugins(DefaultPlugins)
    .add_system(gravity)
    .run();

The New Component Trait and #[derive(Component)]#

authors: @Frizi

In Bevy 0.6, types no longer implement the Component trait by default. Before you get angry … stick with me for a second. I promise this is for the best! In past Bevy versions, we got away with “auto implementing” Component for types using this “blanket impl”:

impl<T: Send + Sync + 'static> Component for T {}

This removed the need to implement Component manually for each type.
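The replacement, named in the section heading, is an explicit opt-in via a derive:

use bevy::prelude::*;

// Components now opt in explicitly.
#[derive(Component)]
struct Health {
    hit_points: u32,
}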
