Port wgpu utils and render pipeline to bevy #954

Closed
tychedelia opened this issue Jan 18, 2024 · 7 comments · Fixed by #960
@tychedelia (Collaborator)

As a first step in getting the draw api working for #953, we need to define nannou's wgpu infrastructure in terms of bevy's mid-level render api. The closest examples are the 3d gizmos pipeline or the manual 2d mesh example.

Our goal for this ticket should be to submit some raw vertex data with attributes to the bevy render world to be drawn. We won't worry about cameras/windowing/etc just yet except to get an example working.

Many of the wgpu utilities will need to be refactored to target bevy's wgpu wrapper types, but otherwise should be able to be converted mostly in place.

The mid-level render api is mostly an ecs dependency-injectified version of our existing render code. We should be able to use a lot of the boilerplate and the existing shaders as is.
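
To make that concrete, here's roughly the shape I'm imagining for the first step (sketch only, against the Bevy 0.12-ish API; DrawCommands, ExtractedDraw, and NannouRenderPlugin are made-up names, not existing nannou types):

use bevy::prelude::*;
use bevy::render::{Extract, ExtractSchedule, RenderApp};

// Made-up stand-in for whatever per-frame vertex data the draw api
// produces in the main world.
#[derive(Resource, Default, Clone)]
struct DrawCommands {
    positions: Vec<[f32; 3]>,
    colors: Vec<[f32; 4]>,
}

#[derive(Resource, Default)]
struct ExtractedDraw(DrawCommands);

struct NannouRenderPlugin;

impl Plugin for NannouRenderPlugin {
    fn build(&self, app: &mut App) {
        app.init_resource::<DrawCommands>();
        // Everything past extraction lives in the render (sub) app.
        let render_app = app.sub_app_mut(RenderApp);
        render_app
            .init_resource::<ExtractedDraw>()
            .add_systems(ExtractSchedule, extract_draw);
    }
}

// Copy this frame's vertex data from the main world into the render world,
// where a prepare/queue system (not shown) would turn it into GPU buffers.
fn extract_draw(mut extracted: ResMut<ExtractedDraw>, draw: Extract<Res<DrawCommands>>) {
    extracted.0.positions = draw.positions.clone();
    extracted.0.colors = draw.colors.clone();
}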

@tychedelia (Collaborator, Author)

Doing a little more research — seems like we'll also need to implement a node in the render graph so we can manage some features of the render pass itself. There's an example of that here, but the main 3d pipeline source in the engine itself may be a better resource.

@tychedelia (Collaborator, Author) commented Jan 19, 2024

Have started doing a poc of directly porting the existing rendering algorithm into bevy's mid-level render api. Most of the existing code maps pretty directly to the different pieces of bevy's api, but there's a bit of complexity when it comes to rendering the view within bevy's existing render graph. Namely, it requires using bevy's camera system (i.e. camera = view).

In my previous comment (#954 (comment)) I mentioned that we might need to implement our own render node, but going this far basically means we live entirely outside of bevy's renderer, and I'm concerned it will make it more difficult to take advantage of features like windowing. It also may lead to strange interactions if users want to use both our draw api and bevy's mesh api.

One option I'm exploring is to just use bevy's orthographic camera and hook into their view uniform for our shaders. This is pretty straightforward, but may mean we also need to do things like spawn lights, etc.

Another alternative is to explore just using bevy's existing pbr mesh pipeline. A simple example of what this might look like:

use bevy::prelude::*;
use bevy::render::mesh::Indices;
use bevy::render::render_resource::PrimitiveTopology;

fn setup(
    mut commands: Commands,
    mut meshes: ResMut<Assets<Mesh>>,
    mut materials: ResMut<Assets<StandardMaterial>>,
) {
    commands.insert_resource(AmbientLight {
        color: Color::WHITE,
        brightness: 1.0,
    });
    commands.spawn(Camera3dBundle {
        transform: Transform::from_xyz(0.0, 0.0, -10.0).looking_at(Vec3::ZERO, Vec3::Z),
        projection: OrthographicProjection {
            ..Default::default()
        }
            .into(),
        ..Default::default()
    });

    let tris = vec![
        Vec3::new(-5.0, -5.0, 0.0).to_array(),
        Vec3::new(-5.0, 5.0, 0.0).to_array(),
        Vec3::new(5.0, 5.0, 0.0).to_array(),
    ];
    let indices = vec![0, 1, 2];
    let colors = vec![
        Color::RED.as_linear_rgba_f32(),
        Color::RED.as_linear_rgba_f32(),
        Color::RED.as_linear_rgba_f32(),
    ];
    let uvs = vec![Vec2::new(1.0, 0.0); 3];
    let mesh = Mesh::new(PrimitiveTopology::TriangleList)
        .with_inserted_attribute(Mesh::ATTRIBUTE_POSITION, tris)
        .with_inserted_attribute(Mesh::ATTRIBUTE_COLOR, colors)
        .with_inserted_attribute(Mesh::ATTRIBUTE_UV_0, uvs)
        .with_indices(Some(Indices::U32(indices)));

    println!("{:?}", mesh);
    let mesh_handle = meshes.add(mesh);

    commands.spawn(PbrBundle {
        mesh: mesh_handle,
        // the pbr shader will multiply our vertex color by this, so we just want white
        material: materials.add(Color::rgb(1.0, 1.0, 1.0).into()),
        transform: Transform::from_xyz(0.0, 0.0, 0.0),
        ..default()
    });
}

The issue here is that we either need to cache geometry or clear the meshes every frame. This may or may not be a big deal but bevy definitely doesn't assume that meshes are drawn in a kind of immediate mode.
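
For example, the immediate-mode-ish version of this would be something like overwriting the mesh asset's attributes every frame (sketch only, same imports as the snippet above; DrawMesh is a made-up marker component for the mesh spawned in setup, and the triangle is a placeholder for whatever the draw api produced that frame):

// Made-up marker so we can find the mesh handle we spawned in setup.
#[derive(Component)]
struct DrawMesh;

fn rebuild_mesh(
    mut meshes: ResMut<Assets<Mesh>>,
    query: Query<&Handle<Mesh>, With<DrawMesh>>,
) {
    for handle in &query {
        if let Some(mesh) = meshes.get_mut(handle) {
            // Whatever geometry the draw api produced this frame; a single
            // placeholder triangle here.
            let positions = vec![[-5.0f32, -5.0, 0.0], [-5.0, 5.0, 0.0], [5.0, 5.0, 0.0]];
            let colors = vec![[1.0f32, 0.0, 0.0, 1.0]; 3];
            // Replace last frame's geometry wholesale.
            mesh.insert_attribute(Mesh::ATTRIBUTE_POSITION, positions);
            mesh.insert_attribute(Mesh::ATTRIBUTE_COLOR, colors);
            mesh.set_indices(Some(Indices::U32(vec![0, 1, 2])));
        }
    }
}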

I think it's worth trying to complete an as-is port of the renderer to the mid-level bevy api just to see what it looks like, but my experience so far is definitely generating more questions. Ultimately, seeing actual code will probably help clarify!

TL;DR:

  1. Can we get away with just using bevy's high-level pbr mesh api? What would need to change in our draw api to do so?
  2. If we want to use their mid-level render api, how can we interact with the parts of bevy we want to use (windowing, input, etc.)?
  3. Will it be possible to support both our draw api and users doing arbitrary bevy ops, spawning meshes, etc.?

@tychedelia self-assigned this Jan 20, 2024
@mitchmindtree (Member)

In my previous comment (#954 (comment)) I mentioned that we might need to implement our own render node, but going this far basically means we live entirely outside of bevy's renderer,

I love the idea of attempting to use bevy's camera and fitting the Draw API in at the highest level possible in order to work nicely alongside other bevy code, but I wouldn't be too surprised if it turns out we do need to target some mid- or lower-level API instead, due to the way that Draw kind of builds a list of "commands" that translate to fairly low-level GPU commands (e.g. switching pipelines depending on blend modes, setting different bind groups, changing the scissor, etc).

and I'm concerned will make it more difficult to take advantage of features like windowing.

True, one thing that comes to mind is that today by default we target an intermediary texture for each window (rather than targeting the swapchain texture from the draw pipeline directly) where the idea is that we can re-use the intermediary texture between frames 1. for that processing-style experience of drawing onto the same canvas and 2. for the larger colour channel bit-depth. I wonder if enough bevy parts are exposed to allow us to have a similar setup as a plugin 🤔

The issue here is that we either need to cache geometry or clear the meshes every frame. This may or may not be a big deal but bevy definitely doesn't assume that meshes are drawn in a kind of immediate mode.

Yeah, currently I think our draw API just reconstructs meshes each frame anyway, and while we do re-use buffers where we can, maybe it's not so crazy to reconstruct meshes each frame? Hopefully this turns out to gel OK with bevy 🙏

Looking forward to seeing where your bevy spelunking takes this !!

@tychedelia (Collaborator, Author)

@mitchmindtree Some more notes from my research.

True, one thing that comes to mind is that today by default we target an intermediary texture for each window (rather than targeting the swapchain texture from the draw pipeline directly) where the idea is that we can re-use the intermediary texture between frames 1. for that processing-style experience of drawing onto the same canvas and 2. for the larger colour channel bit-depth. I wonder if enough bevy parts are exposed to allow us to have a similar setup as a plugin 🤔

Bevy's view logic uses the same intermediate texture pattern, maintaining two internal buffers in order to prevent tearing, etc. You can disable the clear color to get the sketch-like behavior.

Color depth isn't configurable, but using an hdr camera provides the same bit depth as our default (Rgba16Float). Otherwise, bevy uses Rgba8UnormSrgb. Maybe they'd accept a contribution here, although I'd bet these two options work for a great majority of users.
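
For reference, getting both of those behaviors looks roughly like this (sketch, assuming the Bevy 0.12 API where the clear color config lives on Camera3d):

use bevy::core_pipeline::clear_color::ClearColorConfig;
use bevy::prelude::*;

fn spawn_camera(mut commands: Commands) {
    commands.spawn(Camera3dBundle {
        // hdr gives an Rgba16Float view target, matching our current default.
        camera: Camera { hdr: true, ..default() },
        // Skipping the clear keeps the previous frame's contents around for
        // the processing-style "draw onto the same canvas" behavior.
        camera_3d: Camera3d {
            clear_color: ClearColorConfig::None,
            ..default()
        },
        ..default()
    });
}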

They don't support 16x MSAA; not sure why.

In terms of pipeline invalidation, you can see all the options that would cause a pipeline switch in bevy's mesh pipeline. Basically, the key is generated and used to fetch the pipeline, so if the key changes, a new pipeline is created. I believe this supports everything we track: topology, blend state, and msaa.
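
In other words (illustrative only, this isn't bevy's actual key type), a key for our purposes would just be a hashable struct along these lines, and bevy's SpecializedRenderPipeline / SpecializedRenderPipelines machinery takes care of compiling and caching one pipeline per distinct key:

use bevy::render::render_resource::{BlendState, PrimitiveTopology};

// Hypothetical key covering the state we currently track; a new pipeline is
// only created when a draw uses a combination we haven't seen yet.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct NannouPipelineKey {
    topology: PrimitiveTopology,
    blend: Option<BlendState>,
    msaa_samples: u32,
}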

Scissoring seems to be the main thing that isn't supported by default in the mesh pipeline. I think it might be simple to implement as a custom node in the render graph though? Definitely need to do more investigation here. It's supported in their render pass abstraction, just isn't used in the engine or in any examples.
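
For what it's worth, the call is there on the tracked render pass, so inside whatever node we end up with it should just be (sketch):

use bevy::render::render_phase::TrackedRenderPass;

// Clip subsequent draws to a pixel rect, mirroring wgpu's set_scissor_rect.
fn apply_scissor(pass: &mut TrackedRenderPass<'_>, x: u32, y: u32, width: u32, height: u32) {
    pass.set_scissor_rect(x, y, width, height);
}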

I'm like... 70% of the way through implementing our existing rendering logic, but as I read through the bevy source in doing so, I'm continually like, oh they're doing the exact same thing already.

Yeah currently I think our draw API just reconstructs meshes each frame anyways, but I think we do re-use buffers where we can, but maybe not so crazy to reconstruct meshes each frame?

Yeah, I don't think the performance would be worse than our existing pattern, so this is likely totally fine.

Hmm. 🤔 Much to consider. I'm definitely enjoying getting into the fiddly wgpu bits of the renderer, but it would also be great to reduce the amount of custom rendering code we need to maintain as that's kinda the whole point of this refactor.

@tychedelia (Collaborator, Author)

It lives!

[screenshot of the PoC rendering]

Will push my PoC to a branch in a bit. Here are some details about what I've done:

  • We are using a ViewNode (rough sketch after this list), which means we hook into Bevy's windowing. So we attach our nannou-specific components to a view, and are able to target that. This works really well and integrates cleanly with the renderer. This render sits at the end of bevy's core 3d pass. Still need to experiment more with mixing in bevy meshes just to see what happens, but it potentially "just works", which would be so cool.
  • We're using bevy's camera system and view uniforms here, which is nice.
  • This also means that bevy is managing all the textures for us. We write to one of their view target textures in place of the intermediate texture, and bevy writes to the swapchain automatically. Need to investigate whether that also means we can hook into MSAA.
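
Here's roughly the shape of the node (sketch only, Bevy 0.12-ish signatures; NannouViewNode and NannouMesh are placeholder names, and the actual draw submission is elided):

use bevy::ecs::query::QueryItem;
use bevy::prelude::*;
use bevy::render::render_graph::{NodeRunError, RenderGraphContext, ViewNode};
use bevy::render::render_resource::{
    LoadOp, Operations, RenderPassColorAttachment, RenderPassDescriptor,
};
use bevy::render::renderer::RenderContext;
use bevy::render::view::ViewTarget;

// Placeholder for whatever per-view component carries our extracted draw data.
#[derive(Component)]
struct NannouMesh;

#[derive(Default)]
struct NannouViewNode;

impl ViewNode for NannouViewNode {
    // Only views (cameras) carrying our component get this node's work.
    type ViewQuery = (&'static ViewTarget, &'static NannouMesh);

    fn run(
        &self,
        _graph: &mut RenderGraphContext,
        render_context: &mut RenderContext,
        (target, _mesh): QueryItem<Self::ViewQuery>,
        _world: &World,
    ) -> Result<(), NodeRunError> {
        // Bevy hands us the view's intermediate target texture; it handles
        // writing out to the swapchain itself.
        let mut render_pass = render_context.begin_tracked_render_pass(RenderPassDescriptor {
            label: Some("nannou_view_pass"),
            color_attachments: &[Some(RenderPassColorAttachment {
                view: target.main_texture_view(),
                resolve_target: None,
                ops: Operations { load: LoadOp::Load, store: true },
            })],
            depth_stencil_attachment: None,
        });
        // Scissoring is available right here on the tracked render pass
        // (placeholder rect).
        render_pass.set_scissor_rect(0, 0, 64, 64);
        // ... set our pipeline, bind groups, and vertex buffers, then draw ...
        Ok(())
    }
}

Registering it is then a matter of add_render_graph_node::<ViewNodeRunner<NannouViewNode>> on the render app, with an edge placing it after the main 3d passes.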

There's a few outstanding issues to deal with in my PoC:

  • Right now, I'm just binding a mesh instance per window, with no support for binding textures. It's not clear to me whether we want to keep the same pattern of passing mesh + commands to the final render, or whether we might want to explore creating multiple meshes and baking the texture info into them.
  • Not clear to me where the best place to compute the raw vertex data is. Should we compute it in the main world, or extract our main world components into the render world and compute it there? This probably doesn't matter for performance and is more an architecture concern.
  • We can handle scissoring in the ViewNode 👍.
  • No idea about the text stuff; anything that deals with assets we'll want to lean on bevy for.
  • A bunch of misc. code organization / pattern questions I still have.

TL;DR: Sans some outstanding questions about feature parity, this approach is working surprisingly well, and while it still requires us to manage some wgpu stuff, the surface area is reduced a lot and improved by some patterns bevy offers. It would still be really interesting to explore hooking into bevy's pbr mesh stuff completely, but this is definitely a viable approach that demonstrates some of the benefits of our refactor.

@mitchmindtree linked a pull request Jan 29, 2024 that will close this issue
@tychedelia moved this from Ready to In progress in Bevy Plugin Rework Jan 30, 2024
@tychedelia mentioned this issue Feb 1, 2024
@tychedelia (Collaborator, Author)

The bevy asset system is actually incredibly helpful for getting user-uploaded textures to work. When a user uploads a texture, bevy by default creates a Sampler, Texture, TextureView, etc. This means we can just import these, already instantiated, into our render pipeline. Configuration (e.g. for the sampler) is handled by bevy, so we may need to figure out how to expose additional configuration options there. One thing to note is that assets upload asynchronously, so there's a bit of additional complexity there.
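
Roughly, the flow looks like this (sketch only; UserTexture and the system names are made up, and the plugin would also need ExtractResourcePlugin::<UserTexture> registered so the handle is visible in the render world):

use bevy::prelude::*;
use bevy::render::extract_resource::ExtractResource;
use bevy::render::render_asset::RenderAssets;

// Extracted into the render world each frame (via ExtractResourcePlugin).
#[derive(Resource, Clone, ExtractResource)]
struct UserTexture(Handle<Image>);

// Main world: the user just loads an image; bevy creates and uploads the
// Texture, TextureView, and Sampler for us.
fn load_user_texture(mut commands: Commands, assets: Res<AssetServer>) {
    let handle: Handle<Image> = assets.load("texture.png");
    commands.insert_resource(UserTexture(handle));
}

// Render world (e.g. during prepare): the GpuImage may be absent for a frame
// or two while the asset finishes loading asynchronously.
fn prepare_user_texture(images: Res<RenderAssets<Image>>, texture: Res<UserTexture>) {
    if let Some(gpu_image) = images.get(&texture.0) {
        let _view = &gpu_image.texture_view;
        let _sampler = &gpu_image.sampler;
        // ... build our texture bind group from these ...
    }
}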

tychedelia added a commit that referenced this issue Feb 27, 2024
@tychedelia (Collaborator, Author)

Closing this and opening a new ticket to move us to the "mid level" render APIs.

github-project-automation bot moved this from In progress to Done in Bevy Plugin Rework Feb 27, 2024