Hi, this is really nice work; it could become an even greater tutorial, but I think it needs some improvements because, as a beginner, I found that not everything is straightforward to understand. I have not yet read the whole article; I stopped after the first sections. The goal of this article is to be understandable for beginners, but it uses a lot of terms that are not really introduced and that are therefore difficult for beginners to understand. Maybe that is because I have never used Panda3D and I am extrapolating from an OpenGL program.
Materials:
The first texture is the normal map and the second is the diffuse map.
You should introduce what a normal map and a diffuse map are. At least mention that normal maps are textures holding mathematical vectors encoded/shown as RGB colors: (R, G, B) = (X, Y, Z). I guess the vectors are normalized and each component is encoded as a single byte.
If an object uses its vertex normals, a "flat blue" normal map is used.
Certainly! But because you did not say that a vector is stored as a color, it is difficult to get the point. What color are the vectors initialized to by default? (0, 0, 1)? I guess that if the Z axis is not used, the green component is never used, so only red and blue are used, hence "flat blue"? Also, why display such a big purple figure? It could be interesting to add a figure that gives more information, like https://en.wikipedia.org/wiki/Normal_mapping#/media/File:Normal_map_example_with_scene_and_result.png, and to explain the goal of the colors (i.e. red -> shadow, green -> light).
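To make the "vector == color" idea concrete, something like this tiny sketch could be shown (my own reading of the encoding, so treat the exact formula as an assumption):

```python
def normal_to_byte_color(x, y, z):
    """Encode a unit normal, each axis in [-1, 1], as an 8-bit RGB color."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (x, y, z))

# A map that just says "use the vertex normal" stores (0, 0, 1) everywhere,
# which encodes to the flat light-blue/purple texel (128, 128, 255).
print(normal_to_byte_color(0, 0, 1))  # (128, 128, 255)
```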
By having the same maps in the same positions for all models, the shaders can be generalized and applied to the root node in the scene graph.
You should explain in a few words what a scene graph is. As I understand it, it is a graph structure where each node holds at least a change-of-basis matrix (relative to its parent node) and an optional 3D model. In your case, do nodes also hold shaders, and are the shaders of parent nodes also applied to child nodes (like the matrices)?
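For example, a minimal sketch of how I imagine it works in Panda3D (the model and shader file names here are made up):

```python
from direct.showbase.ShowBase import ShowBase
from panda3d.core import Shader

class App(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # "render" is the root node of the scene graph.
        model = self.loader.loadModel("mill-scene")  # hypothetical model file
        model.reparentTo(self.render)                # its matrix is relative to its parent
        # Setting the shader on the root node propagates it to every child node,
        # the same way the matrices are combined down the graph.
        shader = Shader.load(Shader.SL_GLSL, vertex="base.vert", fragment="base.frag")
        self.render.setShader(shader)

App().run()
```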
Concerning your computation:
round((0 * 0.5 + 0.5) * 255)
Should it not simply be round(0 * 255 + 0.5)? You should probably add a comment saying: to convert a float to the closest integer we add 0.5 before truncating, since converting a float to an int keeps the lower integer; that way values in [0.0 .. 0.5[ are rounded to 0 while values in [0.5 .. 1.0[ are rounded to 1.
round(255 / 255 * 2 - 1)
What is that? You should explain this formula. I had to go to the wiki to see that some axes are within -1 and 1 while others are within 0 and 1.
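For what it is worth, here is how I eventually read the two formulas (my own interpretation, so treat it as an assumption):

```python
def normal_to_byte(v):
    """Map a normal component in [-1, 1] to a stored byte in [0, 255]."""
    return round((v * 0.5 + 0.5) * 255)

def byte_to_normal(b):
    """Map a stored byte in [0, 255] back to a component in [-1, 1]."""
    return b / 255 * 2 - 1

print(normal_to_byte(0.0))   # 128: the R and G channels of "flat blue"
print(normal_to_byte(1.0))   # 255: the B channel of "flat blue"
print(byte_to_normal(255))   # 1.0
print(byte_to_normal(0))     # -1.0
```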
GLSL section:
Instead of using the fixed-function pipeline, you'll be using the programmable GPU rendering pipeline.
Is this really a tutorial for beginners? :) I guess they have probably never programmed in legacy OpenGL. I understand that explaining again what a programmable (non-fixed) GPU pipeline is would be boring. Maybe you should just summarize it with a few block diagrams and add links to tutorials that explain the pipeline at greater length.
So a simple figure is enough:
+------------------+-------------------+
|                  |                   |
|                  V                   V
[Panda3D code] ==> [Vertex shader] ==> [Fragment shader] ==> [Framebuffer]

==> pipeline for shader attributes
--> pipeline for shader uniforms
This will also introduce the framebuffer for the next section.
Note the two keywords uniform and in... The in keyword means this global variable is being given to the shader.
These seem to me to be important elements of the shader; I would not define them inside a simple note but give them a full paragraph of description. And why is out not defined alongside them? I would simply introduce the inputs/outputs of shaders: the input of a fragment shader is the output of a vertex shader.
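Something like the following annotated pair would already help (a minimal sketch; the tintColor uniform and the shader sources are my invention, only the p3d_* names are standard Panda3D shader inputs):

```python
from panda3d.core import Shader

VERT = """
#version 150
uniform mat4 p3d_ModelViewProjectionMatrix;  // uniform: filled in by Panda3D once per draw
in vec4 p3d_Vertex;                          // in (vertex stage): a per-vertex attribute
out vec4 vertexColor;                        // out: handed over to the fragment shader

void main() {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
    vertexColor = vec4(1.0, 0.0, 0.0, 1.0);
}
"""

FRAG = """
#version 150
uniform vec4 tintColor;  // uniform: supplied from Python with setShaderInput
in vec4 vertexColor;     // in (fragment stage): the vertex shader's matching out
out vec4 fragColor;      // out: the single color written to the framebuffer

void main() {
    fragColor = vertexColor * tintColor;
}
"""

shader = Shader.make(Shader.SL_GLSL, vertex=VERT, fragment=FRAG)
# node_path.setShader(shader)
# node_path.setShaderInput("tintColor", (1, 1, 1, 1))  # fills the uniform
```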
Render To Texture
Instead of rendering/drawing/painting directly to the screen
Technically, the screen is a framebuffer: it is the framebuffer that is bound by default.
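A rough sketch of how I understand render-to-texture in Panda3D (the buffer name, size, and model are arbitrary):

```python
from direct.showbase.ShowBase import ShowBase

class App(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # An offscreen framebuffer with a texture attached to it.
        buffer = self.win.makeTextureBuffer("offscreen-pass", 512, 512)
        texture = buffer.getTexture()   # the fragment shader's output ends up in here
        # A second camera renders the scene into that buffer instead of the window.
        buffer_camera = self.makeCamera(buffer)
        buffer_camera.reparentTo(self.render)
        # The rendered texture can then be applied to something in the main window.
        card = self.loader.loadModel("card")  # hypothetical model to show the texture on
        card.reparentTo(self.render)
        card.setTexture(texture, 1)

App().run()
```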
The textures bound to the framebuffer hold the vector(s) returned by the fragment shader. Typically these vectors are color vectors (r, g, b, a) but they could also be position or normal vectors (x, y, z, w).
This would be better placed when introducing normal maps in the first section.
each fragment shader in the example code has only one output.
What is this output?
Texturing:
Texturing involves mapping some color or some other kind of vector to a fragment using UV coordinates.
In the figure just after, you should display the U and V axes in the picture and say that U and V are the X and Y axes inside the texture frame.
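A tiny sketch of the lookup would also make it concrete (nearest-neighbour only, and the V-axis origin convention here is my assumption):

```python
def sample_nearest(texture, u, v):
    """Nearest-neighbour lookup: U and V in [0, 1] map to X and Y pixel indices."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 "texture" of RGB colors; (0, 0) is one corner, (1, 1) the opposite one.
tex = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
print(sample_nearest(tex, 0.0, 0.0))  # (255, 0, 0)
print(sample_nearest(tex, 0.9, 0.9))  # (255, 255, 255)
```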