Will be leaving home for around 2 weeks, so no updates until then.
Pygame offers a basic foundation for creating 2D games, but with advanced techniques, it can generate a sense of 3D depth. Understanding the fundamentals of ray casting and texture mapping is a start to creating such an illusion on a flat 2D screen.
Simply put, depth perception is the ability to see objects in three dimensions, including their size and how far away they are from us. To simulate this effect on-screen, the player's distance from the screen needs to be calculated. Here is a breakdown of the formula, with a short sketch after the list:
Calculate half the width of the player's screen.
Calculate half the player's field-of-view.
Divide the screen half-width by the tangent of half the player's field-of-view.
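A minimal sketch of that calculation; the window width and field-of-view values here are assumptions, not the tutorial's exact settings:

```python
import math

WIDTH = 1600
HALF_WIDTH = WIDTH // 2      # half the width of the player's screen
FOV = math.pi / 3            # player's field of view (60 degrees, in radians)
HALF_FOV = FOV / 2           # half the field of view

# Distance from the player to the projection plane (the "screen").
SCREEN_DIST = HALF_WIDTH / math.tan(HALF_FOV)
```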
The screen distance dictates how walls and other objects are scaled according to their distance from the player: the farther away an object is, the smaller it appears, and the closer it is, the larger it appears.
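Continuing the sketch above, the projected height of a wall column might be derived from the screen distance and the ray's depth; the SCALE constant and the small epsilon are assumptions:

```python
SCALE = 2  # width of one wall column in pixels (assumed)

def projection_height(depth: float) -> float:
    # Closer walls (smaller depth) project taller on screen; the small
    # epsilon avoids division by zero when the player touches a wall.
    return SCREEN_DIST / (depth + 1e-4) * SCALE
```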
Disable the draw() functions in your main and ray casting files to remove your 2D map display, and you will get a clear view of your 3D ray casting game environment.
In our black-and-white, pseudo-3D space, you can apply a lighthouse effect to the environment. What this means is that the player cannot see walls beyond a certain distance. The way the tutorial does this is by darkening the faraway white walls, recoloring them by scaling their RGB values down by a depth-based factor.
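A hedged sketch of that darkening; the falloff constants are assumptions rather than the tutorial's exact values:

```python
def wall_color(depth: float) -> tuple[int, int, int]:
    # Brightness decays with depth, so distant walls fade to black,
    # as if lit by a lighthouse beam centered on the player.
    brightness = max(0, min(255, int(255 / (1 + depth ** 5 * 2e-5))))
    return (brightness, brightness, brightness)
```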
An all too common mechanic in horror games.
In each of the previous 3D ray casting demos, you might notice that straight walls appear curved as you approach them. This distortion is the fishbowl effect: rays cast toward the edges of the view travel farther than rays cast straight ahead, so the projected distance grows without bound as the angle between a ray and the view direction approaches 90 degrees.
Luckily, the tutorial proposed a solution in the form of a straightforward math formula, broken down below with a sketch after the list:
self.game.player.angle - ray_angle: calculates the difference between the player's view direction and the direction of the ray being cast.
math.cos(): calculates the cosine of the angle difference above. The cosine value will be between -1 and 1.
depth *=: multiplies the original depth value by the cosine value.
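A minimal sketch of the correction applied to each ray's depth:

```python
import math

def correct_fishbowl(depth: float, player_angle: float, ray_angle: float) -> float:
    # Multiplying by the cosine of the angle difference converts the ray's
    # Euclidean distance into perpendicular distance from the camera plane,
    # flattening the curved walls.
    return depth * math.cos(player_angle - ray_angle)
```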
No more camera vision.
Texture mapping in 3D ray casting is essentially projecting a 2D image onto a simulated 3D surface. The code is 'painting' the texture onto the wall, using texture coordinates to determine which part of the image to use. You can create a new file to house your upcoming object renderer code.
Imagine a projector shining a beam of light onto a wall. The projector (ray) casts a rectangular image onto the wall (3D surface), distorting the image (texture) as it hits the wall at different angles.
Using different numbers as references to the textures imported earlier, we can decide which texture is loaded for each wall.
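A hedged sketch of that mapping; the file paths are assumptions:

```python
import pygame

WALL_TEXTURE_PATHS = {
    1: 'resources/textures/1.png',
    2: 'resources/textures/2.png',
    3: 'resources/textures/3.png',
}

def load_wall_textures() -> dict[int, pygame.Surface]:
    # Each number stored in the level map picks one of these images.
    return {num: pygame.image.load(path)
            for num, path in WALL_TEXTURE_PATHS.items()}
```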
In the context of 3D ray casting, scanning and projecting textures as a series of tall, thin rectangles along the x-axis is a technique used to render walls and other vertical surfaces more efficiently. Here are the technical reasons it is used, with a small sketch after the list:
Efficient Rendering: each rectangle can be rendered independently, allowing for concurrent processing and reducing overall rendering time.
Texture Alignment: dividing the wall into rectangles ensures that textures align correctly with the wall's geometry.
Vertical Scaling: allows for vertical scaling of textures. Useful when rendering walls of varying heights, as the rectangles can be scaled to match the height of the wall.
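A brief sketch of the x-axis division, one strip per ray; the constants are assumptions:

```python
WIDTH, NUM_RAYS = 1600, 800
SCALE = WIDTH // NUM_RAYS   # each wall column is SCALE pixels wide

# The x position on screen where each ray's rectangle is drawn:
column_x = [ray * SCALE for ray in range(NUM_RAYS)]
```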
In 3D ray casting, texture offset is a value used to adjust the position of a texture on a 3D wall. By adjusting it, the appearance of the texture on the surface can be modified, which can be useful for creating more complex patterns and effects.
Here is a breakdown of the tutorial's code, with a sketch of the logic after the list:
Compare the distances from the ray's origin to the intersection points on the vertical plane (depth_vertical) and the horizontal plane (depth_horizontal) to determine which plane is closer.
If the vertical plane is closer, the code sets the depth to depth_vertical and the texture variable to the vertical texture coordinates (texture_vertical). Otherwise, it sets them to depth_horizontal and texture_horizontal respectively.
Calculate the texture coordinates for the chosen plane by taking the fractional part of the intersection point's coordinates. For the vertical plane, calculate the fractional part of the y-coordinate. Do the same for the horizontal plane with the x-coordinate instead.
Calculate the offset into the texture based on the texture coordinates and orientation of the plane. The offset is used to determine which part of the texture to display at the intersection point.
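A hedged sketch of the breakdown above; the variable names mirror the tutorial's style but are assumptions:

```python
def pick_depth_and_offset(depth_vertical, depth_horizontal,
                          texture_vertical, texture_horizontal,
                          x_hor, y_vert, cos_a, sin_a):
    if depth_vertical < depth_horizontal:
        depth, texture = depth_vertical, texture_vertical
        y_vert %= 1                                # fractional part of y
        offset = y_vert if cos_a > 0 else (1 - y_vert)
    else:
        depth, texture = depth_horizontal, texture_horizontal
        x_hor %= 1                                 # fractional part of x
        offset = (1 - x_hor) if sin_a > 0 else x_hor
    return depth, texture, offset
```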
Now that the mechanisms for generating textured walls on our screen are complete, it is time to put them into our list of items to ray cast.
Remember to reset your placeholder lists at the start of each ray casting pass. Otherwise, leftover data can accumulate or end up shared across instances, leading to unexpected behavior and bugs.
With the ray casting data in our list, now is the time to use it to draw textured walls on our screen. To do so, we need to build a subsurface texture and a position variable for every wall on the map.
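A hedged sketch of building a drawable column for every ray's result; the constants and the (depth, proj_height, texture_id, offset) tuple layout are assumptions:

```python
import pygame

HEIGHT, SCALE, TEXTURE_SIZE = 900, 2, 256

def get_objects_to_render(ray_casting_result, wall_textures):
    objects_to_render = []
    for ray, (depth, proj_height, texture_id, offset) in enumerate(ray_casting_result):
        texture = wall_textures[texture_id]
        # Subsurface: the vertical strip of the texture this ray hit.
        wall_column = texture.subsurface(
            (int(offset * (TEXTURE_SIZE - SCALE)), 0, SCALE, TEXTURE_SIZE))
        # Stretch the strip to its projected height on screen.
        wall_column = pygame.transform.scale(wall_column, (SCALE, int(proj_height)))
        # Position: the ray index gives x; center the column vertically.
        wall_pos = (ray * SCALE, (HEIGHT - int(proj_height)) // 2)
        objects_to_render.append((depth, wall_column, wall_pos))
    return objects_to_render
```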
Call your newly made object-rendering method inside your constantly refreshing update method.
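A hedged sketch of the hookup inside the main game class; the attribute and method names are assumptions:

```python
class Game:
    # ...existing setup creating player, raycasting, and object_renderer...

    def update(self):
        self.player.update()
        self.raycasting.update()
        self.object_renderer.draw()  # draws the textured walls every frame
```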
Finally, we need to build a texture-loading method for the main file to run. The tutorial introduces a new Python term for this method: the static method. It is a method that belongs to the class itself, not its instances. It does not require an instance of the class to be called, nor does it have access to instance state.
Imagine a library with a single copy of a book. The book (static method) is shared among all visitors (instances of a class), and its contents (behavior) remain the same for everyone. No matter how many people read the book, the words and pages (implementation) do not change, and each reader does not have their own personal copy (instance-specific data).
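A minimal sketch of such a static texture loader; the default resolution is an assumption:

```python
import pygame

class ObjectRenderer:
    @staticmethod
    def get_texture(path, res=(256, 256)):
        # No 'self' parameter: the method touches no instance state.
        texture = pygame.image.load(path).convert_alpha()
        return pygame.transform.scale(texture, res)

# It can be called straight off the class, no instance needed:
# wall = ObjectRenderer.get_texture('resources/textures/1.png')
```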
Now we are getting somewhere!
The current program has a bug concerning the wall graphics: if the player gets too close to a wall, the game's frame rate drops to single digits.
The tutorial's solution to this bug is to cap very close wall columns at the full screen height, sampling only the slice of the texture that would actually be visible, instead of scaling the whole texture strip to an enormous projection height. This cuts the number of pixels the program has to render. Walls farther away are still scaled to their exact projection height, as before.
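A hedged sketch of that fix; the constants are assumptions and the variable names mirror the earlier sketches:

```python
import pygame

HEIGHT, SCALE, TEXTURE_SIZE = 900, 2, 256

def build_wall_column(texture, offset, proj_height):
    if proj_height < HEIGHT:
        # Far wall: scale the full texture strip to its projected height.
        wall_column = texture.subsurface(
            (int(offset * (TEXTURE_SIZE - SCALE)), 0, SCALE, TEXTURE_SIZE))
        return pygame.transform.scale(wall_column, (SCALE, int(proj_height)))
    # Near wall: only a middle sliver of the texture is visible, so sample
    # just that sliver and scale it to the screen height, capping the
    # number of pixels rendered.
    texture_height = TEXTURE_SIZE * HEIGHT / proj_height
    wall_column = texture.subsurface(
        (int(offset * (TEXTURE_SIZE - SCALE)),
         int((TEXTURE_SIZE - texture_height) // 2),
         SCALE, int(texture_height)))
    return pygame.transform.scale(wall_column, (SCALE, HEIGHT))
```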
The lag is gone now when the player camera gets up close, but the walls look pixelated. This is because the player has no actual size and is only as large as a dot.
By giving our player a size, we can keep the camera from clipping through wall corners when rotating around them. Beyond that, the size can also serve as a hitbox in other situations.
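A hedged sketch of a size-aware collision check; PLAYER_SIZE_SCALE and the world_map grid lookup are assumptions:

```python
PLAYER_SIZE_SCALE = 60

def check_wall(world_map, x, y):
    # Walkable if no wall tile occupies this grid cell.
    return (x, y) not in world_map

def check_wall_collision(player, world_map, dx, dy, delta_time):
    # Probe a point ahead of the player scaled by its size, so the camera
    # stops short of walls instead of touching (and clipping into) them.
    scale = PLAYER_SIZE_SCALE / delta_time
    if check_wall(world_map, int(player.x + dx * scale), int(player.y)):
        player.x += dx
    if check_wall(world_map, int(player.x), int(player.y + dy * scale)):
        player.y += dy
```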
No more freezing up close like a shy performer on the grand stage.
On the 2D debug view, the player's size shows up as a small gap between the player dot and the surrounding wall borders.