Gamasutra: The Art & Business of Making Games
How Do Bullets Work in Video Games?

by Tristan Jung on 12/06/19 10:17:00 am   Featured Blogs


The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

FPS (first-person shooter) games have been a staple of the video game industry ever since the explosion of Wolfenstein 3D back in 1992. Since then, the genre has been evolving with graphical upgrades, huge budgets, and an eSports ecosystem. But what about its core, the shooting mechanics? How have we progressed on that front? Why do some guns feel like the real thing, while others feel like toys?

“How do bullets work in video games?”


Hitscan

In the early days, many games relied on a technique called raycasting to render 3D environments onto a 2D image (your screen). Raycasting also lets the engine determine the first object intersected by a ray. Developers then started to ask, “What if that ray originated from the muzzle of a gun to mimic a bullet?” With that idea, hitscan was born.

Above: An example of raycasting

In most implementations of a hitscan weapon, when the player shoots a bullet, the physics engine will:

  • Figure out the direction the gun is pointing,
  • Cast a ray from the muzzle of the gun out to a defined maximum range,
  • Use raycasting to determine whether the ray hit an object.

If the engine determines that an object is in the line of fire, it notifies that object that it was “hit” by a bullet. The target can then run whatever calculations it needs to register the damage.
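The steps above can be sketched engine-agnostically. Here is a minimal illustration in Python, with spheres standing in for hitboxes; the `raycast` helper and its target format are made up for this example and are not any engine's API:

```python
import math

def raycast(origin, direction, targets, max_range):
    """Return the nearest target hit by a ray, or None.

    origin, direction: 3-tuples; direction must be normalized.
    targets: list of (center, radius) spheres standing in for hitboxes.
    """
    nearest, nearest_dist = None, max_range
    for center, radius in targets:
        # Vector from ray origin to sphere center
        oc = tuple(c - o for c, o in zip(center, origin))
        # Project onto the ray to find the closest approach
        t = sum(a * b for a, b in zip(oc, direction))
        if t < 0:
            continue  # sphere is behind the muzzle
        closest_sq = sum(c * c for c in oc) - t * t
        if closest_sq > radius * radius:
            continue  # ray passes outside the sphere
        hit_dist = t - math.sqrt(radius * radius - closest_sq)
        if 0 <= hit_dist < nearest_dist:
            nearest, nearest_dist = (center, radius), hit_dist
    return nearest

# Firing straight down the +x axis at a target 10 units away
target = ((10.0, 0.0, 0.0), 1.0)
hit = raycast((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), [target], 100.0)
```

Note that the whole shot resolves in one function call: there is no projectile object to track afterwards, which is exactly where hitscan gets its performance advantage.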

Above: From Unity. A ray cast from point A travels out to its maximum range at point B; it makes contact with the cube, which the engine reports as hit.

Hitscan is simple at its core, but a lot of different modifications can be made to support other logic:

  • Continuing the ray past the first object it hits lets us penetrate multiple objects in a line, like the railgun in Quake
  • Removing the maximum range of the ray gives us a laser that travels forever until it hits something
  • Programming certain surfaces to be reflective lets bullets bounce off them

Above: Overwatch. Genji’s deflect is an example of a reflective surface.
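The penetration modification is easy to see in isolation. In this sketch the full ray-vs-hitbox intersection tests are reduced to precomputed distances along the ray (the function and its inputs are hypothetical), and the cast simply collects every target instead of stopping at the first:

```python
def penetrating_ray(hits_along_ray, max_range):
    """Return every target the ray passes through, nearest first.

    hits_along_ray: list of (name, distance) pairs, standing in for
    full ray-vs-hitbox intersection tests.
    """
    hits = [(d, name) for name, d in hits_along_ray if 0 <= d <= max_range]
    hits.sort()  # pierce targets in the order the ray reaches them
    return [name for _, name in hits]

# A Quake-style railgun shot hits everything in the line of fire;
# the wall at 30 m is beyond this weapon's 25 m range
order = penetrating_ray([("grunt", 5.0), ("wall", 30.0), ("tank", 12.0)], 25.0)
# order == ["grunt", "tank"]
```

A non-penetrating weapon is the degenerate case: take only the first element of the sorted list.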

The main advantage of raycasting is that it's super fast. It's quick to compute and needs no extra memory or processing time to build a new physics object. That keeps the network engineering needed to hold many clients in sync minimal, since the server only needs to track the direction of the ray. Recoil is simple to add: a small perturbation in the gun's aim mimics the effect.
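That recoil perturbation could look like the following 2D sketch (the names are illustrative; a real engine would typically jitter a 3D aim direction per axis):

```python
import math
import random

def apply_recoil(direction, spread_degrees, rng=random):
    """Nudge a normalized 2D aim direction by a random angle within
    +/- spread_degrees, mimicking recoil on a hitscan weapon."""
    angle = math.atan2(direction[1], direction[0])
    angle += math.radians(rng.uniform(-spread_degrees, spread_degrees))
    return (math.cos(angle), math.sin(angle))

# Five shots from a gun aimed down +x land in a narrow cone
shots = [apply_recoil((1.0, 0.0), spread_degrees=2.5) for _ in range(5)]
```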

Thus, it's no surprise that many games in the industry use hitscan for their shooting logic. Wolfenstein 3D and Doom are classic examples, but even recent games use the technique. Characters such as Soldier: 76, McCree, and Widowmaker from Overwatch have hitscan weapons, and most Call of Duty guns are hitscan as well.

Below: Examples from Overwatch, Call of Duty, Wolfenstein 3D

So why don’t all games use this method?

First, you may have noticed that rays travel at effectively infinite velocity, reaching their destination instantly. There is no travel time between firing a bullet and hitting an object, which makes it impossible to dodge a shot that is on target, even if the target is miles away.

Above: Halo. Notice how the muzzle flare and the hit effects on the ground show up at the same time.

Second, most implementations of hitscan use straight rays. This makes it hard to account for wind, gravity, and other external factors that affect a bullet once it leaves the gun. Programmers can add kinks and bends to the ray to help it mimic real rounds, but once the player fires, there is no real way to modify the ray's path mid-flight.

A lot of “casual” games end up using hitscan because it flattens the learning curve for beginner players. But games that aim for an “immersive and realistic” shooting experience cannot achieve that within these constraints. They need an alternative method.


Projectile Ballistics

It sounds pretty fancy, but the high-level idea is straightforward. Every bullet or projectile shot out of a weapon creates a new physics object in the environment. It has its own mass, velocity, and hitbox that the engine will track.

Above: Max Payne 3

The advantages of projectile ballistics shine in games where realism is the top priority. Since every projectile exists as its own object, you can factor in wind, friction, gravity, temperature: any force that should act on the bullet. And because the physics is now yours to change, players can use weapons beyond simple guns and lasers; grenades and rockets join the arsenal.
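A minimal sketch of such a physics object, assuming 2D coordinates and semi-implicit Euler integration per tick (the class and its fields are illustrative, not taken from any engine):

```python
class Projectile:
    """A minimal ballistic projectile: the engine tracks its own
    position and velocity and integrates external forces per tick."""

    GRAVITY = (0.0, -9.81)  # m/s^2

    def __init__(self, position, velocity, mass=0.01):
        self.position = list(position)
        self.velocity = list(velocity)
        self.mass = mass

    def tick(self, dt, wind=(0.0, 0.0)):
        # Accumulate accelerations: gravity, plus wind as a crude force
        ax = self.GRAVITY[0] + wind[0] / self.mass
        ay = self.GRAVITY[1] + wind[1] / self.mass
        # Semi-implicit Euler: update velocity first, then position
        self.velocity[0] += ax * dt
        self.velocity[1] += ay * dt
        self.position[0] += self.velocity[0] * dt
        self.position[1] += self.velocity[1] * dt

# A bullet fired horizontally at 400 m/s drops as it travels
bullet = Projectile(position=(0.0, 1.5), velocity=(400.0, 0.0))
for _ in range(100):          # simulate one second at 100 ticks/s
    bullet.tick(dt=0.01)
```

After one simulated second the bullet has travelled 400 m downrange and dropped roughly half a metre short of 5 m, which is the kind of bullet drop hitscan cannot model without hacks.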

Since bullets under this system aren’t moving at the speed of light, you can also implement temporal features:

  • “Bullet-time” as seen in Max Payne, Sniper Elite, or Superhot is feasible.
  • Travel time for projectiles, which means if you’re taking a long-distance shot (or shooting a slow-moving projectile), aiming ahead becomes crucial.
  • Delayed explosions on projectiles, like grenades
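Aiming ahead amounts to solving for the interception time. Here is a sketch, assuming a constant-velocity target and a constant-speed projectile in 2D (all names hypothetical):

```python
import math

def lead_time(shooter, target, target_velocity, projectile_speed):
    """Time at which a projectile fired now meets a target moving at
    constant velocity, or None if no intercept exists.

    Solves |target + v*t - shooter| = projectile_speed * t,
    a quadratic in t."""
    rx = target[0] - shooter[0]
    ry = target[1] - shooter[1]
    vx, vy = target_velocity
    a = vx * vx + vy * vy - projectile_speed ** 2
    b = 2 * (rx * vx + ry * vy)
    c = rx * rx + ry * ry
    if abs(a) < 1e-9:                      # speeds match: linear case
        return -c / b if b < 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # target is too fast to hit
    t1 = (-b - math.sqrt(disc)) / (2 * a)
    t2 = (-b + math.sqrt(disc)) / (2 * a)
    times = [t for t in (t1, t2) if t > 0]
    return min(times) if times else None

# Target 100 m ahead strafing at 10 m/s; projectile flies at 50 m/s
t = lead_time((0.0, 0.0), (100.0, 0.0), (0.0, 10.0), 50.0)
aim_point = (100.0, 10.0 * t)              # fire here, not at the target
```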

With these additional computations, the processing is more taxing than hitscan. Servers have to do a lot more work to keep all the objects in sync, and discrepancies or conflicts in logic across clients have to be resolved so that players on the same server don't end up with inconsistent experiences.

Below: Examples from Superhot, Battlefield 1, Overwatch

There are many ways around this to squeeze out as much performance as possible. One engine optimization is to load a “pool” of projectile objects before playtime and “warp in and enable” them when needed. Once a projectile hits a surface, you can play a ballistics animation and disable it, saving it for later. This reduces the computation and memory cost of creating and destroying objects over and over again.
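A toy version of that pooling pattern (dict-based projectile records here are a stand-in for real engine objects):

```python
class ProjectilePool:
    """Preallocate projectile records and recycle them, instead of
    creating and destroying an object on every shot."""

    def __init__(self, size):
        self._free = [{"active": False} for _ in range(size)]

    def fire(self, pos, vel):
        """'Warp in and enable' a pooled projectile; None if exhausted."""
        if not self._free:
            return None
        p = self._free.pop()
        p.update(active=True, pos=pos, vel=vel)
        return p

    def retire(self, p):
        """Disable a projectile after its impact animation and
        return it to the pool for reuse."""
        p["active"] = False
        self._free.append(p)

pool = ProjectilePool(size=64)
shot = pool.fire(pos=(0.0, 0.0), vel=(400.0, 0.0))
pool.retire(shot)          # the same record is handed out again later
```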

There are also multiple ways to run the computations, but the high-level difference is when the engine processes a “tick” of the game, its unit of time measurement:

  • Calculating the tick separately from the rendering logic, which gives a more accurate representation of the objects even when frames are skipped. Extra logic is needed to track exactly how much time has passed since the last render.
  • Calculating the tick on every frame, binding the physics to the frame rate. If you disable frame rate caps or start dropping frames, you can see accelerated or choppy effects on the world.
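The first approach, a fixed physics tick decoupled from rendering, is usually implemented with a time accumulator. A sketch (function names are illustrative):

```python
def run_frame(render_dt, state, accumulator, tick_dt=1.0 / 60.0):
    """Fixed-timestep loop: physics ticks at a constant rate no
    matter how long the last frame took to render.

    render_dt: wall-clock time since the last rendered frame.
    Returns the leftover accumulator (time not yet simulated)."""
    accumulator += render_dt
    while accumulator >= tick_dt:
        physics_tick(state, tick_dt)   # always advances by tick_dt
        accumulator -= tick_dt
    # accumulator / tick_dt can be used to interpolate rendering
    return accumulator

def physics_tick(state, dt):
    state["x"] += state["vx"] * dt     # stand-in for real physics

state = {"x": 0.0, "vx": 10.0}
acc = 0.0
# A slow 50 ms frame still produces exactly three 60 Hz physics ticks
acc = run_frame(0.050, state, acc)
```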

The consequence of tying movement to ticks becomes clear when projectiles move fast enough to cover large distances between ticks. Objects can seem to “phase through” each other, because as far as the engine is concerned they never overlapped.

All of this sounds fancy, leading many people to think it is a relatively new method; but it actually predates hitscan! Before FPS games, there were already many top-down shooters, such as Asteroids, Space Invaders, or Galaxian. These arcade games from the '70s were already implementing projectile ballistics, albeit in primitive form.

Above: Asteroids. The bullets are a bit hard to see, but they are there!

Even with all these features, we’re not able to create a realistic representation of the real world. Is there a way we can get the advantages of both methods?


Hybrid Systems

Yes, we can!

Most game engines can handle both types of bullet simulations: hitscan and projectile ballistics. This gives the option to have a huge variety of weapons; games such as Halo, GTA, and Half-Life have weapons that support both types of physics.

Below: Halo. The Assault Rifle uses hitscan; the Needler uses projectile ballistics

Developers can also mix the two techniques to cover each system's weaknesses and provide an even more lifelike experience. For example, to fix projectiles phasing through objects, each projectile can cast a ray along its path every engine tick; the engine can then catch collisions that would otherwise fall between ticks.
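Here is what that per-tick ray check buys, in a deliberately tiny sketch: a thin wall modeled as the plane x == plane_x, and a projectile whose two consecutive tick positions straddle it (all names are hypothetical):

```python
def swept_hit(prev_pos, new_pos, plane_x):
    """Detect whether a fast projectile crossed a thin wall at
    x == plane_x between two ticks, even though neither sampled
    position overlaps the wall itself.

    Returns the crossing point, or None."""
    (x0, y0), (x1, y1) = prev_pos, new_pos
    if x0 == x1:
        return None                    # moving parallel to the wall
    if (x0 - plane_x) * (x1 - plane_x) > 0:
        return None                    # both samples on the same side
    t = (plane_x - x0) / (x1 - x0)     # parametric crossing time
    return (plane_x, y0 + t * (y1 - y0))

# A 900 m/s bullet at 60 ticks/s moves 15 m per tick: it would phase
# straight through a wall at x = 107 without the per-tick ray check
hit = swept_hit((100.0, 2.0), (115.0, 2.0), plane_x=107.0)
# hit == (107.0, 2.0)
```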

They can also be blended to enhance features in a game. A great example of this is in the Sniper Elite series; after pulling the trigger, the engine uses hitscan to determine if the shot is close enough to any detectable target to trigger slow motion. If true, it will fire a bullet with projectile ballistics in bullet-time.

Above: Sniper Elite


And that about covers the basics of how bullets work in video games! It's interesting to see that the field is now focused on smaller refinements and improvements rather than massive overhauls. We haven't made significant leaps and bounds since the first few revolutionary games were released.

So what now? What lies on the road ahead?

I don’t see the hybrid approach going away anytime soon due to the extra features it provides, but I predict a lot of the improvements will happen on projectile ballistics. As we continue to increase the frequency of the tick computation (with increased CPU power), we will be able to approach the asymptotic limit of “real-life” bullet simulation.


This article was based on a Quora answer I posted. I would like to thank Pavel Drotár, Renaud Kyokushin, Paul Winstone, and Jason Fletcher for their comments.

Originally published on my Medium account. You can find me on Twitter.

