Selecting by Weighted Angle

by E McNeill on 11/08/17 09:44:00 am

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.


THE SHORT VERSION: When trying to select among many small objects with a ray, e.g. in VR games that employ a virtual laser pointer, consider using a "weighted angle" algorithm: pick the object that minimizes the angle (between the pointer's forward direction and the vector from the pointer to the object) multiplied by the object's distance from the pointer. This makes selection forgiving, flexible, and intuitive: the player can pick out distant objects, yet closer ones are still privileged.


This idea is pretty simple. I don't imagine that I'm the first to see it! But it wasn't obvious to me, and I was never introduced to it before, so I wanted to offer a public explanation. Hopefully I can save some people the trouble of working it out themselves.  :)

Imagine a game with lots of small UI elements that you need to select from. Maybe this is a complicated UI or a crowded battlefield that you're clicking on, or maybe it's a VR game where you're using a laser pointer to select objects. (With several headsets that use 3DOF pointer controllers, this is pretty common.) When you point that ray into a group of objects, how do you determine which object it's pointing at? What are you "hovering over"?

As it happens, I’m making exactly that sort of game. I haven't publicly announced it yet, but my next VR game, Astraeus, involves using a laser pointer to manipulate a dense network composed of scores of small objects. Here's what it looks like when I reduce that to just 3 objects (a nearby blue sphere, a somewhat distant orange sphere, and a very distant purple sphere):

So how do we pick which object we're pointing at? The simplest solution would be to cast a ray against physics colliders and pick the object that the ray hits. But for small objects and a jittery laser pointer, that could be very difficult. Maybe we could boost the size of the colliders beyond the actual size of the object? Here's a visualization of the selection area for each object (the shaded pixels indicate that if we clicked on that point, we'd select the object with the same color):

That does help a bit, but it's limited. Small objects that are far away are still difficult to select. Maybe we could boost the size of the colliders even further:

But this introduces an occlusion problem. The collision radius of the nearby blue sphere is covering up the collision for the more distant purple sphere. Even if the player were pointing directly at the purple sphere, we wouldn't be able to select it, which would be a pain.

One alternative is to select objects based on angle. You could check all the objects in front of the player and calculate the angle between the player's pointing ray (the laser pointer) and the ray that extends from the pointer's position to the object. Then you could say that the object with the smallest angle is the one that we're pointing at:
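As a rough sketch of that idea (Python here for brevity; the game itself presumably uses an engine language, and the function names are my own), the smallest-angle rule might look like this:

```python
import math

def angle_to(pointer_pos, pointer_fwd, obj_pos):
    """Angle (radians) between the pointer's forward direction and the
    vector from the pointer to the object. Assumes the object is not
    exactly at the pointer's position."""
    to_obj = [o - p for o, p in zip(obj_pos, pointer_pos)]
    dist = math.sqrt(sum(c * c for c in to_obj))
    fwd_len = math.sqrt(sum(c * c for c in pointer_fwd))
    cos = sum(a * b for a, b in zip(pointer_fwd, to_obj)) / (fwd_len * dist)
    return math.acos(max(-1.0, min(1.0, cos)))  # clamp for float safety

def pick_by_angle(pointer_pos, pointer_fwd, objects):
    """Return the object position whose direction makes the smallest
    angle with the pointer ray."""
    return min(objects, key=lambda o: angle_to(pointer_pos, pointer_fwd, o))
```

Note that distance plays no role here: an object a meter away and one a kilometer away compete purely on angle.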

Now we don't have to be hovering precisely over the object to select it! This algorithm never has occlusion issues, and it lets you more easily select distant objects.

However, it's kind of weird too. One of these objects is very nearby, and the others are farther away. It's not intuitive that a very distant object should have the same priority as the closest one. The player is more likely to be trying to select the closer object, and ignoring it in favor of something deep in the background would be super frustrating.

The solution is to multiply the angle by the distance to the object. Thus, if an object is twice as far away as another, your pointing (i.e. the angle) needs to be twice as precise. With the weighted angle algorithm, nearer objects are easier to select, but you can still always select any object if you're pointing directly at it (i.e. if the angle is near zero).
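In sketch form (again illustrative Python, not the game's actual code), the only change from the pure-angle version is the multiplication by distance before taking the minimum:

```python
import math

def weighted_angle_score(pointer_pos, pointer_fwd, obj_pos):
    """Angle between the pointer ray and the direction to the object,
    multiplied by the object's distance, so nearer objects score lower
    (better) for the same angular offset."""
    to_obj = [o - p for o, p in zip(obj_pos, pointer_pos)]
    dist = math.sqrt(sum(c * c for c in to_obj))
    fwd_len = math.sqrt(sum(c * c for c in pointer_fwd))
    cos = sum(a * b for a, b in zip(pointer_fwd, to_obj)) / (fwd_len * dist)
    angle = math.acos(max(-1.0, min(1.0, cos)))
    return angle * dist  # double the distance => pointing must be twice as precise

def pick_weighted(pointer_pos, pointer_fwd, objects):
    """Return the object position with the lowest weighted-angle score."""
    return min(objects, key=lambda o: weighted_angle_score(pointer_pos, pointer_fwd, o))
```

For small angles this score is roughly the object's perpendicular distance from the ray, which is one way to see why it feels intuitive: the "easiest" object is approximately the one the laser passes closest to in world space.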

This might seem pretty basic so far, or like more trouble than it's worth, but most scenes in my game are not so simple. What happens if we have dozens upon dozens of objects to choose from?

If we just use large invisible colliders and raycast against them, we get a suboptimal result with a lot of potentially frustrating occlusion problems:

If we pick based on smallest angle, selection is a lot more forgiving, but distant objects are overprivileged, especially on the edges:

If we use weighted angles (multiplying by distance), we prioritize the nearer objects again, and the selection radii for each object smooth out and become a little more sensible. It feels a bit more intuitive:

There are several other ways we can improve this. First, this algorithm completely ignores occlusion; this is better than having objects blocked by invisible walls, but in some cases it leads to the ability to select an object that is completely hidden behind another. We could solve this by first performing a direct raycast against realistically-sized collision geometry; if there's a direct hit, we can skip selecting by angle entirely.
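That hybrid can be sketched generically (Python again; `raycast` and `score` here are hypothetical callbacks standing in for the engine's physics raycast and for a weighted-angle scoring function, neither of which is shown here):

```python
def hybrid_select(pointer_pos, pointer_fwd, objects, raycast, score):
    """Prefer a direct physics hit on true-size colliders; fall back to
    angle-based scoring only when the ray hits nothing."""
    hit = raycast(pointer_pos, pointer_fwd)
    if hit is not None:
        return hit  # a direct hit wins outright, fixing the hidden-object case
    return min(objects, key=lambda o: score(pointer_pos, pointer_fwd, o))
```

The key point is the ordering: the exact raycast is authoritative when it lands, and the forgiving angle-based search only decides among objects the ray missed.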

Second, multiplying by distance is just a way of giving added weight to that particular variable. We could also add a bonus for physically larger objects. Or, if a certain UI element is being highlighted, we could give it extra weight so it's easier to select than other objects.

Third, we can weigh the variables differently. If distance is especially important (like if distant objects are very unlikely to be the player's intended target), we could multiply the angle by the square of the distance.

Lastly, we can add a minimum angle threshold, so that the player isn't always considered to be hovering over something.

For my game, I decided to add a minimum angle threshold, multiply the angle by distance, give double weight to blue spheres (the planets owned by the player), and scale the angle based on the sphere's size. The result is that A) the player doesn't need to be pointing directly at an object to select it, B) nearby spheres, larger spheres, and blue spheres are much easier to select, and C) careful selection of smaller and more distant objects is still possible:
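Putting all of those tweaks together, a combined scoring function could look something like the following (a Python sketch; the 15-degree threshold, the field names, and the exact bonus factors are illustrative stand-ins, not the game's actual numbers):

```python
import math

ANGLE_THRESHOLD = math.radians(15)  # illustrative cutoff, not the game's real value

def selection_score(pointer_pos, pointer_fwd, obj):
    """Lower is better; None means 'not hoverable at all'.
    obj is a dict with 'pos', 'radius', and 'owned' keys."""
    to_obj = [o - p for o, p in zip(obj["pos"], pointer_pos)]
    dist = math.sqrt(sum(c * c for c in to_obj))
    fwd_len = math.sqrt(sum(c * c for c in pointer_fwd))
    cos = sum(a * b for a, b in zip(pointer_fwd, to_obj)) / (fwd_len * dist)
    angle = math.acos(max(-1.0, min(1.0, cos)))
    if angle > ANGLE_THRESHOLD:
        return None                # minimum angle threshold: nothing is hovered
    score = angle * dist           # weighted angle: nearer objects are easier
    score /= obj["radius"]         # larger spheres are easier to select
    if obj["owned"]:
        score /= 2.0               # double weight for the player's own spheres
    return score

def pick_object(pointer_pos, pointer_fwd, objects):
    """Return the best candidate, or None if nothing is within the threshold."""
    scored = [(selection_score(pointer_pos, pointer_fwd, o), o) for o in objects]
    scored = [(s, o) for s, o in scored if s is not None]
    return min(scored, key=lambda t: t[0])[1] if scored else None
```

Each tweak is just another multiplicative factor on the base angle-times-distance score, which makes the scheme easy to tune per game.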

In my experience, this works very well!

I highly recommend this method, especially combined with very obvious highlighting when "hovering" over an object. The danger of a "forgiving" selection algorithm is that the player might accidentally select something that they aren't directly pointing at. Using a more intuitive-feeling algorithm is enormously helpful, but you still need to communicate clearly what is being selected.

Let me know if you use the weighted angle algorithm in your game! I'm curious to know if this helped.  :)

E McNeill (@E_McNeill)
