I've been interested in game development for almost 20 years, but have not yet completed a game project. While much of my code is fairly useless, this algorithm may be of interest. I consider it original work (I certainly haven't seen it anywhere else, and I developed it alone), but I can't guarantee that it hasn't been developed independently elsewhere.

In a perspective game engine, we generally have a viewpoint located within an environment containing a potentially large number of objects. If the environment is very large, the majority of objects may be too far from the viewpoint to be visible. This is independent of the observation that only a small fraction of objects will actually lie within the field of view.

To avoid slowdown from processing a large number of objects that are too far away to be visible, we require a fast algorithm to eliminate distant objects. Ideally, we would like to be able to set a maximum view distance independently for each object. This allows the environment to contain a vast number of very small objects that are visible only at close range, with little or no performance penalty.

Our algorithm has runtime logarithmic in the dimensions of the environment, and (subject to an approximately uniform distribution of local objects) linear in the number of objects returned. Its runtime is completely independent of arbitrarily dense clusters of remote objects, provided they are located at least twice their maximum view distance from the viewpoint in any one dimension.

While the algorithm as presented applies only to a single dimension, it extends trivially to more dimensions. The sample code is for the three-dimensional case.

We have a collection of spheres, each with a centre and a radius. Obviously in the one and two-dimensional cases these are intervals and circles, respectively. The centre may be a vector with arbitrary dimensions, although for this discussion we consider only a single dimension. The radius is a simple scalar. We place these spheres in a container based on a binary tree (in the two and three dimensional cases, we use a quadtree or octree, respectively).

In addition to position and radius, each sphere keeps a reference to the node it currently belongs to (if any), and a linked list node object that is required for constant-time deletion; the exact details depend on the language used. These are only needed if objects are to be removed from the structure.

Each node in the tree stores spheres with centres within a given range of positions, and a specific range of radii. These are stored in a doubly-linked list, which allows for constant-time insertion and removal once the correct node is located. Suppose a node can store spheres with centres between *xmin* and *xmax*. Then it is allowed to contain spheres with radius between *(xmax - xmin) / 4* and *(xmax - xmin) / 2*. If a sphere is smaller than the minimum allowed radius, then it is instead added to a child node with half the range (and therefore half the minimum and maximum radius). If a sphere is larger than the maximum allowed radius, it must be stored at a higher level in the tree. Note that this means we can never store a sphere with radius greater than half the size of the root node. Child nodes are created recursively and lazily.
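The storage rule for a single node can be sketched as a small decision function. This is a one-dimensional illustration of my own (not the sample code), with arbitrary names:

```python
# Illustrative sketch of the storage rule for a node covering [xmin, xmax):
# a sphere is stored in this node only if its radius lies in [size/4, size/2].

def placement(xmin, xmax, centre, radius):
    """Decide where a sphere belongs relative to a node covering [xmin, xmax)."""
    size = xmax - xmin
    if centre < xmin or centre >= xmax or radius > size / 2:
        return "parent"        # out of range or too large: pass upwards
    if radius < size / 4:      # too small: pass to a child with half the range
        return "left child" if centre < (xmin + xmax) / 2 else "right child"
    return "here"              # radius within [size/4, size/2]: store it here
```

For example, a node covering *[0, 8)* stores spheres of radius 2 to 4 directly, sends smaller spheres to a child, and escalates anything larger than 4 to its parent.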

A sphere may be inserted into an arbitrary node of the structure. If the position of the object is outside the range *xmin* to *xmax*, or its radius is greater than *(xmax - xmin) / 2*, then it is passed to the parent node (if one exists). Otherwise, if the radius is less than *(xmax - xmin) / 4*, it is passed to the child node covering either *xmin* to *(xmin + xmax) / 2* or *(xmin + xmax) / 2* to *xmax*, according to the sphere's position. If that child does not exist, it is created. If the sphere is not passed to another node in one of these two ways, then it is added to the doubly linked list of spheres belonging to this node.
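The insertion procedure might look like the following one-dimensional sketch (illustrative only; a Python list stands in for the doubly linked list, and child nodes are created lazily):

```python
# Hypothetical one-dimensional sketch of insertion: spheres are passed up
# if too large or out of range, down if too small, and stored here otherwise.

class Node:
    def __init__(self, xmin, xmax, parent=None):
        self.xmin, self.xmax = xmin, xmax
        self.parent = parent
        self.children = [None, None]   # [left, right], created lazily
        self.spheres = []              # stands in for the doubly linked list

    def insert(self, centre, radius):
        """Insert a sphere starting from this (arbitrary) node.
        Returns the node the sphere ends up in."""
        size = self.xmax - self.xmin
        if centre < self.xmin or centre >= self.xmax or radius > size / 2:
            if self.parent is None:
                raise ValueError("sphere does not fit in the root node")
            return self.parent.insert(centre, radius)      # escalate
        if radius < size / 4:                              # descend
            mid = (self.xmin + self.xmax) / 2
            i = 0 if centre < mid else 1
            if self.children[i] is None:                   # lazy creation
                bounds = (self.xmin, mid) if i == 0 else (mid, self.xmax)
                self.children[i] = Node(*bounds, parent=self)
            return self.children[i].insert(centre, radius)
        self.spheres.append((centre, radius))              # store here
        return self
```

Note that inserting into an arbitrary node is safe: a sphere that does not fit is simply escalated to the parent before descending again.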

Runtime of insertion is *O(log(S) - log(r))* if the sphere is inserted into the root node, where *S* is the size of the root node and *r* is the radius of the sphere to be inserted. If the sphere is inserted into a different node *N*, the runtime is constant in the best case, and *O(2 log(S) - log(r) - log(r'))* in the worst case, where *r'* is the radius allowed by node *N*.

Querying the data structure requires a single point input *p*. The query returns all spheres that contain *p*, as well as an arbitrary number of nearby spheres that do not contain *p*. In typical use, the number of additional spheres will be a small constant factor of the number of useful spheres. These may be quickly filtered out by computing the distance between *p* and the centre of each sphere.
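In one dimension, that filtering step is a single comparison per candidate; a sketch:

```python
def filter_hits(p, candidates):
    """Keep only the spheres (centre, radius) that actually contain p."""
    return [(c, r) for (c, r) in candidates if abs(p - c) <= r]
```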

When a node is queried, all spheres that it contains are returned. For each child node, let *xmin'* and *xmax'* be the range of the child. A child node is queried if and only if *p* lies between *xmin' - rmin* and *xmax' + rmin* (to clarify: *xmin'* and *xmax'* are the minimum and maximum coordinates of the *child* node; *rmin* is the minimum radius allowed by the *parent* node, which is also the largest radius any sphere in the child's subtree can have). This means that there may be up to two recursive calls from this node. If we look at the child nodes of adjacent parent nodes, we can easily see that there is no input value for which more than two child nodes will be queried from any node (provided we use strict inequality for one comparison). Therefore, no more than two nodes will ever be queried at any given level: if a node recursively queries two child nodes, then no other node will make recursive queries, and if a node recursively queries one child node, then at most one other node will make at most one recursive query. Generalising to the *d*-dimensional case, at most *2^d* nodes may be queried at any given level.
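A one-dimensional sketch of the query (illustrative only; the margin is a quarter of the node's size, since no sphere stored below a child can have a larger radius, and so cannot extend further than that beyond the child's bounds):

```python
# Illustrative query walk. The cut-down insert assumes each sphere fits
# the node it is called on (centre in range, radius at most half the size).

class Node:
    def __init__(self, xmin, xmax):
        self.xmin, self.xmax = xmin, xmax
        self.children = [None, None]    # [left, right]
        self.spheres = []               # (centre, radius) pairs

    def insert(self, centre, radius):
        size = self.xmax - self.xmin
        if radius < size / 4:           # too small: descend
            mid = (self.xmin + self.xmax) / 2
            i = 0 if centre < mid else 1
            if self.children[i] is None:
                bounds = (self.xmin, mid) if i == 0 else (mid, self.xmax)
                self.children[i] = Node(*bounds)
            self.children[i].insert(centre, radius)
        else:
            self.spheres.append((centre, radius))

    def query(self, p):
        results = list(self.spheres)    # everything stored here is returned
        # largest radius anywhere below a child is a quarter of this node's
        # size, so a child is visited only when p is within that margin
        margin = (self.xmax - self.xmin) / 4
        for child in self.children:
            if child and child.xmin - margin < p < child.xmax + margin:
                results += child.query(p)
        return results
```

The returned list still needs the distance filter described above, since nearby spheres that do not actually contain *p* are included.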

This limitation on the number of recursive calls means that the runtime of the algorithm is linear in the number of levels of recursion, and in the total number of objects contained in all the nodes that are queried. Let *S* be the size of the top-level node, *s* the radius of the smallest sphere within twice its radius of *p* (in any one dimension), and *n* the number of objects returned by the query (before any filtering). The runtime of the query is *O(max{n, log(S) - log(s)})*.

Since each sphere contains a reference to the node it belongs to and a linked list node, it can be trivially deleted in constant time. After the sphere has been removed, some cleanup is also necessary: if the node contains no spheres and has no children, then it is deleted from its parent node. This process is repeated for the parent node. This cleanup is not necessary for correct operation of the data structure, but reduces memory usage. It is also required to ensure that the runtime of querying is based on the smallest sphere locally instead of the smallest globally.
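A sketch of deletion with cleanup, under the layout described above where each sphere records the node holding it (a Python list stands in for the doubly linked list; with a real linked list the removal itself is constant time):

```python
# Illustrative deletion: detach the sphere via its node reference, then
# prune any nodes that are now empty and childless, walking upwards.

class Node:
    def __init__(self, xmin, xmax, parent=None):
        self.xmin, self.xmax = xmin, xmax
        self.parent = parent
        self.children = [None, None]
        self.spheres = []

class Sphere:
    def __init__(self, centre, radius):
        self.centre, self.radius = centre, radius
        self.node = None               # set when inserted

def delete(sphere):
    node = sphere.node
    node.spheres.remove(sphere)        # O(1) with a real doubly linked list
    sphere.node = None
    # cleanup: delete nodes that are now empty and have no children
    while node.parent and not node.spheres and not any(node.children):
        parent = node.parent
        parent.children[parent.children.index(node)] = None
        node = parent
```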

Runtime of deletion is logarithmic in the worst case (when removing a small sphere from a very sparsely populated structure), and constant in the best case (when the structure is densely populated so no nodes are removed). It is never worse than *O(log(S) - log(r))*. In the particular application of view distance, there are usually a vast number of overlapping spheres, so deletion is almost always constant time.

Spheres can be modified trivially by removing, mutating and reinserting them. This is generally a logarithmic time operation. However, better performance can be obtained by allowing in-place modifications. The object is removed from its node without performing any cleanup. Its position and radius are modified, and then it is reinserted into the same node. Cleanup is performed as the final step.

The advantage of this approach is that minor adjustments to a sphere are very fast. If the updated sphere still belongs in the same node, it is a constant time operation. Provided the updated sphere is the same size and close to its original position (at most a constant multiple of its radius), the average runtime is still constant (proof by induction omitted, due to the limited ability of HTML to display formulae).
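The in-place update might be sketched as follows in one dimension (illustrative only; the helper names are my own):

```python
# Sketch of in-place modification: detach without cleanup, mutate,
# reinsert starting from the old node, and clean up as the final step.

class Node:
    def __init__(self, xmin, xmax, parent=None):
        self.xmin, self.xmax = xmin, xmax
        self.parent = parent
        self.children = [None, None]
        self.spheres = []

    def insert(self, sphere):
        size = self.xmax - self.xmin
        if (sphere.centre < self.xmin or sphere.centre >= self.xmax
                or sphere.radius > size / 2):
            return self.parent.insert(sphere)   # assumes it fits the root
        if sphere.radius < size / 4:
            mid = (self.xmin + self.xmax) / 2
            i = 0 if sphere.centre < mid else 1
            if self.children[i] is None:
                bounds = (self.xmin, mid) if i == 0 else (mid, self.xmax)
                self.children[i] = Node(*bounds, parent=self)
            return self.children[i].insert(sphere)
        self.spheres.append(sphere)
        sphere.node = self
        return self

    def cleanup(self):
        node = self
        while node.parent and not node.spheres and not any(node.children):
            parent = node.parent
            parent.children[parent.children.index(node)] = None
            node = parent

class Sphere:
    def __init__(self, centre, radius):
        self.centre, self.radius = centre, radius
        self.node = None

def move(sphere, new_centre, new_radius):
    old = sphere.node
    old.spheres.remove(sphere)          # detach, but defer cleanup
    sphere.centre, sphere.radius = new_centre, new_radius
    old.insert(sphere)                  # reinsert starting from the old node
    old.cleanup()                       # cleanup is performed last
```

A small move leaves the sphere in the same node and touches nothing else; a large move escalates to a common ancestor and descends again, with the old branch pruned afterwards.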

This data structure was developed as a way of quickly determining visibility of objects in a very large 3D environment. Each object is stored in the data structure with a radius corresponding to the maximum distance at which it is visible. The query allows objects that are close enough to be visible to be found very quickly.

A similar application is that of player detection ("aggro radius"). In this situation, each object represents a creature, and the radius represents the maximum distance at which it can sense the player. This is more useful in a single-player environment than multi-player, where the distance at which a creature can sense a player may depend on the player (we can still use the algorithm by setting the radius to the maximum possible aggro radius for any player; the query will just return a larger fraction of unwanted results).

Some applications may also be found in collision detection. If we set the radius of each sphere to be the sum of the radii of the object's and the player's bounding spheres, then we can query the algorithm to find candidate objects that the player might have hit. In principle we could generalise this to, say, all creatures in a game, provided they all have bounding spheres of the same radius. Once again, we can get around this restriction by using the maximum bounding sphere of any creature, and relying on CPU power to filter out the extra unwanted results.
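The sum-of-radii trick works because two spheres overlap exactly when the distance between their centres is at most the sum of their radii, so the player's centre lies inside the "inflated" stored sphere exactly when the two bounding spheres collide. In one dimension:

```python
# Equivalence behind the collision-detection trick (1-D illustration).

def spheres_overlap(c1, r1, c2, r2):
    """True when spheres (c1, r1) and (c2, r2) intersect."""
    return abs(c1 - c2) <= r1 + r2

def point_in_inflated_sphere(obj_centre, obj_radius, player_centre, player_radius):
    """What the data structure tests when we store the object with
    radius obj_radius + player_radius and query at the player's centre."""
    return abs(player_centre - obj_centre) <= obj_radius + player_radius
```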

Generally, this algorithm is useful in any situation where we want to test a point against a very large number of spheres.

There is no reason we cannot consider the objects in the data structure to be bounding cubes rather than bounding spheres, but they need to be aligned with the axes of the data structure. In practice, it is likely to be quicker to check intersection with a sphere than a cube (and at least part of it can be easily vectorised), but this may be processor-dependent. We cannot generalise to other bounding shapes without loss of efficiency: in principle we could use three independent one-dimensional structures to store bounding cuboids, with the result set being the intersection of three queries, but the proportion of unwanted results from each structure would cease to be constant.

If the minimum radius is known, and a large number of minimum radius objects will be distributed uniformly (and quite densely) across the entire range of the data structure, then a series of arrays can be used instead of this recursive structure. An array implementation still has logarithmic query time, but always has constant insertion, removal and modification time. The trade-off is a vast increase in memory usage if there are a relatively small number of small objects.
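One way the array variant might look, assuming all radii lie between a known minimum and half the structure's size (this is my own interpretation of the idea; the author's layout may differ). Each level is a flat array of equal cells whose size matches the radius band it stores, so locating the right cell is pure arithmetic, which is what makes insertion and removal constant time:

```python
import math

class GridLevels:
    """Illustrative array-based variant over [0, size): level k is an
    array of 2**k cells of width size/2**k, holding spheres whose radius
    lies in [size/2**(k+2), size/2**(k+1)), mirroring the tree's rule."""

    def __init__(self, size, rmin):
        self.size = size
        # deepest level needed so that rmin falls within some radius band
        self.depth = max(1, math.ceil(math.log2(size / rmin)) - 1)
        self.levels = [[[] for _ in range(2 ** k)] for k in range(self.depth)]

    def _level_for(self, radius):
        k = math.ceil(math.log2(self.size / radius)) - 2
        return max(0, min(self.depth - 1, k))

    def insert(self, centre, radius):
        # constant time: the level and cell are computed directly
        k = self._level_for(radius)
        cell = int(centre / (self.size / 2 ** k))
        self.levels[k][cell].append((centre, radius))

    def query(self, p):
        results = []
        for k, level in enumerate(self.levels):
            cell_size = self.size / 2 ** k
            # a sphere can extend at most half a cell beyond its own cell,
            # so at most two cells per level need checking (as in the tree)
            lo = int((p - cell_size / 2) // cell_size)
            hi = int((p + cell_size / 2) // cell_size)
            for cell in range(max(0, lo), min(len(level) - 1, hi) + 1):
                results.extend(level[cell])
        return results
```

As with the tree, the query returns some nearby spheres that do not contain *p*, and these are filtered by a distance check.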

Of course, to some extent an algorithm this complex is unnecessary. Modern processors are so fast that they can use much simpler structures with a much higher proportion of false positives, and rely on brute force to solve the problem.

Please note that this code is intended purely to clarify the above description: it is neither public domain nor open source. You may use this code as a reference or for educational purposes, but you may not further distribute it or incorporate it directly into your own projects. The algorithms are not copyrighted, but the code itself is.

**A Fast Algorithm for Adaptive View Distance**