API Overview

The Leap Motion system detects and tracks hands, fingers, and finger-like tools. The device operates at close range with high precision and a high tracking frame rate.

The Leap Motion software analyzes the objects observed in the device's field of view. It recognizes hands, fingers, and tools, reporting discrete positions, gestures, and motion. The Leap Motion field of view is an inverted pyramid centered on the device. The effective range of the Leap Motion Controller extends from approximately 25 to 600 millimeters above the device (1 inch to 2 feet).

Coordinate system

The Leap Motion system employs a right-handed Cartesian coordinate system. The origin is centered at the top of the Leap Motion Controller. The x- and z-axes lie in the horizontal plane, with the x-axis running parallel to the long edge of the device. The y-axis is vertical, with positive values increasing upwards (in contrast to the downward orientation of most computer graphics coordinate systems). The z-axis has positive values increasing toward the user.


The Leap Motion right-handed coordinate system.
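Because the y-axis points up while most 2D screen coordinate systems grow downward, applications typically flip the vertical axis when mapping Leap positions to the screen. The sketch below illustrates one way to do this; the function name and interaction ranges are illustrative assumptions, not part of the Leap API:

```python
def leap_to_screen(x_mm, y_mm, width_px, height_px,
                   x_range=(-200.0, 200.0), y_range=(25.0, 600.0)):
    """Map a Leap-space (x, y) position in millimeters to pixel
    coordinates. The y-axis is inverted because Leap y grows upward
    while screen y grows downward. The ranges are illustrative."""
    x_min, x_max = x_range
    y_min, y_max = y_range
    # Normalize each axis into [0, 1] and clamp to the interaction range.
    nx = min(max((x_mm - x_min) / (x_max - x_min), 0.0), 1.0)
    ny = min(max((y_mm - y_min) / (y_max - y_min), 0.0), 1.0)
    return nx * width_px, (1.0 - ny) * height_px

# A position at the center of the assumed ranges maps to screen center.
print(leap_to_screen(0.0, 312.5, 800, 600))  # → (400.0, 300.0)
```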

The Leap Motion API measures physical quantities with the following units:

Distance: millimeters
Time: microseconds (unless otherwise noted)
Speed: millimeters/second
Angle: radians
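Since timestamps arrive in microseconds and positions in millimeters, converting to more familiar units is often the first step in processing tracking data. A small helper sketch (the function names are illustrative):

```python
MICROSECONDS_PER_SECOND = 1_000_000
MM_PER_INCH = 25.4

def frame_interval_seconds(t0_us, t1_us):
    """Elapsed time between two frame timestamps (microseconds), in seconds."""
    return (t1_us - t0_us) / MICROSECONDS_PER_SECOND

def mm_to_inches(mm):
    """Convert a Leap distance in millimeters to inches."""
    return mm / MM_PER_INCH

# The 600 mm top of the effective range is roughly two feet.
print(round(mm_to_inches(600.0), 1))  # → 23.6
```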

Motion tracking data

As the Leap Motion Controller tracks hands, fingers, and tools in its field of view, it provides updates as a set, or frame, of data. Each Frame object representing a frame contains lists of the basic tracking data, such as hands, fingers, and tools, as well as recognized gestures and factors describing the overall motion in the scene. The Frame object is essentially the root of the Leap Motion data model.

To read more about Frames, see Getting Frame Data.
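The Frame-as-root structure can be pictured with plain data classes. This is an illustrative mock of the data model, not the Leap SDK's actual classes:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vector = Tuple[float, float, float]  # (x, y, z) in millimeters

@dataclass
class Hand:
    palm_position: Vector
    finger_tips: List[Vector] = field(default_factory=list)

@dataclass
class Frame:
    """Root of the data model: one snapshot of everything in view."""
    id: int
    timestamp_us: int  # microseconds, per the units above
    hands: List[Hand] = field(default_factory=list)

frame = Frame(id=1, timestamp_us=16_000,
              hands=[Hand(palm_position=(0.0, 200.0, 50.0),
                          finger_tips=[(10.0, 220.0, 30.0)])])
print(frame.id, len(frame.hands), len(frame.hands[0].finger_tips))  # → 1 1 1
```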


Hands

The hand model provides information about the position, characteristics, and movement of a detected hand, as well as lists of the fingers and tools associated with it.

More than two hands can appear in the hand list for a frame if more than one person’s hands or other hand-like objects are in view. However, we recommend keeping at most two hands in the Leap Motion Controller’s field of view for optimal motion tracking quality.

Fingers and Tools

The Leap Motion Controller detects and tracks both fingers and tools within its field of view. The Leap Motion software classifies finger-like objects according to shape. A tool is longer, thinner, and straighter than a finger.

In the Leap Motion model, the physical characteristics of fingers and tools are abstracted into a Pointable object. Fingers and tools are types of Pointable objects.
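The abstraction can be sketched as a small class hierarchy. The classification heuristic below (length-to-width ratio) is an illustrative assumption, not the software's actual classifier:

```python
class Pointable:
    """Attributes shared by fingers and tools."""
    def __init__(self, length_mm, width_mm):
        self.length_mm = length_mm
        self.width_mm = width_mm

class Finger(Pointable):
    pass

class Tool(Pointable):
    pass

def classify(length_mm, width_mm, ratio_threshold=5.0):
    """Illustrative heuristic: a tool is longer and thinner than a
    finger, so a high length-to-width ratio suggests a tool."""
    if length_mm / width_mm > ratio_threshold:
        return Tool(length_mm, width_mm)
    return Finger(length_mm, width_mm)

print(type(classify(150.0, 8.0)).__name__)   # → Tool  (e.g. a pen)
print(type(classify(55.0, 16.0)).__name__)   # → Finger
```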


Finger tipPosition and direction vectors provide the positions of the finger tips and the directions in which the fingers are pointing.

The Leap Motion software classifies a detected pointable object as either a finger or a tool.


A tool is longer, thinner, and straighter than a finger.


Gestures

The Leap Motion software recognizes certain movement patterns as gestures that could indicate a user intent or command. Gestures are observed for each finger or tool individually. The Leap Motion software reports gestures observed in a frame in the same way that it reports other motion tracking data, such as fingers and hands.

The following movement patterns are recognized by the Leap Motion software:

Circle — A finger tracing a circle.
Swipe — A long, linear movement of a finger.
Key Tap — A tapping movement by a finger, as if tapping a keyboard key.
Screen Tap — A tapping movement by a finger, as if tapping a vertical computer screen.

Important: Before using gestures in your application, you must enable recognition for each gesture type you intend to use. The Controller class has an enableGesture() method that enables recognition for the gesture types you specify.
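The opt-in pattern can be illustrated with a minimal sketch. The names below mimic, but are not, the actual Leap API; recognition runs only for gesture types the application has enabled:

```python
# Illustrative gesture-type constants (not the SDK's identifiers).
CIRCLE, SWIPE, KEY_TAP, SCREEN_TAP = "circle", "swipe", "key_tap", "screen_tap"

class Controller:
    """Mock controller illustrating per-type gesture enablement."""
    def __init__(self):
        self._enabled = set()

    def enable_gesture(self, gesture_type):
        self._enabled.add(gesture_type)

    def is_gesture_enabled(self, gesture_type):
        return gesture_type in self._enabled

controller = Controller()
controller.enable_gesture(SWIPE)
print(controller.is_gesture_enabled(SWIPE))   # → True
print(controller.is_gesture_enabled(CIRCLE))  # → False (never enabled)
```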


Motions

Motions are estimates of the basic types of movement inherent in the change of a user's hands over a period of time. Motions include scale, rotation, and translation (change in position).


Motions are computed between two frames. You can get the motion factors for the scene as a whole from a Frame object. You can also get factors associated with a single hand from a Hand object.

You can use the reported motion factors to design interactions within your application. For example, instead of tracking the change in position of individual fingers across several frames of data, you could use the scale factor computed between two frames to let the user change the size of an object.
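The resize interaction can be sketched as follows. Computing the scale factor from finger spread is an illustrative stand-in for the value the API reports; the function names are assumptions:

```python
import math

def spread(finger_tips):
    """Average distance of finger tips from their centroid (millimeters)."""
    n = len(finger_tips)
    cx = sum(p[0] for p in finger_tips) / n
    cy = sum(p[1] for p in finger_tips) / n
    cz = sum(p[2] for p in finger_tips) / n
    return sum(math.dist(p, (cx, cy, cz)) for p in finger_tips) / n

def scale_factor(tips_before, tips_after):
    """Ratio of finger spread between two frames; > 1 means fingers moved apart."""
    return spread(tips_after) / spread(tips_before)

before = [(-20.0, 200.0, 0.0), (20.0, 200.0, 0.0)]
after = [(-30.0, 200.0, 0.0), (30.0, 200.0, 0.0)]  # fingers spread apart

object_size = 100.0 * scale_factor(before, after)  # resize a 100-unit object
print(object_size)  # → 150.0
```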

Motion Type: Scale
Frame: Frame scaling reflects the motion of scene objects toward or away from each other (for example, one hand moves closer to the other).
Hand: Hand scaling reflects the change in finger spread.

Motion Type: Rotation
Frame: Frame rotation reflects differential movement of objects within the scene (for example, one hand moves up and the other down).
Hand: Hand rotation reflects the change in orientation of a single hand.

Motion Type: Translation
Frame: Frame translation reflects the average change in position of all objects in the scene (for example, both hands move to the left, up, or forward).
Hand: Hand translation reflects the change in position of that hand.