Leap Motion C API  3.1
The API to the LeapC library.
Setting Up Virtual and Augmented Reality Scenes

Setting up a 3D scene to display hands controlled by the Leap Motion device in the different available Virtual Reality (VR) APIs involves similar steps – though there are differences in how much an API or engine does for you. In general, these steps include:

  • Mounting the Leap Motion device on the HMD, if using the peripheral device.
  • Calculating the transforms that describe the difference in position and orientation between mounting the Leap Motion device on the HMD and placing it on a table.
  • Accessing the Leap Motion API to get tracking data.
  • Calculating the transforms necessary to convert from the Leap Motion right-handed coordinate system to your target coordinate system (if the coordinate systems are different).
  • Translating coordinates for positions, directions, and orientations from the Leap Motion coordinate system into the target world coordinate system.
  • For Augmented Reality (AR), rendering the Leap Motion sensor images.

Mounting the Leap Motion Device

To use the older Leap Motion peripheral in a VR context, you must first devise a way to mount it on your HMD.


For the resourceful, there is no end of ways to attach the Leap Motion sensor to a head-mounted display: double-sided tape, velcro, rubber bands. For the rest of us, Leap Motion sells a custom mount. The 3D printer files are also available (free) if you prefer to print your own; you can download them from Thingiverse and GrabCAD. The Leap Motion mount was designed with the Oculus Rift DK1 and DK2 in mind, but since it uses double-sided tape, it can work with other HMDs as well – as long as they have a fairly flat, Leap-sized area in the front middle.

Note: We recommend that you mount the Leap Motion device with the green power-indicator LED facing downward. While the Leap Motion software can flip the coordinate system and images when the device is mounted with the indicator facing upward, it can only do so after a hand enters the view. In the meantime (or if auto-orientation is turned off in the Control Panel), the images from the cameras can appear upside down.

Interpupillary Offset

Whether you mount a peripheral device on an HMD or use an HMD with an embedded device, measure the offsets in all dimensions between the Leap Motion origin and the midpoint of the line running from pupil to pupil (this line is referred to as the interpupillary line). These measurements are required to place 3D hands correctly in the scene. The device should be mounted square to the HMD so that the cameras face straight ahead and level to the horizon when you are looking straight ahead and level.


The matrix that represents the offset of the Leap Motion device from the user's interpupillary and medial lines looks like the following, where \(t_{x}\), \(t_{y}\), and \(t_{z}\) are your measured offsets:

\[ T_{mounted} = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

Typical values for the translation components on an Oculus Rift are \(t_{x} = 0\), \(t_{y} = 0\), and \(t_{z} = -80\) (in millimeters). When mounting your device, strive to keep \(t_{x}\) and \(t_{y}\) as close to zero as possible.

If your Leap Motion device is mounted square to the HMD with the device's y-axis projecting forward, you can specify the rotation of the device from upward-facing desktop mode to forward-facing HMD mode with this matrix, which represents a -90 degree rotation around the x-axis followed by a 180 degree rotation around the z-axis:

\[ R_{mounted} = \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 0 &-1 & 0 \\ 0 &-1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

Combine the two matrices into one transform by multiplication:

\[ M_{tabletop{\mapsto}mounted} = T_{mounted} \times R_{mounted} \]

Using a Proxy Object

An alternate method of achieving this goal is to place a proxy object representing the Leap Motion device at the same relative position and orientation to the virtual world cameras as the device has to your eyes in the real world. This method is easier to use in graphic development environments like Unity and Unreal. The VR assets in the Leap Motion Unity asset package take this approach.


A proxy object for the Leap Motion device placed in relationship to the left- and right-eye cameras.

You can then transform the Leap Motion coordinates using the model matrix of the proxy object.

Note that this doesn't account for coordinate system and unit conversions, which we will discuss later.

Angling the Mount

If you angle your mount relative to the HMD, you must measure the angles and apply the same rotations to the tracking data. Angling the Leap Motion device downward can provide a more comfortable working space, but this also causes visual disorientation when using the video passthrough from the device cameras. Thus, an angled mount could restrict the types of applications you can develop or use.

You can represent mount rotations as the following matrices, where \(\alpha\), \(\beta\), \(\gamma\) are your angles measured counterclockwise (right-hand rule) around the x, y, and z axes:

\[ R_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos(\alpha) & -\sin(\alpha) & 0 \\ 0 & \sin(\alpha) & \cos(\alpha) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \; R_y = \begin{bmatrix} \cos(\beta) & 0 & \sin(\beta) & 0 \\ 0 & 1 & 0 & 0 \\ -\sin(\beta) & 0 & \cos(\beta) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \; R_z = \begin{bmatrix} \cos(\gamma) & -\sin(\gamma) & 0 & 0 \\ \sin(\gamma) & \cos(\gamma) & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

Combine the rotations by multiplying these matrices:

\[ R_{mounted} = R_{x} \times R_{y} \times R_{z} \]

Finally, you can combine the translations for the mount offset and rotations for the mount angles into one transform by multiplying the matrices:

\[ M_{tabletop{\mapsto}mounted} = T_{mounted} \times R_{mounted} \]

Converting Coordinate Systems

The Leap Motion coordinate system uses a right-handed convention and units of millimeters: positive y is up, the z-axis runs front-to-back (positive toward the user), and the x-axis runs side-to-side. Your world might use a different set of conventions and units. For example, Unity3D uses a left-handed convention and units of meters, also with positive y up. Unreal Engine uses a left-handed convention and units of centimeters, with positive z up. Three.js, a popular WebGL library, uses the same conventions as the Leap Motion device, but has no intrinsic convention for units.

*Note:* the scripts in the Unity asset package and the Unreal Engine plugin transform the coordinate systems automatically.

Scaling from Leap Motion coordinates to a unit system in meters can be achieved with the following matrix (change the scale factor from 0.001 to the appropriate value for other linear units):

\[ S_{mm{\mapsto}m} = \begin{bmatrix} 0.001 & 0 & 0 & 0 \\ 0 & 0.001 & 0 & 0 \\ 0 & 0 & 0.001 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

Note that these scale factors assume that your game uses a 1:1 scale with reality. This is the default for Unity3D and Unreal Engine; for example, the standard Unity character controller prefabs are human scale (between 1 and 2 meters tall). Matching the scale accurately is important in a VR scene, since the user has a much better sense of where their virtual hands should be in comparison to their real hands. In an AR scene, this can be even more critical.

Changing the basis of the coordinate system can also be accomplished with a matrix transform. For example, the following matrix changes Leap coordinates and directions to the Unity left-handed convention (by scaling the z-axis by -1):

\[ S_{RH{\mapsto}LH} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

Head Motion

In addition to the physical offset of the device from the user's eyes, you must compensate for the motion of the user's head while the scene is playing. Otherwise, the hands will appear to move when the head moves. Head-tracking information can be accessed from the head-tracking APIs of your HMD or VR SDK. You want to anchor the Leap tracking data to the midpoint of the interpupillary line – the line between the two cameras in the virtual world. Typically, the head-tracking information comes in the form of another transform matrix. For example, in the Oculus API, you can get the head-tracking data relative to this point from the HeadPose member returned by ovr_GetTrackingState():

ovrPosef headPose = ovr_GetTrackingState(...).HeadPose.ThePose;
OVR::Matrix4f translation = OVR::Matrix4f::Translation(headPose.Position);
OVR::Matrix4f rotation = OVR::Matrix4f(headPose.Orientation);
OVR::Matrix4f hmdToWorld = translation * rotation;

If you are using a proxy object that is parented to a scene object already driven by the HMD head tracking, then this step has already been done for you.

Accessing Tracking Data

Accessing the basic tracking data from the Leap Motion service is no different in a VR app than in any other. Do note that when the Leap Motion sensor is mounted on an HMD, you should set the eLeapPolicyFlag_OptimizeHMD policy. This policy instructs the Leap Motion software to expect hands to enter the field of view with their backs toward the cameras rather than their palms.
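With LeapC, you can request this policy once the connection has been created and opened; a minimal sketch (error handling elided; the function and flag names come from the LeapC header):

```c
#include <LeapC.h>

/* Request HMD-optimized tracking. The service confirms the change
   asynchronously via a policy event on the connection. */
void enable_hmd_policy(LEAP_CONNECTION connection) {
    eLeapRS result = LeapSetPolicyFlags(connection, eLeapPolicyFlag_OptimizeHMD, 0);
    if (result != eLeapRS_Success) {
        /* Handle the error, e.g. retry once the connection reports connected. */
    }
}
```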

Transforming Tracking Data

Finally, once the Leap Motion hardware is mounted and measured, the scene is set up, and the basic design decisions are made, you can transform the Leap tracking data into the scene so that hands (or interactions, if not actually showing hands) appear in the correct place. Essentially, you take all the matrices defined above that are relevant to your graphics system, multiply them together, and use the result to transform the Leap Motion tracking data. Precompute as much of this transformation as possible, since most of the matrices do not change from frame to frame.

\[ M_{Leap{\mapsto}World} = M_{HMD{\mapsto}World} \times S_{mm{\mapsto}m} \times S_{RH{\mapsto}LH} \times M_{tabletop{\mapsto}mounted} \]

For example, if your Leap device is mounted on an Oculus Rift DK2 without any odd angles and your world model uses a left-handed coordinate convention and units of meters, then you can take \(M_{tabletop{\mapsto}mounted}\) as defined above, with \(t_x = t_y = 0\) and \(t_z = -80\) and multiply it by the coordinate unit (mm to m) and basis change (right-handed to left-handed) matrices, and finally by the HMD transform:

\[ \begin{equation} M_{Leap{\mapsto}World} = M_{HMD{\mapsto}World} \times \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} 0.001 & 0 & 0 & 0 \\ 0 & 0.001 & 0 & 0 \\ 0 & 0 & 0.001 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 0 &-1 & 0 \\ 0 &-1 & 0 & -80 \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{equation} \]

Transform a Position

To transform a position in Leap Motion coordinates to world coordinates, multiply the homogeneous coordinate vector (with a w component of 1) by the \(M_{Leap{\mapsto}World}\) matrix.

Transform a Direction

Transforming a direction vector is very similar to transforming a position, except that you do not apply the scaling used to change the linear unit of measurement, and you set the homogeneous w component to 0 so that the translation part of the transform has no effect.

Note: If you are converting units, you must create a separate transform that omits the unit-scaling matrix:

\[ \begin{equation} M^{direction}_{Leap{\mapsto}World} = M_{HMD{\mapsto}World} \times S_{RH{\mapsto}LH} \times M_{tabletop{\mapsto}mounted} \end{equation} \]

Camera Placement in an AR scene

For a VR scene (without camera images), proper placement of the cameras is straightforward: each camera should be at the user's corresponding eyepoint.

For an AR scene (using camera images), proper placement is a judgment call. Images from the Leap Motion cameras are 2D representations taken from a particular point of view and, unlike 3D data, you cannot simply apply a transformation to change the view to a different vantage point. To make the camera images match the 3D tracking data, you must move the cameras forward by the same amount that the physical Leap Motion device sits in front of the user's eyes (i.e., 8 cm for the Oculus). If using the older Leap Motion peripheral device, you must also move them closer together, since the peripheral's cameras are 40 mm apart, while the typical distance between human pupils is 64 mm. However, moving the cameras closer together changes the stereo disparity and makes objects appear larger than they are in real life.

You can put the scene cameras at the user's pupils to maximize 3D accuracy or you can move them forward and closer to maximize alignment between the images and the 3D objects. You can also compromise and place the cameras between these two extremes. It really depends on which aspect is more important to your content.