So now we have the 3D assets placed on top of the backplate to make up an AR scene.
The phone's camera feeds live video to the processor. The device then determines 3D depth and placement using either a visual target or its internal sensors, and overlays the 3D assets onto the video feed, the backplate, for each frame. Each frame is reassessed to detect any change to the target within the image feed. This cycle repeats frame by frame, placing the 3D asset in the correct location and merging it with the backplate, and all of it needs to happen without perceivable lag.
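To make that cycle concrete, here is a minimal sketch of the per-frame loop as a Unity script. The `ITargetTracker` interface and the `virtualAsset` field are hypothetical stand-ins for whatever tracking backend and 3D asset you wire up; this is not a specific Vuforia API.

```csharp
using UnityEngine;

// Hypothetical stand-in for a tracking backend (image target, device
// sensors, or depth camera) that reports the target's pose each frame.
public interface ITargetTracker
{
    bool TryGetPose(out Vector3 position, out Quaternion rotation);
}

public class AROverlay : MonoBehaviour
{
    public GameObject virtualAsset;   // the 3D asset merged with the backplate
    public ITargetTracker tracker;    // assigned by whatever backend you use

    // Update runs once per rendered frame, mirroring the cycle above.
    void Update()
    {
        if (virtualAsset == null || tracker == null) return;

        if (tracker.TryGetPose(out Vector3 pos, out Quaternion rot))
        {
            // Target found: place the asset so it lines up with the live video.
            virtualAsset.transform.SetPositionAndRotation(pos, rot);
            virtualAsset.SetActive(true);
        }
        else
        {
            // Target lost this frame: hide the asset rather than show it misplaced.
            virtualAsset.SetActive(false);
        }
    }
}
```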
The Vuforia toolset is a set of Unity prefabs that support many different types of AR tracking. These include image targets, logos with specific design elements, physical objects such as toys or models, and text. While each of these target types has its limitations, there are so many of them that one will probably fit your needs.
For those who don't know what a prefab is: a prefab acts as a template from which you can create new object instances in the scene. Any edits made to a prefab asset are immediately reflected in all instances produced from it, but you can also override components and settings for each instance individually.
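As a quick illustration, here is a minimal sketch of creating prefab instances from a script, assuming a prefab asset has been assigned to the hypothetical `treasurePrefab` field in the Inspector:

```csharp
using UnityEngine;

public class PrefabSpawner : MonoBehaviour
{
    // Drag a prefab asset onto this field in the Inspector (the name is hypothetical).
    public GameObject treasurePrefab;

    void Start()
    {
        // Each Instantiate call produces a new instance of the prefab template.
        GameObject first  = Instantiate(treasurePrefab, Vector3.zero, Quaternion.identity);
        GameObject second = Instantiate(treasurePrefab, new Vector3(2f, 0f, 0f), Quaternion.identity);

        // A per-instance override: shrinking one instance leaves the prefab
        // asset and the other instance untouched.
        second.transform.localScale = Vector3.one * 0.5f;
    }
}
```

Edits made to the prefab asset itself still propagate to both instances, except where an instance has an override like the scale change above.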
We're going to use Vuforia for our AR tools, but there are many different software tools and techniques that could be used for AR tracking. The three main techniques are image targets, device sensors, and depth cameras.