author | Peter Bengtsson <mail@peterbe.com> | 2020-12-08 14:40:17 -0500 |
---|---|---|
committer | Peter Bengtsson <mail@peterbe.com> | 2020-12-08 14:40:17 -0500 |
commit | 33058f2b292b3a581333bdfb21b8f671898c5060 (patch) | |
tree | 51c3e392513ec574331b2d3f85c394445ea803c6 /files/zh-cn/web/api/webvr_api | |
parent | 8b66d724f7caf0157093fb09cfec8fbd0c6ad50a (diff) | |
initial commit
Diffstat (limited to 'files/zh-cn/web/api/webvr_api')
5 files changed, 1315 insertions, 0 deletions
diff --git a/files/zh-cn/web/api/webvr_api/concepts/index.html b/files/zh-cn/web/api/webvr_api/concepts/index.html new file mode 100644 index 0000000000..6cd552a0d4 --- /dev/null +++ b/files/zh-cn/web/api/webvr_api/concepts/index.html @@ -0,0 +1,406 @@ +--- +title: WebVR concepts +slug: Web/API/WebVR_API/Concepts +tags: + - Apps + - FOV + - Guide + - VR + - WebVR + - 位置 + - 加速度 + - 概念 + - 立体 + - 速度 +translation_of: Web/API/WebVR_API/Concepts +--- +<p>{{draft("The WebVR API documentation is currently being updated to cover the v1.0 spec, therefore some of this information will be out of date. Contact ~~chrisdavidmills if you have any questions about this work.")}}</p> + +<p class="summary">This article discusses some of the concepts and theory behind virtual reality (VR). If you are a newcomer to the area, it is worthwhile getting an understanding of these topics before you start diving into code.</p> + +<p class="summary">这篇文章探讨了一些关于虚拟现实(VR)的概念及其背后的理论基础。如果你是一个进入这个领域的新手,在你深入学习相关代码知识前,非常有必要对于以下的话题做一定的了解。<strong>【K】</strong></p> + +<h2 id="sect1"> </h2> + +<h2 id="The_history_of_VR_关于VR的历史【K】">The history of VR 关于VR的历史<strong>【K】</strong></h2> + +<p>Virtual reality is nothing new — the concept goes way further back than the Oculus Rift Kickstarter campaign of 2012. People have been experimenting with it for decades.</p> + +<p><strong>虚拟现实(VR)并不是一件新生的事物:这个概念甚至能追溯到,比2012年Oculus Rift在Kickstarter campaig上发起众筹,还要更早的时候。人们已经持续研发这种技术长达数十年。</strong></p> + +<p> </p> + +<p> </p> + +<p>In 1939 the <a href="https://en.wikipedia.org/wiki/View-Master">View-Master device</a> was created, allowing people to see 3D pictures. The device displayed images stored on cardboard disks containing stereoscopic 3D pairs of small color photographs. 
After years of development the military got interested in using such technology, and Project Headsight was born in 1961 — this involved a helmet incorporating a video screen with a head-tracking system.</p> + +<p><strong>1939年<a href="https://en.wikipedia.org/wiki/View-Master">View-Master device</a>问世,它允许人们通过它观看3D成像的照片。这款设备播放的是,存储在圆盘硬纸板上的,成对的立体3D的彩色小照片。经过了许多年的研发,军方开始对使用这项技术产生了浓厚的兴趣,终于在1961年,名为Headsight的项目诞生了:它包含了一个带有头部追踪系统和内置显示屏的头盔。</strong></p> + +<p> </p> + +<p><img alt="" src="http://end3r.com/tmp/vr/view-master.jpg" style="display: block; margin: 0px auto;"></p> + +<p>There were various experiments conducted over the next few decades, but it wasn't restricted to science labs and battlefields anymore. Eventually pop culture took over, with movie directors showing their visions of virtual reality. Movies like Tron (1982) and The Matrix (1999) were created, where people could transfer themselves into a whole new cyber world or were trapped in one without even knowing, accepting it as the real world.</p> + +<p><strong>在接下来的数十年中,人们进行了各种各样的实验,而且它不再局限于科学实验室和战场。最终,大众文化接过了虚拟现实的大旗,电影导演们展现了他们对虚拟现实的想象。像《电子世界争霸战》(Tron,1982)和《黑客帝国》(The Matrix,1999)这样的电影相继问世:在其中,人们可以把自己传送到一个全新的数字世界,或者在毫不知情的情况下被困于其中,并将它当做真实的世界。【K】</strong></p> + +<p><img alt="" src="http://end3r.com/tmp/vr/matrix.jpg" style="display: block; margin: 0px auto;"></p> + +<p>The first VR gaming attempts were big and expensive — in 1991 Virtuality Group created a VR-ready arcade machine with goggles and ported popular titles like Pac-Man to virtual reality. Sega introduced their VR glasses at the Consumer Electronics Show in 1993.
Companies were experimenting, but the market and consumers weren't convinced — we had to wait until 2012 to see a real example of a successful VR project.</p> + +<p><strong>世界上最早的VR游戏尝试既笨重又昂贵:1991年Virtuality Group制造了一台支持VR的街机,配有护目镜,并将《吃豆人》(Pac-Man)等热门游戏移植到了虚拟现实中。随后,世嘉株式会社(SEGA)在1993年的 Consumer Electronics Show 上展示了他们的VR眼镜。当时的公司都在努力尝试,但是市场和消费者并不买账:之后,我们再见到真正成功的VR项目,就要等到2012年了。【K】</strong></p> + +<p> </p> + +<p> </p> + +<p> </p> + +<h3 id="VR_in_recent_times_最近的VR发展【K】">VR in recent times 最近的VR发展<strong>【K】</strong></h3> + +<p>So what's new? Virtual Reality hardware needs to deliver high-precision, low-latency data to deliver an acceptable user experience; computers running VR applications need to be powerful enough to handle all this information. It has not been until recently that such accuracy and power has been available at an affordable cost, if at all. Early VR prototypes cost tens of thousands of dollars, whereas the latest <a href="https://www.oculus.com/rift/">Oculus Rift</a> developer kit is available for $350, and cheaper solutions are available, such as mobile device-based solutions like <a href="https://www.google.com/get/cardboard/">Google Cardboard</a>.</p> + +<p><strong>那么有什么新进展呢?VR硬件需要以低延迟传输高精度的数据,才能提供可接受的用户体验;运行VR应用的电脑,必须强大到足以处理这些庞大的信息。直到最近几年,如此高的精度和性能才能以大众可以接受的价格买到。早期的VR原型设备需要花费数万美元,然而最新的<a href="https://www.oculus.com/rift/">Oculus Rift</a> developer kit仅售350美元,并且还有更加便宜的解决方案,比如基于手机的VR设备,像是<a href="https://www.google.com/get/cardboard/">Google Cardboard</a>。【K】</strong></p> + +<p> </p> + +<p> </p> + +<p>By 2015, VR technology had gained commercial support from major companies.
Sony is developing a VR hardware kit for the PS4 (codename <a href="http://www.cnet.com/products/sony-project-morpheus/">Project Morpheus</a>), Facebook bought Oculus VR for $2 billion, Valve has created <a href="http://store.steampowered.com/universe/vr">SteamVR</a> software that works with HTC's <a href="http://www.htcvr.com/">Vive VR headset</a>, and Google has launched a 2.0 version of its Cardboard that supports up to 6 inch phones (it is also fully compatible with iOS devices because it has a piece of conductive foam that works as a tap over the screen.)</p> + +<p><strong>到了2015年,类似的VR设备吸引了大量的商业投资,进入到VR科技的研发中。Sony正在为PS4开发一套VR硬件套件(项目代号 <a href="http://www.cnet.com/products/sony-project-morpheus/">Project Morpheus</a>), Facebook花费20亿美元收购了 <a href="https://www.oculus.com/rift/">Oculus VR</a>, Valve开发了 <a href="http://store.steampowered.com/universe/vr">SteamVR</a> 软件系统,能够配合HTC的<a href="http://www.htcvr.com/">Vive VR headset</a>使用, 随后,谷歌发布了能够最多支持6英寸手机屏幕的Cardboard 2.0版本(它同时完全兼容iOS设备,因为在它屏幕的背后有一块导电海绵凸起作为触碰点。)</strong></p> + +<p> </p> + +<p>Samsung also launched a headset associated with Oculus called <a href="http://www.samsung.com/global/microsite/gearvr/gearvr_features.html">GearVR</a>, which works by connecting its Note 4 and S6 devices.
This however only works with native apps, so it is not very interesting for the specific area of WebVR.</p> + +<p><strong>三星公司同Oculus 合作,也推出了它的头戴设备<a href="http://www.samsung.com/global/microsite/gearvr/gearvr_features.html">GearVR</a>, 这款设备可以连接旗下的NOTE4以及6S等手机。然而这款设备仅仅能够运行几款纯粹的APP应用,因而相对于WEBVR的特效领域而言,显得不是那么的有意思。【K】</strong></p> + +<p> </p> + +<p> </p> + +<p>The technology itself is here, and the more expensive headsets will only get cheaper over time so more people can experience virtual reality on their own in the future.</p> + +<p><strong>科技已经发展到了今天,随着时间的推移,只会有更多的昂贵的头显设备变得越来越便宜,从而另更多的人在将来能够亲自体验虚拟现实的乐趣。【K】</strong></p> + +<p> </p> + +<p> </p> + +<h3 id="Input_devices_传入设备【K】">Input devices 传入设备<strong>【K】</strong></h3> + +<p>Handling input for virtual reality applications is an interesting topic — it's a totally new experience for which dedicated user interfaces have to be designed. There are various approaches right now from classic keyboard and mouse, to new ones like Leap Motion. It's a matter of trial and error to see what works in given situations and what inputs fit best for your type of game.</p> + +<p><strong>针对虚拟显示应用的手持传入设备,是一个非常有意思的话题:这是一种全新的体验,从而必须要设计出沉浸式的用户界面来适应它。目前为止,在这方面,从传统的键盘鼠标,一直到LEAP MOTION这样的新兴设备,有多种多样的途径来实现它。只有通过【试错法】最终才能窥见,哪种方式最有利于创造情景以及哪种输入设备最适合于你所玩的游戏的类型。【K】</strong></p> + +<p> </p> + +<p><img alt="" src="http://end3r.com/tmp/vr/oculus-touch.jpg" style="display: block; margin: 0px auto;"></p> + +<h2 id="VR_Hardware_setup_创建VR设备的硬件环境【K】" style="line-height: 30px; font-size: 2.14285714285714rem;">VR Hardware setup 创建VR设备的硬件环境<strong>【K】</strong></h2> + +<p> </p> + +<p>There are two main types of setup, mobile or computer-connected. 
Their minimum hardware set ups are as follows:</p> + +<p><strong>主要有两种创建VR环境的类型,手机或者是PC。以下是实现这两种环境所需要的最少的硬件支持:</strong></p> + +<p> </p> + +<ul> + <li>Mobile: A Head-mounted display (HMD) is created using a smartphone — which acts as the VR display — mounted in a VR mount such as Google Cardboard, which contains the required lenses to provide stereoscopic vision of what is projected on the mobile screen.</li> + <li><strong>手机:通过使用一部智能手机可以营造一部头显设备(HMD)--扮演VR显示器的角色--安装在一个像谷歌CARDBOARD那样的VR框架中,其中包含了必需要有的透镜,用以提供投射在手机屏幕上的立体视觉效果。【K】</strong><img alt="Mobile based VR setup" src="https://mdn.mozillademos.org/files/11085/mobileBasedVRSetup.png" style="width: 100%;"></li> + <li>Computer-connected: A VR setup is connected to your computer — this consists of a Head Mounted Display (HMD) containing a high resolution landscape-oriented screen onto which the visuals for both the left and right eye are displayed, which also includes a lens for each eye to promote separation of the left and right eye scene (stereoscopic vision.) 
The setup also includes a separate position sensor that works out the position/orientation/velocity/acceleration of your head and constantly passes that information to the computer.</li> + <li><strong>电脑:将一部VR设备连接到你的电脑上--它包含一部头显设备,其中有一块能分别为左眼和右眼显示图像的高分辨率横向屏幕,同时每只眼睛前各配有一块透镜,用以增强左右眼画面的分离(立体视觉)。这套设备还包含一个独立的位置传感器,它能够测算出你的头部的位置/方向/速度/加速度等信息,并实时地把这些信息传输给计算机。【K】</strong><img alt="Computer based VR Setup" src="https://mdn.mozillademos.org/files/11089/computerBasedVRSetup.png" style="width: 100%;"></li> +</ul> + +<div class="note"> +<p><strong>Note</strong>: Computer-connected systems sometimes don't include a position sensor, but they usually do.</p> + +<p><strong>注释:通过电脑连接的系统有时候不会包含定位传感装置,但是通常情况下都会有。【K】</strong></p> + +<p> </p> +</div> + +<p>Other hardware that complements the VR experience includes: </p> + +<p><strong>其余的帮助补充完整的VR体验的硬件包括:【K】</strong></p> + +<p> </p> + +<p> </p> + +<ul> + <li>A hand recognition sensor: A sensor that tracks the position and movement of your hand, allowing it to become an interesting controller, and an object in VR gameworlds. The most advanced to date is the <a href="https://www.leapmotion.com/">Leap Motion</a>, which works with the computer (connected to the Oculus Rift) and can also work connected to a mobile device (the latter is in an experimental phase.)</li> + <li><strong>手部识别传感器:一个可以追踪你的手部位置和运动的传感器,这使得它变成了一个非常有趣的控制器,以及一件存在于VR游戏世界中的物体。迄今为止,最先进的这类设备当属<a href="https://www.leapmotion.com/">Leap Motion</a>,它可以同电脑配合使用(同Oculus Rift设备连接),并且同时还可以和手机连接使用(后者暂时处于实验阶段。)</strong></li> + <li>A gamepad: We can configure an Xbox controller or similar to work as a keyboard in the browser — this offers further possibilities of interaction with a VR webpage. There are some gamepads that work with a mobile setup — like the <a href="http://www.mergevr.com/">MergeVR headset</a> — but these are connected via Bluetooth so don't work with WebVR.
</li> + <li><strong>(手机)游戏手柄:我们可以配置一套XBOX控制器或者类似的设备作为浏览器的键盘--这种方法提供了另一种同VR网页互动的形式。有一些游戏手柄更可以和手机协同使用--就像</strong><a href="http://www.mergevr.com/">MergeVR headset</a>--<strong>但是这些方法都是通过蓝牙连接设备的方法,并不能完全等同于和WEBVR结合。</strong></li> + <li>An eye tracking sensor (experimental): The FOVE project is the first headset that reads subtle eye movements.</li> + <li><strong>眼部追踪传感器(实验产品):FOVE项目是第一个研究用于追踪和读取人眼运动设备的项目。</strong></li> + <li>A facial expression tracker (experimental): Researchers at the University of Southern California and Facebook’s Oculus division have been testing new ways of tracking facial expressions and transferring them to a virtual character.</li> + <li><strong>面部表情追踪设备(实验产品):位于南加州大学和Facebook’s Oculus部门的研究人员,已经开始测试更多新的追踪人类面部表情并且能把他们转换成虚拟现实角色的方法。</strong></li> + <li>A more complex positional sensor system: The SteamVR controller, combined with the <a href="http://www.roadtovr.com/steamvr-beta-update-brings-lighthouse-support-and-vr-tracking-app/">Lighthouse</a> tracking system aims to achieve an experience in which you can move through a space of 10x10 square feet in a VR world.</li> + <li><strong>更加复杂的位置传感系统:SteamVR控制器,结合了<a href="http://www.roadtovr.com/steamvr-beta-update-brings-lighthouse-support-and-vr-tracking-app/">Lighthouse</a>追踪系统,旨在实现帮助我们能在一个10x10 square feet的VR空间范围内自由活动的目的。</strong></li> +</ul> + +<h2 id="Position_and_orientation_velocity_and_acceleration">Position and orientation, velocity and acceleration </h2> + +<h2 id="位置和方向,速度和加速度【K】">位置和方向,速度和加速度<strong>【K】</strong></h2> + +<p>As mentioned above, the position sensor detects information concerning the HMD and constantly outputs it, allowing you to continually update a scene according to head movement, rotation, etc. 
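</p>

<p>A per-frame update along those lines might be sketched as follows (the <code>applyPoseToCamera</code> helper and the plain <code>camera</code> object are hypothetical; the <code>state</code> argument mirrors the <code>VRPositionState</code> fields listed below):</p>

```javascript
// Hedged sketch (WebVR v0-era API, as described on this page): copy a
// VRPositionState-like pose onto a simple camera object each frame.
// `state` carries position {x,y,z} and orientation {x,y,z} (pitch/yaw/roll).
function applyPoseToCamera(state, camera) {
  if (state.position) {
    camera.position = { x: state.position.x, y: state.position.y, z: state.position.z };
  }
  if (state.orientation) {
    camera.rotation = {
      pitch: state.orientation.x,
      yaw: state.orientation.y,
      roll: state.orientation.z,
    };
  }
  return camera;
}

// In a real app you would poll the sensor once per rendered frame, e.g.:
// function tick() {
//   applyPoseToCamera(positionSensor.getState(), camera);
//   requestAnimationFrame(tick);
// }
```

<p>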
But what exactly is the information?</p> + +<p><strong>正如上文所提到的,位置传感器检测与HMD相关的信息并实时输出这些数据,让你可以根据头部的移动、旋转等动作持续更新场景。但是我们所说的这些信息包括些什么呢?【K】</strong></p> + +<p> </p> + +<p><img alt="Position and Orientation VR setup" src="https://mdn.mozillademos.org/files/11083/positionOrientationVR.png" style="width: 100%;"></p> + +<p>The output information falls into four categories:</p> + +<p><strong>通过HMD输出的信息包含以下四种类别:【K】</strong></p> + +<ol> + <li>Position — The position of the HMD along three axes in a 3D coordinate space. x is to the left and right, y is up and down, and z is towards and away from the position sensor. In WebVR:<br> + <br> + <strong>位置--HMD设备的位置基于一个3D坐标空间中的三个轴--X代表左右移动,Y代表上下移动,Z代表朝向和远离位置传感设备。在WebVR中: </strong> + + <ul> + <li>x position is represented by {{domxref("VRPositionState.position")}}.x.</li> + <li>y position is represented by {{domxref("VRPositionState.position")}}.y.</li> + <li>z position is represented by {{domxref("VRPositionState.position")}}.z.</li> + </ul> + </li> + <li>Orientation — The rotation of the HMD around three axes in a 3D coordinate space. Pitch is rotation around the x axis, yaw is rotation around the y axis, and roll is rotation around the z axis. In WebVR:<br> + <br> + <strong>方位--HMD设备绕3D坐标空间中三个轴的旋转。Pitch是绕X轴的旋转,Yaw是绕Y轴的旋转,Roll是绕Z轴的旋转。在WebVR中: </strong> + <ul> + <li>Pitch is represented by {{domxref("VRPositionState.orientation")}}.x.</li> + <li>Yaw is represented by {{domxref("VRPositionState.orientation")}}.y.</li> + <li>Roll is represented by {{domxref("VRPositionState.orientation")}}.z.</li> + </ul> + </li> + <li>Velocity — There are two types of velocity to consider in VR:<br> + <br> + <strong>速度--在VR中有两种需要被考虑的速度: </strong> + <ul> + <li>Linear — The speed along any one of the axes that the HMD is traveling.
This information can be accessed using {{domxref("VRPositionState.linearVelocity")}} (x, y, and z.)</li> + <br> + <li><strong>线速度--HMD沿三个轴中任意一轴移动的速度。这类信息可以通过 {{domxref("VRPositionState.linearVelocity")}} (x, y, and z.) 获取。</strong></li> + <li>Angular — The speed at which the HMD is rotating around any one of the axes. This information can be accessed using {{domxref("VRPositionState.angularVelocity")}} (x, y, and z.)</li> + <br> + <li><strong>角速度--HMD设备绕三个轴中任意一轴旋转的速度。这类信息可以通过{{domxref("VRPositionState.angularVelocity")}} (x, y, and z.) 获取。</strong></li> + </ul> + </li> + <li>Acceleration — There are two types of acceleration to consider in VR:<br> + <br> + <strong>加速度--在VR中有两种需要被考虑的加速度:</strong> + <ul> + <li>Linear — The acceleration of travel along any one of the axes that the HMD is traveling. This information can be accessed using {{domxref("VRPositionState.linearAcceleration")}} (x, y, and z.)</li> + <br> + <li><strong>线性加速度--HMD设备沿三个轴中任意一轴移动的加速度。这类信息可以通过{{domxref("VRPositionState.linearAcceleration")}} (x, y, and z.) 获取。</strong></li> + <li>Angular — The acceleration of rotation of the HMD around any one of the axes. This information can be accessed using {{domxref("VRPositionState.angularAcceleration")}} (x, y, and z.)</li> + <br> + <li><strong>角加速度--HMD设备绕三个轴中任意一轴旋转的加速度。这类信息可以通过 {{domxref("VRPositionState.angularAcceleration")}} (x, y, and z.) 获取。</strong></li> + </ul> + </li> +</ol> + +<h2 id="Field_of_view_视野【K】">Field of view 视野<strong>【K】</strong></h2> + +<p>The field of view (FOV) is the area that each of the user's eyes can reasonably be expected to see. It roughly takes the form of a pyramid shape, laid down on one side, with the apex inside the user's head, and the rest of the pyramid emanating from the user's eye.
Each eye has its own FOV, one slightly overlapping the other.</p> + +<p><strong>VR的视野(FOV)就是我们的双眼理论上预期能看到的区域。这个区域大致上呈现一个侧放的金字塔形状,金字塔的顶点位于使用者的头部内,金字塔的其余部分从使用者的眼睛发散出去。每只眼睛都有自己的FOV,同时其中的一个稍微同另一个重叠。</strong></p> + +<p><img alt="FOV related properties" src="https://mdn.mozillademos.org/files/11091/FOVrelatedProperties.png" style="width: 100%;"></p> + +<p>The FOV is defined by the following values:</p> + +<p>FOV是通过下列的值来定义的:</p> + +<ul> + <li>{{domxref("VRFieldOfViewReadOnly.upDegrees")}}: The number of degrees upwards that the field of view extends in.</li> + <li>{{domxref("VRFieldOfViewReadOnly.rightDegrees")}}: The number of degrees to the right that the field of view extends in.</li> + <li>{{domxref("VRFieldOfViewReadOnly.downDegrees")}}: The number of degrees downwards that the field of view extends in.</li> + <li>{{domxref("VRFieldOfViewReadOnly.leftDegrees")}}: The number of degrees to the left that the field of view extends in.</li> + <li>zNear: The distance from the middle of the user's head to the start of the visible FOV. </li> + <li>zFar: The distance from the middle of the user's head to the end of the visible FOV.</li> +</ul> + +<p>The default values for these properties will differ slightly by VR hardware, although they tend to be around 53° up and down, and 47° left and right, with zNear and zFar coming in at around 0.1m and 10000m respectively.</p> + +<p><strong>根据VR硬件的不同,这些属性的值会略有不同,不过它们基本上都在上下53°、左右47°左右,zNear和zFar则分别约为0.1m和10000m。</strong></p> + +<p>Different users will also require slightly different values for optimal viewing. It therefore makes sense to be able to calibrate these when a user starts using an app.
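</p>

<p>In code, such a calibration step might look like the following sketch (the <code>scaleFov</code> helper and the 0.9 factor are hypothetical; <code>getEyeParameters()</code>, <code>recommendedFieldOfView</code> and <code>setFieldOfView()</code> are the v0 interfaces this page covers):</p>

```javascript
// Hypothetical calibration sketch: scale the hardware's recommended
// per-eye field of view by a user-chosen factor before rendering.
function scaleFov(fov, factor) {
  return {
    upDegrees: fov.upDegrees * factor,
    rightDegrees: fov.rightDegrees * factor,
    downDegrees: fov.downDegrees * factor,
    leftDegrees: fov.leftDegrees * factor,
  };
}

// With a detected `hmd` (an HMDVRDevice) you would then write:
// hmd.setFieldOfView(
//   scaleFov(hmd.getEyeParameters("left").recommendedFieldOfView, 0.9),
//   scaleFov(hmd.getEyeParameters("right").recommendedFieldOfView, 0.9)
// );
```

<p>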
You can detect the current value of these using the methods of the {{domxref("VREyeParameters")}} interface, and set new values using the {{domxref("HMDVRDevice.setFieldOfView()")}} method.</p> + +<p><strong>不同的使用者将会为了达成尽量完美的视觉体验,而要求略有不同的特性数值。因此,我们有理由在使用者开始使用一个APP之前,对这些特性进行校准。你可以使用 {{domxref("VREyeParameters")}} 接口的方法来检测当前的值,并使用 {{domxref("HMDVRDevice.setFieldOfView()")}} 方法设置新的值。</strong></p> + +<div class="note"> +<p><strong>Note</strong>: The user can potentially see all the way around them, which is a brand new concept for apps and games. Try to give people a reason to look around and see what's behind them — make them reach out and find things that are not visible at the very beginning. Describe what's behind their backs.</p> + +<p><strong>注释:使用者潜在的可以看到所有他们身边的事物,这是一个APP和游戏中出现的全新的概念。那就是,试着传达给人们一个发现他们身后事物的理由--引导他们去发现那些在一开始并没有出现在他们视野中的事物。描述他们身后的世界。【K】</strong></p> + +<p>【可以说这一点是VR概念中同其他3D技术区别开,非常重要的特性,就是创建和引导使用者去发现他们视野中看不见的部分】</p> +</div> + +<p> </p> + +<p> </p> + +<h2 id="Concepts_for_VR_apps_VR_APP的概念【K】">Concepts for VR apps VR APP的概念<strong>【K】</strong></h2> + +<p>This section discusses concepts to be aware of when developing VR apps that you've probably not had to consider before when developing regular apps for mobile or desktop.</p> + +<p><strong>这个部分讨论的是,从前在我们开发普通的APP和手机或者PC应用时,不必考虑的,但是在我们开发VR APP的时候必须被意识到的概念。</strong></p> + +<p> </p> + +<p> </p> + +<p> </p> + +<h3 id="Stereoscopic_vision【K】">Stereoscopic vision<strong>【K】</strong></h3> + +<p><strong>立体视觉</strong></p> + +<p>Stereoscopic vision is the normal vision humans and (most) animals have — <span class="hvr">the</span> <span class="hvr">perception</span> of two <span class="hvr">slightly</span> <span class="hvr">differing</span> <span class="hvr">images</span> (one <span class="hvr">from</span> <span class="hvr">each</span> <span class="hvr">eye) as a single image.
This</span> <span class="hvr">results</span> in <span class="hvr">depth</span> <span class="hvr">perception, helping us to see the world in glorious 3D. To recreate this in VR apps, you need to render two very slightly different views side by side, which will be taken in by the left and right eyes when the user is using the HMD.</span></p> + +<p><strong>立体视觉是大多数的动物以及人类拥有的正常的视觉效果--也就是将来自每只眼睛的略有差别的图像,通过大脑的处理,感知为一张立体的图片。这种对深度的感知,帮助我们通过一种神奇的3D视角看世界。为了在APP中重现这种视觉效果,你需要并排渲染两幅略有不同的画面,当使用者使用HMD观看时,它们将分别被左右眼所接收。</strong></p> + +<p> </p> + +<p><img alt="How to create stereoscopic 3D images" src="https://mdn.mozillademos.org/files/11095/createStereoscopicImages.png" style="width: 100%;"></p> + +<h3 id="Head_tracking_头部追踪【K】" style="line-height: 24px; font-size: 1.71428571428571rem;">Head tracking <strong>头部追踪【K】</strong></h3> + +<p>The primary technology used to make you feel present in a 360º scene, thanks to the gyroscope, accelerometer, and magnetometer (compass) included in the HMD.</p> + +<p><strong>首要的使我们能够感到置身于360°场景中的科技,要感谢包括在HMD设备中的陀螺仪、加速计、磁力计(指南针)等装置。</strong><br> + It has primary relevance because it makes our eyes believe we are in front of a spherical screen, giving users realistic immersion inside the app canvas.</p> + +<p><strong>这种技术对于VR有非常重要的关联性,因为它让我们的眼睛相信我们置身于一个球形的屏幕前,它提供给使用者一种在APP画布中沉浸式的体验。</strong></p> + +<h3 id="Eye_strain【K】眼部拉伤">Eye strain<strong>【K】眼睛疲劳</strong></h3> + +<p>A term commonly used in VR because it is a major handicap of using an HMD — we are constantly fooling the eye with what we are showing in the app canvas, and this leads to the eyes doing a lot more work than they normally would, so using VR apps for any extended period of time can lead to eye strain.</p> + +<p><strong>这是一个在VR中经常使用的术语,因为这也是使用HMD设备的一个主要弊端--我们一直在用APP画布中展示的内容"欺骗"我们的眼睛,而这将导致眼睛超负荷地工作,因此长时间使用VR APP,都有可能导致眼睛疲劳。</strong></p> + +<p> </p> + +<p>To minimize this unwanted effect, we need to:</p> + +<p><strong>为了将这些可能的影响最小化,我们可以:</strong></p> + +<ul> + <li>Avoid
focusing on different depths (e.g. avoid using a lot of particles with different depths.)</li> + <li><strong>避免聚焦不同的深度(也就是要避免使用大量的具有不同深度特性的颗粒)</strong></li> + <li>Avoid eye convergence (e.g. if you have an object that comes towards the camera your eyes will follow and converge on it.)</li> + <li><strong>避免眼部辐辏(如果有一个物体朝着摄像机移动过来,你的眼睛将会跟随和聚焦在它之上)</strong></li> + <li>Use darker backgrounds with more subdued colors where possible; a bright screen will make the eyes more tired.</li> + <li><strong>尽量使用带有舒缓颜色的深色背景;假如屏幕太亮会增加眼睛的负担。</strong></li> + <li>Avoid rapid brightness changes.</li> + <li><strong>避免亮度的快速变化。</strong></li> + <li>Avoid presenting the user with large amounts of text to read. You should also be careful with the distance between the eyes/camera and the text to read. 0.5m is uncomfortable, whereas at more than 2m the stereo effect starts to break down, so somewhere in between is advisable.</li> + <li><strong>避免给使用者展现大量的文本内容。你应该非常注意眼睛/摄像机同文本之间的距离。0.5m太近了,然而假如超过2m的话,那么立体的效果将会崩溃,所以在0.5--2m之间的距离是合适的。</strong></li> + <li>Be careful with the distance between objects and the camera in general. Oculus recommends 0.75m as a minimum distance of focus.</li> + <li><strong>注意设定物体到摄像机之间的一般距离。Oculus建议的最小聚焦距离是0.75m。</strong></li> + <li>Use a pointer if the user needs to interact with an object in the scene — this will help them point to it correctly with less effort.</li> +</ul> + +<p>In general, the path of least visual effort will give the user a less tiring experience.</p> + +<ul> + <li><strong>如果使用者需要和场景中的物体进行互动,那么尽量使用一个指针--这将帮助他们精准和更加容易的指向那个物体。一般情况下,最少的视觉上的动作能提供给使用者最轻松的体验。</strong></li> +</ul> + +<h3 id="Motion_sickness_晕动病【K】">Motion sickness 晕动病<strong>【K】</strong></h3> + +<p>If developers do not take utmost care, VR apps can actually cause their users to feel sick.
This effect is produced when the stimuli their eyes are receiving are not what the body expects to receive.</p> + +<p><strong>如果开发者没有非常注意的话,VR APP将会很有可能引起使用者的眩晕不适。这种反应的产生是因为我们的眼睛受到了,我们的身体并不准备接收的刺激。</strong></p> + +<p>To avoid bringing on motion sickness in our users (or at least minimize the effects), we need to:</p> + +<p><strong>为了避免带来晕动效果给我们的使用者(或者说最大限度的减小这种反应),我们可以:</strong></p> + +<ul> + <li>Always maintain head tracking (this is the most important of all, especially if it occurs in the middle of the experience.)</li> + <li><strong>总是保证头部的追踪(这是最重要的,特别是在体验过程中头部移动的时候)</strong></li> + <li>Use constant velocity; avoid acceleration or deceleration camera movements (use linear acceleration, and avoid easing if you can.)</li> + <li><strong>使用稳定的速率;避免摄像机的加速和减速运动(使用线性加速度,同时尽量避免缓动(easing))</strong></li> + <li>Keep the framerate up (less than 30fps is uncomfortable.)</li> + <li><strong>尽量提高帧速率(低于30fps是不舒适的体验)</strong></li> + <li>Avoid sharp and/or unexpected camera rotations.</li> + <li><strong>避免剧烈的或者突然的摄像机的转动。</strong></li> + <li>Add fixed points of reference for fixed objects (otherwise the user will believe they are on the move.)</li> + <li><strong>为固定位置的物体添加固定的参照物(否则使用者会误认为他们在移动)</strong></li> + <li>Do not use Depth of Field or Motion Blur post processing because you do not know where the eyes will focus.</li> + <li><strong>不要使用景深和动态模糊的后期处理,因为你不知道使用者的视线会聚焦在哪里。</strong></li> + <li>Avoid brightness changes (use low frequency textures or fog effects to create smooth lighting transitions). Overall your eyes should not send signals to the brain that cause reflex actions in other parts of the body.</li> + <li><strong>避免光线突然的改变(使用低频率的质感或者迷雾效果来制造光线的平滑的转变效果)。总之,就是要使你的眼睛尽量不要传输那种会引起你身体其他部位强烈反应的信号。【这是为什么呢?VR不就是为了制造特效,求刺激么,为什么要有这么多的限制。求解释。】</strong></li> +</ul> + +<h3 id="Latency_延迟【K】">Latency 延迟<strong>【K】</strong></h3> + +<p>Latency is the time between the physical head movement and the visual display reaching the user's eyes from the screen of an HMD being updated.
This is one of the most critical factors in providing a realistic experience. Humans can detect very small delays — we need to keep the latency below 20 milliseconds if they are to be imperceptible (for example a 60Hz monitor has a 16 ms response.)</p> + +<p><strong>延迟指的是,从头部的物理移动,到HMD屏幕更新后的画面到达使用者眼睛,这两者之间的时间间隔。这是提供真实的虚拟现实体验的过程中最关键的因素之一。人体能感知到非常细微的延迟--如果我们要让人体感知不到这种延迟,我们需要将延迟保持在20毫秒以下(例如一个60Hz的显示器拥有16ms的响应速度。)</strong></p> + +<p>The Oculus Rift headset has a latency of 20 ms or less, but with mobile device-based setups it will depend heavily on the smartphone CPU power and other capabilities. </p> + +<p><strong>Oculus Rift headset的延迟在20ms甚至更低,但是在基于手机的VR设备上,这非常依赖于智能手机的CPU性能和其他能力。</strong></p> + +<h3 id="Framerate_(_Frames_per_second_FPS_)_帧率【K】">Framerate ( Frames per second / FPS ) 帧率<strong>【K】</strong></h3> + +<p>Based on the Wikipedia definition, framerate is the frequency at which an imaging device produces unique consecutive images, called frames. A rate of 60fps is an acceptable rate for a smooth user experience, but depending on the performance of the machine the app is running on, or the complexity of the content you want to show, it can drop drastically. Less than 30fps is generally considered juddery, and annoying to the user.</p> + +<p><strong>根据维基百科的定义,帧率指的是成像设备产生连续图像的频率,这些图像称为帧。60fps的帧率足以提供平滑的用户体验,但是根据运行APP的设备的性能,或者你想要展示的内容的复杂程度,帧率也可能大幅下降。低于30fps通常会被认为是卡顿的,并且会使使用者感到厌烦。</strong></p> + +<p>One of the most difficult tasks is to maintain a constant and high framerate value, so we must optimize our code to make it as efficient as possible. It is preferable to have a decent framerate that doesn't constantly or suddenly change; for this you need to have as few objects as possible moving into the scene and (in the case of WebGL) try to reduce draw calls.
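</p>

<p>As a sketch of the kind of monitoring this implies (the <code>averageFps</code> helper is hypothetical; the timestamps are the millisecond values <code>requestAnimationFrame</code> passes to its callback):</p>

```javascript
// Hypothetical helper: estimate average FPS over a window of
// requestAnimationFrame timestamps (milliseconds), so an app can react
// (e.g. reduce draw calls) when the rate drops towards sub-30fps territory.
function averageFps(timestamps) {
  if (timestamps.length < 2) return 0;
  const elapsedMs = timestamps[timestamps.length - 1] - timestamps[0];
  return ((timestamps.length - 1) / elapsedMs) * 1000;
}

// Typical use inside a render loop (browser only):
// const frameTimes = [];
// function tick(now) {
//   frameTimes.push(now);
//   if (frameTimes.length > 60) frameTimes.shift();
//   if (averageFps(frameTimes) < 30) { /* simplify the scene */ }
//   requestAnimationFrame(tick);
// }
```

<p>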
</p> + +<p><strong>最困难的任务之一就是保持一个稳定的和高帧率的值,所以我们必须优化代码从而使它发挥最大的功效。假如能够有一个合适的帧率并且不会规律的或者突然的改变,那将会是非常好的体验;因此你需要在一个场景中设置尽量少的物体(例如在WEBGL中)并且减小DRAW CALLS的值。</strong></p> + +<h3 id="Interpupillary_distance_(_IPD_)_瞳孔间距【K】">Interpupillary distance ( IPD ) 瞳孔间距<strong>【K】</strong></h3> + +<p>Based on the Wikipedia definition, IPD is the distance between the centers of the pupils of the two eyes. IPD is critical for the design of binocular viewing systems, where both eye pupils need to be positioned within the exit pupils of the viewing system.</p> + +<p><strong>根据维基百科的定义,IPD是指两眼瞳孔之间的距离。IPD对于双目视觉系统是非常重要的,因为双眼的瞳孔都必须对准这套视觉系统的瞳孔出口。</strong></p> + +<p>Interpupillary distace ( IPD ) is represented by {{domxref("VREyeParameters.eyeTranslation")}} in WebVR.</p> + +<p><strong>瞳孔间距(IPD)可用 {{domxref("VREyeParameters.eyeTranslation")}} 来表示.</strong></p> + +<p>This value is returned by the HMD and its value may be around 60 to 70 mm; in the case of some HMDs like Oculus Rift's, you can set your own IPD. Normally we don't change this value but you can play with it to change the scale of the entire scene. For example, if your IPD is set to 6000 mm, the user would view the scene like a giant looking at a Lilliputian world.</p> + +<p><strong>这个值是通过HMD来返回的并且它的值一般在60-70mm之间;在像是Oculus Rift这样的HMD设备中,你可以设置你自己的IPD。一般我们不会去改变这个值,但是你可以通过有意的改变它从而改变你所身处的整个场景。例如,如果你将IPD调整到6000mm,使用者将会看到一个好像巨人身处小人国中一样的世界。</strong></p> + +<h3 id="Degrees_of_Freedom_(_DoF_)_自由度【K】">Degrees of Freedom ( DoF ) 自由度<strong>【K】</strong></h3> + +<p>DoF refers to the movement of a rigid body inside space. There is no uniformity in creating acronyms for this term — we can find references to 3DoF in the context of sensors that detect only rotational head tracking, and 6DoF when an input allows us to control position and orientation simultaneously. 
We even sometimes find 9DoF references when the hardware contains three sensors like gyroscope, accelerometer and magnetometer, but the results of the 3 x 3DoF values will actually return a 6 degrees of freedom tracking.</p> + +<p><strong>DoF指的是空间中刚体运动的自由度。这个术语并没有统一的缩写形式--当传感器只能检测头部的旋转时,我们会看到3DoF的说法;当输入设备允许我们同时控制位置和方向时,则是6DoF。有时我们甚至会看到9DoF的说法,即硬件包含陀螺仪、加速计和磁力计三种传感器,但这3 x 3DoF的值的结果,实际上返回的仍是6个自由度的追踪。</strong></p> + +<p>DoF is directly related to the tracking of the user's head movement.</p> + +<p><strong>DoF直接和使用者的头部运动追踪相关联。</strong></p> + +<h3 id="Cone_of_focus_锥形焦点【K】">Cone of focus 锥形焦点<strong>【K】</strong></h3> + +<p>Although our field of view is much larger (approximately 180º), we need to be aware that only in a small portion of that field can you perceive symbols (the center 60º) or read text (the center 10º). If you do not have an eye tracking sensor we assume that the center of the screen is where the user is focusing their eyes.</p> + +<p><strong>虽然我们的视野非常广阔(约180°),但是我们必须要意识到,只有在其中一个很小的范围内,你才能识别符号(中心60°)或者阅读文本(中心10°)。如果你没有眼部追踪传感器,那么我们假定使用者的视线聚焦在屏幕的中心。</strong></p> + +<p>This limitation is important to consider when deciding where to place visuals on the app canvas — too far towards the edge of the cone of focus can lead to eye strain much more quickly.
There is a very interesting post about this (amongst other things) at MozVR.com — see <a href="http://mozvr.com/posts/quick-vr-prototypes/">Quick VR Mockups with Illustrator</a>.</p> + +<p>这样的限制对于在考虑如何在APP画布上设置视角的时候,是非常重要的--假如太过于远离锥形焦点的边缘,就可能更快更容易的导致眼部的拉伤。想要阅读MozVR.com 上的关于这个问题的有意思的文章(还包含其他内容)--请点击<a href="http://mozvr.com/posts/quick-vr-prototypes/">Quick VR Mockups with Illustrator</a>.</p> + +<h3 id="3D_Positional_Audio_3D定位音效【K】【如ECHO回声APP】">3D Positional Audio 3D定位音效<strong>【K】【如ECHO回声APP】</strong></h3> + +<p>3D positional audio refers to a group of effects that manipulate audio to simulate how it would sound in a three dimensional space.</p> + +<p><strong>3D定位音效,指的是一组控制声音去实现怎样模拟它在一个三维空间中播放的效果。</strong>,</p> + +<p>This directly related to the <a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a>, which allows us to place sounds on objects we have in the canvas or launch audio depending on the part of the scene the user is traveling towards or looking at.</p> + +<p><strong>这项技术直接关系到<a href="/en-US/docs/Web/API/Web_Audio_API">Web Audio API</a>,它可以让我们将一段声音附加到,一个我们在VANVAS中或者launch audio中的物体上,并且基于一个用户在其中可以移动或者观看的场景的一部分。</strong></p> diff --git a/files/zh-cn/web/api/webvr_api/index.html b/files/zh-cn/web/api/webvr_api/index.html new file mode 100644 index 0000000000..9c4577ccff --- /dev/null +++ b/files/zh-cn/web/api/webvr_api/index.html @@ -0,0 +1,211 @@ +--- +title: WebVR API +slug: Web/API/WebVR_API +tags: + - API + - VR + - WebVR + - 虚拟现实 +translation_of: Web/API/WebVR_API +--- +<div>{{DefaultAPISidebar("WebVR API")}}{{SeeCompatTable}}</div> + +<p class="summary"><strong>WebVR API 能为虚拟现实设备的渲染提供支持 — 例如像Oculus Rift或者</strong>HTC Vive <strong>这样的头戴式设备与 Web apps 的连接。它能让开发者将位置和动作信息转换成3D场景中的运动。基于这项技术能产生很多有趣的应用, 比如虚拟的产品展示,可交互的培训课程,以及超强沉浸感的第一人称游戏。</strong></p> + +<h2 id="概念及使用方法">概念及使用方法</h2> + +<p><strong>【K】</strong></p> + +<p><img alt='Sketch of a person in a chair with wearing goggles labelled "Head mounted display (HMD)" facing a monitor with a webcam 
labelled "Position sensor"' src="https://mdn.mozillademos.org/files/11035/hw-setup.png" style="display: block; height: 78px; margin: 0px auto; width: 60%;"></p> + +<p>Any VR devices attached to your computer will be returned by the {{domxref("Navigator.getVRDevices()")}} method. This returns an array of objects to represent the attached devices, which inherit from the general {{domxref("VRDevice")}} object — generally a head mounted display will have two devices — the head mounted display itself, represented by {{domxref("HMDVRDevice")}}, and a position sensor camera that keeps track of your head position, represented by {{domxref("PositionSensorVRDevice")}}.</p> + +<p><strong>连接到电脑的所有VR设备都将由 {{domxref("Navigator.getVRDevices()")}} 方法返回。 这个方法将返回一个包含了所有已连接设备的对象数组,每个设备对应一个对象, 该对象继承自 {{domxref("VRDevice")}} — 通常一个头显将包含两个设备 — 头显自身由 {{domxref("HMDVRDevice")}} 表示, 和一个跟踪头部位置的位置捕捉传感器,由 {{domxref("PositionSensorVRDevice")}} 表示。</strong></p> + +<p>The {{domxref("PositionSensorVRDevice")}} object contains the {{domxref("PositionSensorVRDevice.getState","getState()")}} method, which returns a {{domxref("VRPositionState")}} object — this represents the sensor’s state at a given timestamp, and includes properties containing useful data such as current velocity, acceleration, and orientation, useful for updating the rendering of a scene on each frame according to the movement of the VR head mounted display.</p> + +<p><strong>{{domxref("PositionSensorVRDevice")}} 对象有一个 {{domxref("PositionSensorVRDevice.getState","getState()")}} 方法, 该方法返回一个{{domxref("VRPositionState")}} 对象 — 这个对象代表位置传感器在指定时刻的状态,包含了一些十分有用的信息,例如速度、加速度以及运动方向,可用于根据头部运动刷新画面显示。</strong></p> + +<p>The {{domxref("HMDVRDevice.getEyeParameters()")}} method returns a {{domxref("VREyeParameters")}} object, which can be used to return field of view information — how much of the scene the head mounted display can see. 
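</p>

<p>The per-frame use of <code>getState()</code> described above can be sketched as a small helper that turns a {{domxref("VRPositionState")}} into view offsets for rendering. This is only an illustrative sketch: the helper name and the fallback behaviour are our own, not part of the WebVR API, though the scaling mirrors the positionsensorvrdevice demo discussed later on this site.</p>

```javascript
// Illustrative helper (not part of the WebVR API): turn a VRPositionState
// into per-frame view offsets. The scale factors follow the MDN
// positionsensorvrdevice demo.
function poseToViewOffsets(state, width, height) {
  // Fall back to a neutral pose when the sensor reports no data.
  var position = state.hasPosition ? state.position : { x: 0, y: 0, z: 0 };
  var orientation = state.hasOrientation ? state.orientation : { x: 0, y: 0, z: 0 };
  return {
    xPos: -position.x * width * 2,
    yPos: position.y * height * 2,
    zPos: Math.max(-position.z, 0.01), // keep the depth factor strictly positive
    zRotDeg: orientation.z * 180       // z orientation mapped to a rotation in degrees
  };
}

// In a real app this would run once per animation frame, e.g.:
// var offsets = poseToViewOffsets(gPositionSensor.getState(), canvas.width, canvas.height);
```

<p>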
The {{domxref("VREyeParameters.currentFieldOfView")}} returns a {{domxref("VRFieldOfView")}} object that contains 4 angles describing the current view from a center point. You can also change the field of view using {{domxref("HMDVRDevice.setFieldOfView()")}}.</p> + +<p><strong>{{domxref("HMDVRDevice.getEyeParameters()")}} 方法返回一个 {{domxref("VREyeParameters")}} 对象,可用于获取显示区域的信息 — 头显可以看到多少画面。{{domxref("VREyeParameters.currentFieldOfView")}} 返回一个 {{domxref("VRFieldOfView")}} 对象,该对象包含了4个角度信息来描述当前的显示区域。你可以用 {{domxref("HMDVRDevice.setFieldOfView()")}} 来改变当前的显示区域。</strong></p> + +<div class="note"> +<p><strong>Note</strong>: To find out more about using these interfaces in your own app, read <a href="/en-US/docs/Web/API/WebVR_API/Using_the_WebVR_API">Using the WebVR API</a>. To learn more about the basic concepts behind VR, read <a href="/en-US/docs/Web/API/WebVR_API/WebVR_concepts">WebVR concepts</a>.</p> + +<p><u><strong>注意:</strong>要了解更多关于如何在你的应用程序中使用这些接口,请阅读文章<a href="/en-US/docs/Web/API/WebVR_API/Using_the_WebVR_API">Using the WebVR API</a>。要学习更多关于VR技术背后的基础概念,请阅读文章 <a href="/en-US/docs/Web/API/WebVR_API/WebVR_concepts">WebVR concepts</a>。</u></p> +</div> + +<h3 id="Using_controllers_Combining_WebVR_with_the_Gamepad_API">Using controllers: Combining WebVR with the Gamepad API</h3> + +<h3 id="使用控制器:将WebVR与Gamepad_API相结合">使用控制器:将WebVR与Gamepad API相结合</h3> + +<p>Many WebVR hardware setups feature controllers that go along with the headset. 
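</p>

<p>Controllers like these are surfaced to web content as gamepads. As a minimal, illustrative sketch (the helper name is ours, and the assumption that VR controllers expose a non-null <code>pose</code> comes from the Gamepad Extensions), you could filter the gamepad list like this:</p>

```javascript
// Illustrative helper: pick out VR-capable controllers from a gamepad list.
// The Gamepad Extensions add a `pose` property to VR controllers; ordinary
// gamepads report it as null or leave it undefined.
function listVRControllers(gamepads) {
  var controllers = [];
  for (var i = 0; i < gamepads.length; i++) {
    var pad = gamepads[i];
    if (pad && pad.pose) {
      controllers.push(pad.id);
    }
  }
  return controllers;
}

// In the browser: listVRControllers(navigator.getGamepads());
```

<p>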
These can be used in WebVR apps via the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Gamepad_API">Gamepad API</a>, and specifically the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Gamepad_API#Experimental_Gamepad_extensions">Gamepad Extensions API</a> that adds API features for accessing <a href="https://developer.mozilla.org/en-US/docs/Web/API/GamepadPose">controller pose</a>, <a href="https://developer.mozilla.org/en-US/docs/Web/API/GamepadHapticActuator">haptic actuators</a>, and more.</p> + +<div class="note"><strong>Note</strong>: Our <a href="https://developer.mozilla.org/en-US/docs/Web/API/WebVR_API/Using_VR_controllers_with_WebVR">Using VR controllers with WebVR</a> article explains the basics of how to use VR controllers with WebVR apps. +</div> + +<h2 id="WebVR接口">WebVR接口</h2> + +<dl> + <dt>{{domxref("Navigator.getVRDevices")}}</dt> + <dd>Returns a promise that resolves to an array of objects representing the VR devices attached to the computer.<br> + <strong>返回一个 Promise 对象,其 resolve 的结果是一个数组,数组中的对象表示连接到电脑的各个VR设备。</strong></dd> + <dt>{{domxref("VRDevice")}}</dt> + <dd>A generic VR device, including information such as device IDs and descriptions. 
Inherited by <code>HMDVRDevice</code> and <code>PositionSensorVRDevice</code>.<br> + <strong>表示一个通用的VR设备,包含设备ID、描述等信息。HMDVRDevice 和 PositionSensorVRDevice 继承了 VRDevice。</strong></dd> + <dt>{{domxref("HMDVRDevice")}}</dt> + <dd>Represents a head mounted display, providing access to information about each eye, and the current field of view.<br> + <strong>头戴显示设备。提供设备双眼、当前FOV(field of view)信息。</strong></dd> + <dt>{{domxref("PositionSensorVRDevice")}}</dt> + <dd>Represents the position sensor for the VR hardware, allowing access to information such as position and orientation.<br> + <strong>VR设备的位置传感器。可获取位置、方向信息。</strong></dd> + <dt>{{domxref("VRPose")}}</dt> + <dd>Represents the position state at a given timestamp (which includes orientation, position, velocity, and acceleration.)<br> + <strong>表示给定时间戳下的位置状态(包括方向、位置、速度和加速度)。</strong></dd> + <dt>{{domxref("VREyeParameters")}}</dt> + <dd>Provides access to all the information required to correctly render a scene for each given eye, including field of view information.<br> + <strong>给双眼提供正确渲染场景所需的所有信息,包括FOV。</strong></dd> + <dt>{{domxref("VRFieldOfView")}}</dt> + <dd>Represents a field of view defined by 4 different degree values describing the view from a center point.<br> + <strong>表示以视野中心点为基准、由4个角度值(downDegrees, leftDegrees, rightDegrees, upDegrees)定义的FOV。</strong></dd> + <dt>{{domxref("VRFieldOfViewReadOnly")}}</dt> + <dd>Contains the raw definition for the degree value properties required to define a field of view. 
Inherited by <code>VRFieldOfView</code>.<br> + <strong>定义一个FOV所必需的角度属性。VRFieldOfView 继承了 VRFieldOfViewReadOnly。</strong></dd> +</dl> + +<h2 id="示例">示例</h2> + +<p>You can find a number of examples at these Github repos:</p> + +<p><strong>你可以在下面这些Github仓库中找到许多示例:</strong></p> + +<ul> + <li><a href="https://github.com/mdn/webvr-tests">mdn/webvr-tests</a>: Simple demos built to illustrate basic feature usage.</li> + <li><strong><a href="https://github.com/mdn/webvr-tests">mdn/webvr-tests</a>: 用于演示基本功能用法的简单demo。</strong></li> + <li><a href="https://github.com/MozVR/">MozVR team</a>: More advanced demos, the WebVR spec source, and more!</li> + <li><strong><a href="https://github.com/MozVR/">MozVR team</a>: 更复杂的demo、WebVR规范的源文件,以及更多内容!</strong></li> +</ul> + +<h2 id="规范">规范</h2> + +<table class="standard-table"> + <tbody> + <tr> + <th scope="col">Specification</th> + <th scope="col">Status</th> + <th scope="col">Comment</th> + </tr> + <tr> + <td>{{SpecName('WebVR')}}</td> + <td> + <p>{{Spec2('WebVR')}}</p> + + <p><strong>草稿阶段</strong></p> + </td> + <td> + <p>Initial definition</p> + + <p><strong>最初的定义</strong></p> + </td> + </tr> + </tbody> +</table> + +<h2 id="浏览器兼容性">浏览器兼容性</h2> + +<p>{{CompatibilityTable}}</p> + +<div id="compat-desktop"> +<table class="compat-table"> + <tbody> + <tr> + <th>Feature</th> + <th>Chrome</th> + <th>Firefox (Gecko)</th> + <th>Internet Explorer</th> + <th>Opera</th> + <th>Safari (WebKit)</th> + </tr> + <tr> + <td>Basic support</td> + <td>{{CompatVersionUnknown}}<sup>[1]</sup></td> + <td>{{CompatGeckoDesktop(39)}}<sup>[2]</sup></td> + <td>{{CompatNo}}</td> + <td>{{CompatNo}}</td> + <td>{{CompatNo}}</td> + </tr> + </tbody> +</table> +</div> + +<div id="compat-mobile"> +<table class="compat-table"> + <tbody> + <tr> + <th>Feature</th> + <th>Android</th> + <th>Firefox Mobile 
(Gecko)</th> + <th>Firefox OS (Gecko)</th> + <th>IE Phone</th> + <th>Opera Mobile</th> + <th>Safari Mobile</th> + <th>Chrome for Android</th> + </tr> + <tr> + <td>Basic support</td> + <td>{{CompatNo}}</td> + <td>{{CompatGeckoMobile(39)}}<sup>[2]</sup><br> + {{CompatGeckoMobile(44)}}<sup>[3]</sup></td> + <td>{{CompatNo}}</td> + <td>{{CompatNo}}</td> + <td>{{CompatNo}}</td> + <td>{{CompatNo}}</td> + </tr> + </tbody> +</table> +</div> + +<ul> + <li>[1] The support in Chrome is currently experimental. To find information on Chrome's WebVR implementation status including supporting builds, check out <a href="http://blog.tojicode.com/2014/07/bringing-vr-to-chrome.html">Bringing VR to Chrome</a> by Brandon Jones.</li> + <li><strong>[1] 在谷歌浏览器中的支持目前尚处于实验阶段。要查找Chrome的WebVR实现状况的相关信息(包括提供支持的构建版本),请查看Brandon Jones的<a href="http://blog.tojicode.com/2014/07/bringing-vr-to-chrome.html">将VR带进Chrome</a>。</strong></li> + <li>[2] The support for this feature is currently disabled by default in Firefox. To enable WebVR support in Firefox Nightly/Developer Edition, you can go to <code>about:config</code> and enable the <code>dom.vr*</code> prefs. A better option however is to install the <a href="http://www.mozvr.com/downloads/webvr-addon-0.1.0.xpi">WebVR Enabler Add-on</a>, which does this for you and sets up other necessary parts of the <a href="/en-US/docs/Web/API/WebVR_API/WebVR_environment_setup">environment</a>.</li> + <li>[2] <strong>这项功能目前在Firefox中默认是禁用的。要在Firefox Nightly/Developer Edition中启用WebVR支持,你可以打开 <code>about:config</code> 并启用 <code>dom.vr*</code> 相关首选项。
另一个更好的选择是安装<a href="http://www.mozvr.com/downloads/webvr-addon-0.1.0.xpi">WebVR Enabler Add-on</a>,它能代替你完成这些工作,并设置好<a href="/en-US/docs/Web/API/WebVR_API/WebVR_environment_setup">运行环境</a>中其他必需的部分。</strong></li> + <li>[3] The <code>dom.vr*</code> prefs are enabled by default at this point, in Nightly/Aurora editions.</li> + <li>[3] <strong><code>dom.vr*</code> 首选项目前在Nightly/Aurora版本中已默认启用。</strong></li> +</ul> + +<h2 id="相关文章">相关文章</h2> + +<ul> + <li><a href="/en-US/docs/Web/API/WebVR_API/WebVR_environment_setup">WebVR environment setup</a></li> + <li><strong>建立WebVR的运行环境。</strong></li> + <li><a href="/en-US/docs/Web/API/WebVR_API/WebVR_concepts">WebVR concepts</a></li> + <li><strong>WebVR 的相关概念。</strong></li> + <li><a href="/en-US/docs/Web/API/WebVR_API/Using_the_WebVR_API">Using the WebVR API</a></li> + <li><strong>怎样使用WebVR API。</strong></li> + <li><a href="http://mozvr.com/">MozVr.com</a> — demos, downloads, and other resources from the Mozilla VR team.</li> + <li><strong><a href="http://mozvr.com/">MozVr.com</a> — demo、下载,以及其他来自Mozilla VR团队的资源。</strong></li> + <li><a href="http://dsmu.me/ConsoleGameOnWeb/">Console Game on Web</a> — a collection of interesting game concept demos, some of which include WebVR.</li> + <li><strong><a href="http://dsmu.me/ConsoleGameOnWeb/">Console Game on Web</a> — 一系列有趣的概念游戏demo,其中一些包含了WebVR。</strong></li> + <li><a href="https://github.com/MozVR/vr-web-examples/tree/master/threejs-vr-boilerplate">threejs-vr-boilerplate</a> — a very useful starter template for writing WebVR apps into.</li> + <li><strong><a href="https://github.com/MozVR/vr-web-examples/tree/master/threejs-vr-boilerplate">threejs-vr-boilerplate</a> — 一个编写WebVR应用时非常有用的起步模板。</strong></li> + <li><a href="https://developer.oculus.com/">Oculus Rift homepage</a></li> + <li><strong><a href="https://developer.oculus.com/">Oculus Rift</a> 主页</strong></li> 
+</ul> diff --git a/files/zh-cn/web/api/webvr_api/using_the_webvr_api/index.html b/files/zh-cn/web/api/webvr_api/using_the_webvr_api/index.html new file mode 100644 index 0000000000..0333120187 --- /dev/null +++ b/files/zh-cn/web/api/webvr_api/using_the_webvr_api/index.html @@ -0,0 +1,329 @@ +--- +title: Using the WebVR API +slug: Web/API/WebVR_API/Using_the_WebVR_API +tags: + - WebVR + - 入门指引 +translation_of: Web/API/WebVR_API/Using_the_WebVR_API +--- +<p>{{draft("The WebVR API documentation is currently being updated to cover the v1.0 spec, therefore some of this information will be out of date. Contact ~~chrisdavidmills if you have any questions about this work.")}}</p> + +<p>{{draft("最新的WebVR API文档已经更新到1.0版,因此本页的一些信息很可能已经过时。如果你有任何跟本页内容有关的需要,请联系 ~~chrisdavidmills ")}}</p> + +<p class="summary">The <a href="/en-US/docs/Web/API/WebVR_API">WebVR API</a> is a fantastic addition to the web developer's toolkit, allowing access to virtual reality hardware such as the <a href="https://developer.oculus.com/">Oculus Rift</a>, and converting outputted movement and orientation data into view rendering updates on a web app. But how do you get started in developing VR apps for the Web? This article will guide you through the basics.<br> + <a href="/en-US/docs/Web/API/WebVR_API">WebVR API</a> 对于web开发者来说,是一个令人心动的功能包,允许你连接到类似于<a href="https://developer.oculus.com/">Oculus Rift </a>这样的虚拟现实硬件,并且能够在你的web app中,将从硬件获取到的位置移动数据和姿态角数据,实时更新你的渲染显示输出。具体要如何在Web上开始开发你的VR app呢?这篇文章将会提供基础的引导信息。</p> + +<div class="note"> +<p><strong>Note</strong>: Currently WebVR is at an experimental stage (you can find the <a href="http://mozvr.github.io/webvr-spec/webvr.html">latest spec here</a>); it currently works best in Firefox Nightly/Developer Edition, with some aspects of it also working in Google Chrome. 
Read <a class="external external-icon" href="http://blog.tojicode.com/2014/07/bringing-vr-to-chrome.html">Bringing VR to Chrome</a> by Brandon Jones for more details on that.<br> + 注意:当前WebVR还处于实验阶段(你可以从<a href="http://mozvr.github.io/webvr-spec/webvr.html">这里</a>找到最新规范);它已经在Firefox Nightly/Developer Edition版本上工作得很好了,部分功能也可以在Google Chrome上正常工作。更多细节请阅读Brandon Jones的<a class="external external-icon" href="http://blog.tojicode.com/2014/07/bringing-vr-to-chrome.html">Bringing VR to Chrome</a>。</p> +</div> + +<h2 id="起步">起步</h2> + +<p>To get started, you need to have your VR hardware set up as recommended in the owner's manual, and your computer set up as indicated in <a href="/en-US/docs/Web/API/WebVR_API/WebVR_environment_setup">WebVR environment setup</a>. A dedicated GPU is recommended for smoother performance.<br> + 首先,按照设备手册的建议配置好你的VR硬件,并按照 <a href="/en-US/docs/Web/API/WebVR_API/WebVR_environment_setup">WebVR环境的安装</a> 的指引配置好电脑。推荐使用独立GPU以获得更流畅的表现。</p> + +<p>You also need to have <a href="https://nightly.mozilla.org/">Firefox Nightly</a> (or <a href="https://www.mozilla.org/en-US/firefox/developer/">Developer Edition</a>) installed, along with the <a href="http://www.mozvr.com/downloads/webvr-addon-0.1.0.xpi">WebVR Enabler Add-on</a>.<br> + 此外,请安装好 <a href="https://nightly.mozilla.org/">Firefox Nightly</a>(或 <a href="https://www.mozilla.org/en-US/firefox/developer/">Developer Edition</a>),以及 <a href="http://www.mozvr.com/downloads/webvr-addon-0.1.0.xpi">WebVR Enabler Add-on</a>。</p> + +<p>Once your environment is set up, try visiting one of our <a href="http://mozvr.com/projects/">MozVR projects</a> and clicking on the "Enter VR" button to test it out.<br> + 设置好环境后,请尝试访问我们可直接在线运行的项目 <a href="http://mozvr.com/projects/">MozVR projects</a>,点击"Enter VR"按钮,就可以开始测试了。</p> + +<div class="note"> +<p><strong>Note</strong>: For more in depth information, be sure to check out <a href="/en-US/docs/Web/API/WebVR_API/WebVR_environment_setup">WebVR 
environment setup</a>.<br> + 注意:要了解更深层次的信息,请查阅 <a href="/en-US/docs/Web/API/WebVR_API/WebVR_environment_setup">WebVR environment setup</a>。</p> +</div> + +<div class="note"> +<p><strong>Note</strong>: There are also cheaper options available such as using a mobile device for the head mounted display (in this case you won't have a position sensor available, so you might have to fake the orientation data using the <a href="/en-US/Apps/Build/gather_and_modify_data/responding_to_device_orientation_changes">deviceorientation API</a> instead perhaps.)<br> + 注意:你也可以使用更便宜的方式,比如用一台手机设备来充当头戴显示器(这种情况下没有位置传感器可用,因此你可能需要改用 <a href="/en-US/Apps/Build/gather_and_modify_data/responding_to_device_orientation_changes">deviceorientation API</a> 来模拟姿态数据。)</p> +</div> + +<h2 id="Introducing_a_simple_demo_简单示例介绍">Introducing a simple demo<br> + 简单示例介绍</h2> + +<p>There are a number of WebVR demos available at the <a href="https://github.com/MozVR/">MozVR team repo</a>, and the <a href="https://github.com/mdn/webvr-tests">MDN webvr-tests repo</a>, but the main one we will be focusing on in this article is our <a href="https://github.com/mdn/webvr-tests/tree/gh-pages/positionsensorvrdevice">positionsensorvrdevice</a> demo (<a href="http://mdn.github.io/webvr-tests/positionsensorvrdevice/">view it live</a>):<br> + <a href="https://github.com/MozVR/">MozVR team repo</a> 和 <a href="https://github.com/mdn/webvr-tests">MDN webvr-tests repo</a> 中提供了许多WebVR示例,本文将着重关注我们的 <a href="https://github.com/mdn/webvr-tests/tree/gh-pages/positionsensorvrdevice">positionsensorvrdevice</a> 示例(在线演示:<a href="http://mdn.github.io/webvr-tests/positionsensorvrdevice/">view it live</a>):</p> + +<p><img alt="" src="https://mdn.mozillademos.org/files/10797/vrpositionsensor-demo.png" style="display: block; height: 396px; margin: 0px auto; width: 800px;"></p> + +<p>This is a simple 2.5D demo showing a Firefox logo seen on a left and right eye view, rendered on <a 
href="/en-US/docs/Web/HTML/Element/canvas">HTML5 Canvas</a>. When you view the demo with a VR HMD and click the canvas, the demo will go fullscreen, and you'll be able to approach the Firefox logo. It will move realistically as you move your head towards and away from it, up and down and side to side, and rotate your head in any direction.<br> + 这是一个简单的2.5D示例,在左右眼两个区域,以<a href="/en-US/docs/Web/HTML/Element/canvas">HTML5 Canvas</a>的方式,同时渲染了Firefox的LOGO。当你使用VR头显观看这个示例并点击画面时,示例会切换到全屏模式,你可以尝试靠近Firefox图标。它会真实地同步你的头部运动:无论是前后、上下、左右移动,还是向任意方向转动头部。</p> + +<p>The demo was deliberately kept simple so that you can easily see what is going on with the WebVR code. The API is simple enough that you can easily apply WebVR-controlled movement to any app you like, from simple DOM-based interfaces to complex WebGL scenes.<br> + 这个示例特意做得足够简单,以便于你理解WebVR代码的工作过程。这些API足够简单,你可以轻松地把WebVR控制的运动应用到任何你喜欢的应用中,无论是简单的基于DOM的界面,还是复杂的WebGL场景。</p> + +<h2 id="How_does_the_app_work_app是怎样工作的呢?">How does the app work?<br> + app是怎样工作的呢?</h2> + +<p>In this section we'll go through the different parts of the code that make the app work, so you can see what's required at a basic level.<br> + 本节将逐一讲解这个应用各部分代码的作用,让你了解实现它最基本的要求。</p> + +<h3 id="Accessing_the_VR_devices_连接并访问VR设备">Accessing the VR devices<br> + 连接并访问VR设备</h3> + +<p>The first thing to do is get a programmatic reference to the VR hardware connected to your computer. 
This is done using {{domxref("Navigator.getVRDevices")}}, which returns a promise that resolves to an array of all the VR devices connected to your computer.<br> + 首先,你需要获取连接到当前电脑的VR硬件的程序对象引用。这通过调用 {{domxref("Navigator.getVRDevices")}} 实现:它返回一个 Promise,其 resolve 的结果是一个包含所有已连接VR设备的数组。</p> + +<p>There are two kinds of object that may be returned:<br> + 可能返回两种类型的对象:</p> + +<ul> + <li>{{domxref("PositionSensorVRDevice")}}: A position sensor camera.</li> + <li>{{domxref("PositionSensorVRDevice")}}:带空间位置定位的传感器摄像头。</li> + <li>{{domxref("HMDVRDevice")}}: A VR head mounted display.</li> + <li>{{domxref("HMDVRDevice")}}:VR头显设备。</li> +</ul> + +<p>You can see some very simple code showing the kind of basic device information that can be returned in our <a href="https://github.com/mdn/webvr-tests/tree/gh-pages/vrdevice">vrdevice demo</a>.<br> + 在 <a href="https://github.com/mdn/webvr-tests/tree/gh-pages/vrdevice">vrdevice demo</a> 中,使用简单代码即可获取设备基础信息。</p> + +<p>However, what you really want is something that grabs a pair of devices (perhaps many pairs in multiplayer VR games of the future). 
The following code taken from the WebVR spec (and also used in the <a href="https://github.com/mdn/webvr-tests/tree/gh-pages/positionsensorvrdevice">positionsensorvrdevice</a> demo) does the trick pretty well:<br> + 当然,若你需要同时获取多套VR设备的信息(可能是将来多个玩家的多套设备),WebVR说明书中包含的以下代码会更适合你来参考(在 <a href="https://github.com/mdn/webvr-tests/tree/gh-pages/positionsensorvrdevice">positionsensorvrdevice</a> 示例代码中也有使用这段代码逻辑)。</p> + +<pre class="brush: js">var gHMD, gPositionSensor; + +navigator.getVRDevices().then(function(devices) { + for (var i = 0; i < devices.length; ++i) { + if (devices[i] instanceof HMDVRDevice) { + gHMD = devices[i]; + break; + } + } + + if (gHMD) { + for (var i = 0; i < devices.length; ++i) { + if (devices[i] instanceof PositionSensorVRDevice && devices[i].hardwareUnitId === gHMD.hardwareUnitId) { + gPositionSensor = devices[i]; + break; + } + } + } +});</pre> + +<p>Here we grab the first instance we find of an {{domxref("HMDVRDevice")}} and store it in the <code>gHMD</code> variable. Next, we grab the first instance we find of a {{domxref("PositionSensorVRDevice")}} and store it in the <code>gPositionSensor</code> variable, but only if its {{domxref("VRDevice.hardWareUnitId")}} property matches that of the <code>gHMD</code> object. 
Separate devices that are part of the same overall hardware unit will share a hardware unit ID — this is how you check that you've got references to two matching devices.<br> + 这段代码,先获取第一个找到 {{domxref("HMDVRDevice")}} 类型的对象引用,赋值给gHMD变量。若获取到了,然后,再找到一个 {{domxref("PositionSensorVRDevice")}} 类型的对象引用,并且它与gHMD的 {{domxref("VRDevice.hardWareUnitId")}} 属性值相同时,即找到配对的对象,赋值给<code>gPositionSensor变量。同一套设备单元中的多个分离的设备会共享他们的</code> hardware unit ID,可以依此来检测两个设备对象是否是同一套。</p> + +<h3 id="Initialising_the_app_初始化APP">Initialising the app 初始化APP</h3> + +<p>The scene is rendered on a {{htmlelement("canvas")}} element, created and placed as follows:<br> + 场景最终是通过 {{htmlelement("canvas")}} 标记元素来显示。canvas画布可通过以下JS代码来创建。</p> + +<pre class="brush: js">var myCanvas = document.createElement('canvas'); +var ctx = myCanvas.getContext('2d'); +var body = document.querySelector('body'); +body.appendChild(myCanvas);</pre> + +<p>Next, we create a new <a href="/en-US/docs/Web/API/HTMLImageElement">image</a> and use a {{event("load")}} event to check that the image is loaded before running <code>draw()</code>, the <a href="/en-US/docs/Games/Anatomy#Building_a_main_loop_in_JavaScript">main loop</a> for our app:<br> + 然后,我们在主渲染循环控制中,先创建一个图片对象,并且在draw()方法运行前,监听 {{event("load")}} 事件回调,以检查图片是否已经被正常装载成功。</p> + +<pre class="brush: js">var image = new Image(); +image.src = 'firefox.png'; +image.onload = draw;</pre> + +<h3 id="The_main_loop_渲染显示主循环">The main loop 渲染显示主循环</h3> + +<p><code>draw()</code> looks like this:<br> + draw()方法的实现代码参考如下:</p> + +<pre class="brush: js">function draw() { + WIDTH = window.innerWidth; + HEIGHT = window.innerHeight; + lCtrOffset = WIDTH*0.25; + rCtrOffset = WIDTH*0.25; + + myCanvas.width = WIDTH; + myCanvas.height = HEIGHT; + + setView(); + drawImages(); + drawCrosshairs(); + + requestAnimationFrame(draw); +}</pre> + +<p>The <a href="/en-US/docs/Web/API/Window">window</a> <code>WIDTH</code> and <code>HEIGHT</code> is resampled on each frame then used to set:<br> + 
在控制每一帧显示输出时,都会重新获取 <a href="/en-US/docs/Web/API/Window">window</a> 窗口当前的宽、高,并依此来调整输出显示:</p> + +<ul> + <li>A left and right offset value used to keep the image drawn relative to the center of the left and right eye view. Because we are drawing two half-width copies of the scene, the center of each copy is actually a quarter of the total canvas width in from the edge, in each case.</li> + <li>The <a href="/en-US/docs/Web/API/HTMLCanvasElement/width">width</a> and <a href="/en-US/docs/Web/API/HTMLCanvasElement/height">height</a> of the canvas.</li> +</ul> + +<p>This is done so that the scene will resize correctly whenever the browser window is resized by the user.</p> + +<p>Next in the loop we run three functions:</p> + +<ul> + <li><code>setView()</code> retrieves position and orientation information from the VR hardware, ready for use in drawing the updated image positions in the scene.</li> + <li><code>drawImages()</code> actually draws the updated image positions in the scene.</li> + <li><code>drawCrosshairs()</code> draws the crosshairs that remain in the center of the scene at all times.</li> +</ul> + +<p>You'll learn more about these later on.</p> + +<p>Finally for the loop, we run <a href="/en-US/docs/Web/API/window/requestAnimationFrame">requestAnimationFrame(draw)</a> so that the <code>draw()</code> loop is continually run.</p> + +<h3 id="Retrieving_position_and_orientation_information_提取位置与姿态">Retrieving position and orientation information 提取位置与姿态</h3> + +<p>Now let's study the <code>setView()</code> function in detail. We'll step through each part of the code, explaining what it all does:</p> + +<pre class="brush: js">function setView() { + var posState = gPositionSensor.getState();</pre> + +<p>First we call {{domxref("PositionSensorVRDevice.getState")}} on the reference to our position sensor. 
This method returns everything you might want to know about the current state of the HMD — accessible through a {{domxref("VRPositionState")}} object — including its position, orientation, and more advanced information such as linear and angular velocity/acceleration.</p> + +<pre class="brush: js"> if(posState.hasPosition) { + posPara.textContent = 'Position: x' + roundToTwo(posState.position.x) + " y" + + roundToTwo(posState.position.y) + " z" + + roundToTwo(posState.position.z); + xPos = -posState.position.x * WIDTH * 2; + yPos = posState.position.y * HEIGHT * 2; + if(-posState.position.z > 0.01) { + zPos = -posState.position.z; + } else { + zPos = 0.01; + } + }</pre> + +<p>In the next part, we first check to make sure valid position information is available for the HMD using {{domxref("VRPositionState.hasPosition")}}, so that we don't return an error and stop the app working (if the HMD is switched off, or not pointing at the position sensor.)</p> + +<p>Then we output the current position information to a paragraph in the app UI for information purposes (rounded to two decimal places using a custom function to make it more readable.)</p> + +<p>Last up, we set our <code>xPos</code>, <code>yPos</code>, and <code>zPos</code> variables relative to the position information stored in {{domxref("VRPositionState.position")}}. You'll notice that we have used an <code>if ... 
else</code> block to make sure the <code>zPos</code> value stays at 0.01 or above — the app was throwing an error if it went below 0.</p> + +<pre class="brush: js"> if(posState.hasOrientation) { + orientPara.textContent = 'Orientation: x' + roundToTwo(posState.orientation.x) + " y" + + roundToTwo(posState.orientation.y) + " z" + + roundToTwo(posState.orientation.z); + xOrient = posState.orientation.x * WIDTH; + yOrient = -posState.orientation.y * HEIGHT * 2; + zOrient = posState.orientation.z * 180; + + }</pre> + +<p>Next, we use a similar process to update the scene according to the HMD's orientation — check that valid orientation data is available using {{domxref("VRPositionState.hasOrientation")}}, display orientation data in the UI for informational purposes, and then set the <code>xOrient</code>, <code>yOrient</code>, and <code>zOrient</code> values relative to the orientation information stored in {{domxref("VRPositionState.orientation")}}.</p> + +<pre> timePara.textContent = 'Timestamp: ' + Math.floor(posState.timeStamp); +}</pre> + +<p>Finally, we output the current timeStamp stored in {{domxref("VRPositionState.timeStamp")}} to the UI for information. This value can be useful for determining if position data has been updated, and what order updates have occurred in.</p> + +<h3 id="Updating_the_scene_更新场景输出画画">Updating the scene 更新场景输出画面</h3> + +<p>The <code>xPos</code>, <code>yPos</code>, <code>zPos</code>, <code>xOrient</code>, <code>yOrient</code> and <code>zOrient</code> values retrieved by <code>setView()</code> are all used as modifiers for updating the scene rendering done by <code>drawImages()</code>. 
We'll look at how below, although we'll only walk through the code for drawing the left eye view (the other is very similar, but shifted over to the right):</p> + +<pre class="brush: js">function drawImages() { + ctx.fillStyle = 'white'; + ctx.fillRect(0,0,WIDTH,HEIGHT);</pre> + +<p>First we draw a white {{domxref("CanvasRenderingContext2D.fillRect","fillRect()")}} to clear the scene before the next frame is drawn.</p> + +<pre class="brush: js"> ctx.save(); + ctx.beginPath(); + ctx.translate(WIDTH/4,HEIGHT/2); + ctx.rect(-(WIDTH/4),-(HEIGHT/2),WIDTH/2,HEIGHT);</pre> + +<p>Next, we save the context state with {{domxref("CanvasRenderingContext2D.save","save()")}} so we can treat the left eye view as a separate image and not have its code affect the right eye view.</p> + +<p>We then {{domxref("CanvasRenderingContext2D.beginPath","begin a path")}}, {{domxref("CanvasRenderingContext2D.translate","translate the canvas")}} so that the origin is now in the center of the left eye view (a quarter of the width across and half the height down) — which is needed so that the rotation works correctly (rotation happens around the origin of the canvas) — and draw a {{domxref("CanvasRenderingContext2D.rect","rect()")}} around the whole left eye view.</p> + +<p>Note that the <code>rect()</code> has to be drawn starting from minus a quarter of the width and minus half the height, because of the translation applied earlier.</p> + +<pre> ctx.clip();</pre> + +<p>Now we {{domxref("CanvasRenderingContext2D.clip","clip()")}} the canvas. Because we called this just after the <code>rect()</code> was drawn, anything else that we do on the canvas will be constrained inside the <code>rect()</code>, with any overflow hidden until a <code>restore()</code> call is made (see later on.) 
This ensures that the whole left eye view will remain separate from the right eye view.</p> + +<pre class="brush: js"> ctx.rotate(zOrient * Math.PI / 180);</pre> + +<p>A rotation is now applied to the image, related to the current value of <code>zOrient</code>, so that the scene rotates as you rotate your head.</p> + +<pre class="brush: js"> ctx.drawImage(image,-(WIDTH/4)+lCtrOffset-((image.width)/(2*(1/zPos)))+xPos-yOrient,-((image.height)/(2*(1/zPos)))+yPos+xOrient,image.width*zPos,image.height*zPos);</pre> + +<p>Now for the actual image drawing! This rather nasty line of code needs breaking down, so here it is, argument by argument:</p> + +<ul> + <li><code>image</code>: The image to be drawn</li> + <li><code>-(WIDTH/4)+lCtrOffset-((image.width)/(2*(1/zPos)))+xPos-yOrient</code>: The horizontal coordinate of the image origin. This first needs to be reduced by <code>WIDTH/4</code> to compensate for the translation done earlier. Then, we add the left center offset to put it back in the middle, then we subtract the image width divided by 2 times the reciprocal of <code>zPos</code> — so as the image is drawn smaller/larger the amount subtracted will get smaller/larger, again keeping the image in the center. Finally, we add the <code>xPos</code> and subtract the <code>yOrient</code> values to update the image position as the HMD is moved or rotated horizontally (rotation around the y axis moves the image horizontally.)</li> + <li><code>-((image.height)/(2*(1/zPos)))+yPos+xOrient</code>: The vertical coordinate of the image origin. In this case the "subtract HEIGHT/2" and "add right center offset" exactly cancel each other out, so I've just removed them from the equation. 
That just leaves subtracting the image height divided by 2 times the reciprocal of zPos to keep the image in the center, as above, and modifying the drawn position by <code>yPos</code> and <code>xOrient</code>.</li> + <li><code>image.width*zPos</code>: The width to draw the image; this is modified by <code>zPos</code> so it will be drawn bigger as you get closer to it.</li> + <li><code>image.height*zPos</code>: The height to draw the image; this is modified by <code>zPos</code> so it will be drawn bigger as you get closer to it.</li> +</ul> + +<pre class="brush: js"> ctx.strokeStyle = "black"; + ctx.stroke();</pre> + +<p>Next we draw a black {{domxref("CanvasRenderingContext2D.stroke","stroke()")}} around the left eye view, just to aid the view separation a bit more.</p> + +<pre class="brush: js"> ctx.restore();</pre> + +<p>Finally, we {{domxref("CanvasRenderingContext2D.restore","restore()")}} the canvas so we can then go on to draw the right eye view.</p> + +<pre class="brush: js"> ... +}</pre> + +<div class="note"> +<p><strong>Note</strong>: We are kind of cheating here, using a 2D canvas to approximate a 3D scene. But it keeps things simple for learning purposes. You can use the position and orientation data discussed above to modify the view rendering on any app written with web technologies. 
For example, our <a href="https://github.com/mdn/webvr-tests/tree/gh-pages/3Dpositionorientation">3Dpositionorientation</a> demo uses very similar code to that shown above to control the view of a WebGL scene created using <a href="http://threejs.org/">Three.js</a>.</p> +</div> + +<div class="note"> +<p><strong>Note</strong>: The <a href="https://github.com/mdn/webvr-tests/blob/gh-pages/positionsensorvrdevice/index.html#L106-L119">code for <code>drawCrosshairs()</code></a> is very simple in comparison to <code>drawImages()</code>, so we'll leave you to study that for yourself if you're interested!</p> +</div> + +<h3 id="Fullscreen_全屏控制">Fullscreen 全屏控制</h3> + +<p>The VR effect is much more effective if you set your app running in <a href="/en-US/docs/Web/Guide/API/DOM/Using_full_screen_mode">fullscreen mode</a> — this generally means setting your {{htmlelement("canvas")}} element to fullscreen when a specific event occurs — such as double-clicking the display or pressing a specific button.</p> + +<p>In this case I have just kept things simple, running a <code>fullScreen()</code> function when the canvas is clicked:</p> + +<pre class="brush: js">myCanvas.addEventListener('click',fullScreen,false);</pre> + +<p>The <code>fullScreen()</code> function checks which version of the <code>requestFullscreen()</code> method is present on the canvas (this will differ by browser) and then calls the appropriate one, for maximum compatibility:</p> + +<pre class="brush: js">function fullScreen() { + if (myCanvas.requestFullscreen) { + myCanvas.requestFullscreen(); + } else if (myCanvas.msRequestFullscreen) { + myCanvas.msRequestFullscreen(); + } else if (myCanvas.mozRequestFullScreen) { + myCanvas.mozRequestFullScreen(); + } else if (myCanvas.webkitRequestFullscreen) { + myCanvas.webkitRequestFullscreen(); + } +}</pre> + +<h2 id="Calibrating_field_of_view_and_device_orientation_对FOV与设备姿态进行归零显示">Calibrating field of view and device orientation 对FOV与设备姿态进行校准</h2> + +<p>I've not 
given much thought to this in my current demo, but in commercial apps you'll need to do some user calibration to make sure your app is working for the user and their particular VR hardware. The WebVR API has a number of features to aid in this.</p> + +<p>First of all, you can use the {{domxref("PositionSensorVRDevice.resetSensor")}} method to reset the HMD position and orientation. Effectively what it does is to set the current position/orientation of the headset to 0. So you need to ensure it is held in a sensible 0 position before running the function. In our positionsensorvrdevice demo, you can play with it using our "Reset Sensor" button:</p> + +<pre class="brush: html"><button>Reset Sensor</button></pre> + +<pre class="brush: js">document.querySelector('button').onclick = function() { + gPositionSensor.resetSensor(); +}</pre> + +<p>The other thing to calibrate is the field of view (FOV) of your headset — how much of the scene can be seen in the up, right, down and left directions. This information can be retrieved for each eye separately using the {{domxref("HMDVRDevice.getEyeParameters")}} method, which returns parameters for each eye separately (you need to call it twice, once with a parameter of <code>left</code>, and once with a parameter of <code>right</code>.) This returns a {{domxref("VREyeParameters")}} object for each eye.</p> + +<p>As an example, you could retrieve the current field of view for an eye using {{domxref("VREyeParameters.currentFieldOfView")}}. 
This returns a {{domxref("VRFieldOfView")}} object containing four properties:</p> + +<ul> + <li>{{domxref("VRFieldOfViewReadOnly.upDegrees","upDegrees")}}: The number of degrees upwards that the field of view extends in.</li> + <li>{{domxref("VRFieldOfViewReadOnly.rightDegrees","rightDegrees")}}: The number of degrees to the right that the field of view extends in.</li> + <li>{{domxref("VRFieldOfViewReadOnly.downDegrees","downDegrees")}}: The number of degrees downwards that the field of view extends in.</li> + <li>{{domxref("VRFieldOfViewReadOnly.leftDegrees","leftDegrees")}}: The number of degrees to the left that the field of view extends in.</li> +</ul> + +<p>The field of view created is a pyramid shape, the apex of which is emanating from the eye.</p> + +<p>You could check whether the user has a suitable field of view for your app, and if not, set a new field of view using {{domxref("HMDVRDevice.setFieldOfView")}} method. A simple function to handle this might look like so:</p> + +<pre class="brush: js">function setCustomFOV(up,right,down,left) { + var testFOV = new VRFieldOfView(up,right,down,left); + + gHMD.setFieldOfView(testFOV,testFOV,0.01,10000.0); +}</pre> + +<p>This function accepts the four degree values as arguments, then creates a new {{domxref("VRFieldOfView")}} object using the VRFieldOfView() constructor. This is then fed into <code>setFieldOfView()</code> as the first two arguments (the FOV for the left eye and the right eye). 
The third and fourth arguments are the <code>zNear</code> and <code>zFar</code> values — how close and far away from the eye an object can be in the direction of the FOV and still be inside it.</p> diff --git a/files/zh-cn/web/api/webvr_api/using_vr_controllers_with_webvr/index.html b/files/zh-cn/web/api/webvr_api/using_vr_controllers_with_webvr/index.html new file mode 100644 index 0000000000..57a983875b --- /dev/null +++ b/files/zh-cn/web/api/webvr_api/using_vr_controllers_with_webvr/index.html @@ -0,0 +1,259 @@ +--- +title: Using VR controllers with WebVR +slug: Web/API/WebVR_API/Using_VR_controllers_with_WebVR +translation_of: Web/API/WebVR_API/Using_VR_controllers_with_WebVR +--- +<div>{{APIRef("WebVR API")}}</div> + +<p class="summary">Many WebVR hardware setups feature controllers that go along with the headset. These can be used in WebVR apps via the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Gamepad_API">Gamepad API</a>, and specifically the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Gamepad_API#Experimental_Gamepad_extensions">Gamepad Extensions API</a> that adds API features for accessing <a href="https://developer.mozilla.org/en-US/docs/Web/API/GamepadPose">controller pose</a>, <a href="https://developer.mozilla.org/en-US/docs/Web/API/GamepadHapticActuator">haptic actuators</a>, and more. This article explains the basics.</p> + +<p class="summary">许多WebVR硬件设备都配有与头戴设备配套的控制器。这些控制器可以通过Gamepad API在WebVR应用中使用,特别是其中的Gamepad Extensions API,它增加了访问控制器姿态、触觉驱动器等信息的API功能。本篇文章介绍相关的基础知识。</p> + +<h2 id="The_WebVR_API">The WebVR API</h2> + +<p>The <a href="/en-US/docs/Web/API/WebVR_API">WebVR API</a> is a nascent, but very interesting new feature of the web platform that allows developers to create web-based virtual reality experiences. It does this by providing access to VR headsets connected to your computer as {{domxref("VRDisplay")}} objects, which can be manipulated to start and stop presentation to the display, queried for movement data (e.g. 
orientation and position) that can be used to update the display on each frame of the animation loop, and more.</p> + +<p>Before you read this article, you should really be familiar with the basics of the WebVR API already — go and read <a href="/en-US/docs/Web/API/WebVR_API/Using_the_WebVR_API">Using the WebVR API</a> first, if you haven't already done so, which also details browser support and required hardware setup.</p> + +<h2 id="The_Gamepad_API">The Gamepad API</h2> + +<p>The <a href="/en-US/docs/Web/API/Gamepad_API">Gamepad API</a> is a fairly well-supported API that allows developers to access gamepads/controllers connected to your computer and use them to control web apps. The basic Gamepad API provides access to connected controllers as {{domxref("Gamepad")}} objects, which can then be queried to find out what buttons are being pressed and thumbsticks (axes) are being moved at any point, etc.</p> + +<p>You can find more about basic Gamepad API usage in <a href="/en-US/docs/Web/API/Gamepad_API/Using_the_Gamepad_API">Using the Gamepad API</a>, and <a href="/en-US/docs/Games/Techniques/Controls_Gamepad_API">Implementing controls using the Gamepad API</a>.</p> + +<p>However, in this article we will mainly be concentrating on some of the new features provided by the {{specname("GamepadExtensions")}} API, which allows access to advanced controller information such as position and orientation data, control over haptic actuators (e.g. vibration hardware), and more. This API is very new, and currently is only supported and enabled by default in Firefox 55+ Beta/Nightly channels.</p> + +<h2 id="Types_of_controller">Types of controller</h2> + +<p>There are two types of controller you'll encounter with VR hardware:</p> + +<ul> + <li>6DoF (six-degrees-of-freedom) controllers provide access to both positional and orientation data — they can manipulate a VR scene and the objects it contains with movement but also rotation. 
A good example is the HTC VIVE controllers.</li> + <li>3DoF (three-degrees-of-freedom) controllers provide orientation but not positional data. A good example is the Google Daydream controller, which can be rotated to point to different things in 3D space like a laser pointer, but can't be moved inside a 3D scene.</li> +</ul> + +<h2 id="Basic_controller_access">Basic controller access</h2> + +<p>Now onto some code. Let's look first at the basics of how we get access to VR controllers with the Gamepad API. There are a few strange nuances to bear in mind here, so it is worth taking a look.</p> + +<p>We've written up a simple example to demonstrate — see our <a href="https://github.com/mdn/webvr-tests/blob/master/vr-controller-basic-info/index.html">vr-controller-basic-info</a> source code (<a href="https://mdn.github.io/webvr-tests/vr-controller-basic-info/">see it running live here also</a>). This demo simply outputs information on the VR displays and gamepads connected to your computer.</p> + +<h3 id="Getting_the_display_information">Getting the display information</h3> + +<p>The first notable code is as follows:</p> + +<pre class="brush: js">var initialRun = true; + +if(navigator.getVRDisplays && navigator.getGamepads) { + info.textContent = 'WebVR API and Gamepad API supported.' + reportDisplays(); +} else { + info.textContent = 'WebVR API and/or Gamepad API not supported by this browser.' +}</pre> + +<p>Here we first use a tracking variable, <code>initialRun</code>, to note that this is the first time we have loaded the page. You'll find out more about this later on. Next, we detect to see if the WebVR and Gamepad APIs are supported by checking for the existence of the {{domxref("Navigator.getVRDisplays()")}} and {{domxref("Navigator.getGamepads()")}} methods. If so, we run our <code>reportDisplays()</code> custom function to start the process off. 
This function looks like so:</p> + +<pre class="brush: js">function reportDisplays() { + navigator.getVRDisplays().then(function(displays) { + console.log(displays.length + ' displays'); + for(var i = 0; i < displays.length; i++) { + var cap = displays[i].capabilities; + // cap is a VRDisplayCapabilities object + var listItem = document.createElement('li'); + listItem.innerHTML = '<strong>Display ' + (i+1) + '</strong>' + + '<br>VR Display ID: ' + displays[i].displayId + + '<br>VR Display Name: ' + displays[i].displayName + + '<br>Display can present content: ' + cap.canPresent + + '<br>Display is separate from the computer\'s main display: ' + cap.hasExternalDisplay + + '<br>Display can return position info: ' + cap.hasPosition + + '<br>Display can return orientation info: ' + cap.hasOrientation + + '<br>Display max layers: ' + cap.maxLayers; + list.appendChild(listItem); + } + + setTimeout(reportGamepads, 1000); + // For VR, controllers will only be active after their corresponding headset is active + }); +}</pre> + +<p>This function first uses the promise-based {{domxref("Navigator.getVRDisplays()")}} method, which resolves with an array containing {{domxref("VRDisplay")}} objects representing the connected displays. Next, it prints out each display's {{domxref("VRDisplay.displayId")}} and {{domxref("VRDisplay.displayName")}} values, and a number of useful values contained in the display's associated {{domxref("VRCapabilities")}} object. The most useful of these are {{domxref("VRCapabilities.hasOrientation","hasOrientation")}} and {{domxref("VRCapabilities.hasPosition","hasPosition")}}, which allow you to detect whether the device can return orientation and position data and set up your app accordingly.</p> + +<p>The last line contained in this function is a {{domxref("WindowOrWorkerGlobalScope.setTimeout()")}} call, which runs the <code>reportGamepads()</code> function after a 1 second delay. Why do we need to do this? 
First of all, VR controllers will only be ready after their associated VR headset is active, so we need to invoke this after <code>getVRDisplays()</code> has been called and returned the display information. Second, the Gamepad API is much older than the WebVR API, and not promise-based. As you'll see later, the <code>getGamepads()</code> method is synchronous, and just returns the <code>Gamepad</code> objects immediately — it doesn't wait for the controller to be ready to report information. Unless you wait for a little while, returned information may not be accurate (at least, this is what we found in our tests).</p> + +<h3 id="Getting_the_Gamepad_information">Getting the Gamepad information</h3> + +<p>The <code>reportGamepads()</code> function looks like this:</p> + +<pre class="brush: js">function reportGamepads() { + var gamepads = navigator.getGamepads(); + console.log(gamepads.length + ' controllers'); + for(var i = 0; i < gamepads.length; ++i) { + var gp = gamepads[i]; + var listItem = document.createElement('li'); + listItem.classList = 'gamepad'; + listItem.innerHTML = '<strong>Gamepad ' + gp.index + '</strong> (' + gp.id + ')' + + '<br>Associated with VR Display ID: ' + gp.displayId + + '<br>Gamepad associated with which hand: ' + gp.hand + + '<br>Available haptic actuators: ' + gp.hapticActuators.length + + '<br>Gamepad can return position info: ' + gp.pose.hasPosition + + '<br>Gamepad can return orientation info: ' + gp.pose.hasOrientation; + list.appendChild(listItem); + } + initialRun = false; +}</pre> + +<p>This works in a similar manner to <code>reportDisplays()</code> — we get an array of {{domxref("Gamepad")}} objects using the non-promise-based <code>getGamepads()</code> method, then cycle through each one and print out information on each:</p> + +<ul> + <li>The {{domxref("Gamepad.displayId")}} property is the same as the <code>displayId</code> of the headset the controller is associated with, and therefore useful for tying controller and headset 
information together.</li> + <li>The {{domxref("Gamepad.index")}} property is a unique numerical index that identifies each connected controller.</li> + <li>{{domxref("Gamepad.hand")}} returns which hand the controller is expected to be held in.</li> + <li>{{domxref("Gamepad.hapticActuators")}} returns an array of the haptic actuators available in the controller. Here we are returning its length so we can see how many each has available.</li> + <li>Finally, we return {{domxref("GamepadPose.hasPosition")}} and {{domxref("GamepadPose.hasOrientation")}} to show whether the controller can return position and orientation data. This works just the same as for the displays, except that in the case of gamepads these values are available on the pose object, not the capabilities object.</li> +</ul> + +<p>Note that we also gave each list item containing controller information a class name of <code>gamepad</code>. We'll explain what this is for later.</p> + +<p>The last thing to do here is set the <code>initialRun</code> variable to <code>false</code>, as the initial run is now over.</p> + +<h3 id="Gamepad_events">Gamepad events</h3> + +<p>To finish off this section, we'll look at the gamepad-associated events. There are two we need to concern ourselves with — {{event("gamepadconnected")}} and {{event("gamepaddisconnected")}} — and it is fairly obvious what they do.</p> + +<p>At the end of our example we first include the <code>removeGamepads()</code> function:</p> + +<pre class="brush: js">function removeGamepads() { + var gpLi = document.querySelectorAll('.gamepad'); + for(var i = 0; i < gpLi.length; i++) { + list.removeChild(gpLi[i]); + } + + reportGamepads(); +}</pre> + +<p>This function simply grabs references to all list items with a class name of <code>gamepad</code>, and removes them from the DOM. 
Then it re-runs <code>reportGamepads()</code> to populate the list with the updated list of connected controllers.</p> + +<p><code>removeGamepads()</code> will be run each time a gamepad is connected or disconnected, via the following event handlers:</p> + +<pre class="brush: js">window.addEventListener('gamepadconnected', function(e) { + info.textContent = 'Gamepad ' + e.gamepad.index + ' connected.'; + if(!initialRun) { + setTimeout(removeGamepads, 1000); + } +}); + +window.addEventListener('gamepaddisconnected', function(e) { + info.textContent = 'Gamepad ' + e.gamepad.index + ' disconnected.'; + setTimeout(removeGamepads, 1000); +});</pre> + +<p>We have <code>setTimeout()</code> calls in place here — like we did with the initialization code at the top of the script — to make sure that the gamepads are ready to report their information when <code>reportGamepads()</code> is called in each case.</p> + +<p>But there's one more thing to note — you'll see that inside the <code>gamepadconnected</code> handler, the timeout is only run if <code>initialRun</code> is <code>false</code>. This is because if your gamepads are connected when the document first loads, <code>gamepadconnected</code> is fired once for each gamepad, therefore <code>removeGamepads()</code>/<code>reportGamepads()</code> will be run several times. This could lead to inaccurate results, therefore we only want to run <code>removeGamepads()</code> inside the <code>gamepadconnected</code> handler after the initial run, not during it. This is what <code>initialRun</code> is for.</p> + +<h2 id="Introducing_a_real_demo">Introducing a real demo</h2> + +<p>Now let's look at the Gamepad API being used inside a real WebVR demo. 
You can find this demo at <a href="https://github.com/mdn/webvr-tests/tree/master/raw-webgl-controller-example">raw-webgl-controller-example</a> (<a href="https://mdn.github.io/webvr-tests/raw-webgl-controller-example/">see it live here also</a>).</p> + +<p>In exactly the same way as our <a href="https://github.com/mdn/webvr-tests/tree/master/raw-webgl-example">raw-webgl-example</a> (see <a href="/en-US/docs/Web/API/WebVR_API/Using_the_WebVR_API">Using the WebVR API</a> for details), this renders a spinning 3D cube, which you can choose to present in a VR display. The only difference is that, while in VR presenting mode, this demo allows you to move the cube by moving a VR controller (the original demo moves the cube as you move your VR headset).</p> + +<p>We'll explore the code differences in this version below — see <a href="https://github.com/mdn/webvr-tests/blob/master/raw-webgl-controller-example/webgl-demo.js">webgl-demo.js</a>.</p> + +<h3 id="Accessing_the_gamepad_data">Accessing the gamepad data</h3> + +<p>Inside the <code>drawVRScene()</code> function, you'll find this bit of code:</p> + +<pre class="brush: js">var gamepads = navigator.getGamepads(); +var gp = gamepads[0]; + +if(gp) { + var gpPose = gp.pose; + var curPos = gpPose.position; + var curOrient = gpPose.orientation; + if(poseStatsDisplayed) { + displayPoseStats(gpPose); + } +}</pre> + +<p>Here we get the connected gamepads with {{domxref("Navigator.getGamepads")}}, then store the first gamepad detected in the <code>gp</code> variable. As we only need one gamepad for this demo, we'll just ignore the others.</p> + +<p>The next thing we do is to get the {{domxref("GamepadPose")}} object for the controller stored in <code>gpPose</code> (by querying {{domxref("Gamepad.pose")}}), and also store the current gamepad position and orientation for this frame in variables so they are easy to access later. We also display the pose stats for this frame in the DOM using the <code>displayPoseStats()</code> function. 
All of this is only done if <code>gp</code> actually has a value (if a gamepad is connected), which prevents the demo from throwing errors if no gamepad is connected.</p> + +<p>Slightly later in the code, you can find this block:</p> + +<pre class="brush: js">if(gp && gpPose.hasPosition) { + mvTranslate([ + 0.0 + (curPos[0] * 15) - (curOrient[1] * 15), + 0.0 + (curPos[1] * 15) + (curOrient[0] * 15), + -15.0 + (curPos[2] * 25) + ]); +} else if(gp) { + mvTranslate([ + 0.0 + (curOrient[1] * 15), + 0.0 + (curOrient[0] * 15), + -15.0 + ]); +} else { + mvTranslate([ + 0.0, + 0.0, + -15.0 + ]); +}</pre> + +<p>Here we alter the position of the cube on the screen according to the {{domxref("GamepadPose.position","position")}} and {{domxref("GamepadPose.orientation","orientation")}} data received from the connected controller. These values (stored in <code>curPos</code> and <code>curOrient</code>) are {{domxref("Float32Array")}}s containing the X, Y, and Z values (here we mainly use [0], which is X, and [1], which is Y; for the position we also use [2], which is Z).</p> + +<p>If the <code>gp</code> variable has a <code>Gamepad</code> object inside it and it can return position values (<code>gpPose.hasPosition</code>), indicating a 6DoF controller, we modify the cube position using position and orientation values. If only the former is true, indicating a 3DoF controller, we modify the cube position using the orientation values only. 
If there is no gamepad connected, we don't modify the cube position at all.</p> + +<h3 id="Displaying_the_gamepad_pose_data">Displaying the gamepad pose data</h3> + +<p>In the <code>displayPoseStats()</code> function, we grab all of the data we want to display out of the {{domxref("GamepadPose")}} object passed into it, then print them into the UI panel that exists in the demo for displaying such data:</p> + +<pre class="brush: js">function displayPoseStats(pose) { + var pos = pose.position; + var orient = pose.orientation; + var linVel = pose.linearVelocity; + var linAcc = pose.linearAcceleration; + var angVel = pose.angularVelocity; + var angAcc = pose.angularAcceleration; + + if(pose.hasPosition) { + posStats.textContent = 'Position: x ' + pos[0].toFixed(3) + ', y ' + pos[1].toFixed(3) + ', z ' + pos[2].toFixed(3); + } else { + posStats.textContent = 'Position not reported'; + } + + if(pose.hasOrientation) { + orientStats.textContent = 'Orientation: x ' + orient[0].toFixed(3) + ', y ' + orient[1].toFixed(3) + ', z ' + orient[2].toFixed(3); + } else { + orientStats.textContent = 'Orientation not reported'; + } + + linVelStats.textContent = 'Linear velocity: x ' + linVel[0].toFixed(3) + ', y ' + linVel[1].toFixed(3) + ', z ' + linVel[2].toFixed(3); + angVelStats.textContent = 'Angular velocity: x ' + angVel[0].toFixed(3) + ', y ' + angVel[1].toFixed(3) + ', z ' + angVel[2].toFixed(3); + + if(linAcc) { + linAccStats.textContent = 'Linear acceleration: x ' + linAcc[0].toFixed(3) + ', y ' + linAcc[1].toFixed(3) + ', z ' + linAcc[2].toFixed(3); + } else { + linAccStats.textContent = 'Linear acceleration not reported'; + } + + if(angAcc) { + angAccStats.textContent = 'Angular acceleration: x ' + angAcc[0].toFixed(3) + ', y ' + angAcc[1].toFixed(3) + ', z ' + angAcc[2].toFixed(3); + } else { + angAccStats.textContent = 'Angular acceleration not reported'; + } +}</pre> + +<h2 id="Summary">Summary</h2> + +<p>This article has given you a very basic idea of how to use the 
Gamepad Extensions to use VR controllers inside WebVR apps. In a real app you'd probably have a much more complex control system in effect, with controls assigned to the buttons on the VR controllers, and the display being affected by both the display pose and the controller poses simultaneously. Here however, we just wanted to isolate the pure Gamepad Extensions parts of that.</p> + +<h2 id="See_also">See also</h2> + +<ul> + <li><a href="/en-US/docs/Web/API/WebVR_API">WebVR API</a></li> + <li><a href="/en-US/docs/Web/API/Gamepad_API">Gamepad API</a></li> + <li><a href="/en-US/docs/Web/API/WebVR_API/Using_the_WebVR_API">Using the WebVR API</a></li> + <li><a href="/en-US/docs/Games/Techniques/Controls_Gamepad_API">Implementing controls using the Gamepad API</a></li> +</ul> diff --git a/files/zh-cn/web/api/webvr_api/webvr_environment_setup/index.html b/files/zh-cn/web/api/webvr_api/webvr_environment_setup/index.html new file mode 100644 index 0000000000..d3ede8add1 --- /dev/null +++ b/files/zh-cn/web/api/webvr_api/webvr_environment_setup/index.html @@ -0,0 +1,110 @@ +--- +title: WebVR环境配置 +slug: Web/API/WebVR_API/WebVR_environment_setup +translation_of: Archive/WebVR/WebVR_environment_setup +--- +<p>{{draft("WebVR API文档目前正在更新中以涵盖1.0版本规范, 因此这些信息中的一部分将会过时。如果你对此有任何疑问请联系 ~~chrisdavidmills。")}}</p> + +<p class="summary">在这篇文章中, 我们将带你了解配置你的WebVR测试环境所需要做的工作 — 包括硬件和软件配置以及一些常见错误的解决方法。</p> + +<h2 id="硬件">硬件</h2> + +<p>首先来看WebVR的硬件需求。</p> + +<h3 id="头戴式显示器与位置追踪器">头戴式显示器与位置追踪器</h3> + +<p>目前有几款产品可作为VR头戴式显示器,其中最好的是<a href="https://developer.oculus.com/">Oculus Rift</a>,它具有坚固的头戴式显示器和安装在三脚架或监视器上的位置追踪相机。Oculus Rift DK2 目前的零售价是350美元(约2410人民币),但是随着技术的进步和越来越多的头戴设备的出现,预计Oculus Rift的价格会下降。</p> + +<p><img alt="" src="https://mdn.mozillademos.org/files/11037/hw-setup.png" style="display: block; height: 78px; margin: 0px auto; width: 70%;"></p> + +<p>对于那些没有能力购买整套VR设备的人,也有其他便宜的产品可以选择。一个VR头戴式显示器其实就是一个高分辨率的屏幕,这个屏幕前面有一组眼镜。显示屏显示的是两个并排的有些偏移和渐晕的屏幕影像的副本,人的两眼各看其中一个,这样就给用户带来立体感,这些对于创建VR景象都是至关重要的。</p> 
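<p>上面描述的立体渲染思路可以用一小段示意性代码来表达:把同一场景绘制两份,分别放在画布的左右两半,并加上少量水平偏移以产生立体感。下面的 <code>stereoViewports</code> 函数和 <code>eyeOffset</code> 参数都是为说明原理而假设的名字,并非任何规范 API:</p>

```javascript
// 示意性代码(假设的辅助函数,并非 WebVR 规范 API):
// 计算左右眼各自的视口区域,以及产生立体感所需的水平偏移。
// canvasWidth/canvasHeight 为整块屏幕的尺寸,eyeOffset 为假设的偏移量参数。
function stereoViewports(canvasWidth, canvasHeight, eyeOffset) {
  return {
    // 左眼画面占据画布左半边,场景向左少量偏移
    left:  { x: 0,               y: 0, width: canvasWidth / 2, height: canvasHeight, shift: -eyeOffset },
    // 右眼画面占据画布右半边,场景向右少量偏移
    right: { x: canvasWidth / 2, y: 0, width: canvasWidth / 2, height: canvasHeight, shift: eyeOffset }
  };
}

// 以一块 1920×1080 的手机屏幕为例:
var vp = stereoViewports(1920, 1080, 8);
console.log(vp.left.width);  // 960,每只眼睛各占画布的一半
console.log(vp.right.x);     // 960,右眼视口从画布中线开始
```

<p>在真实应用中,这两个视口会分别用于绘制左右眼画面,原理与前文 concepts 示例中的 <code>drawImages()</code> 代码类似。</p>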
+ +<p><img alt="" src="https://mdn.mozillademos.org/files/10695/stereoscopic.png" style="display: block; height: 540px; margin: 0px auto; width: 960px;"></p> + +<p>你可以使用支持的浏览器去感受几近相同的体验,比如使用 Android <a href="https://nightly.mozilla.org/">Nightly</a>版的火狐浏览器—— 如同谷歌的<a href="https://www.google.com/get/cardboard/">Google Cardboard</a>的想法那样,你可以用任何能固定在头部的装置将手机固定在双眼前面,然后通过手机运行VR软件。这里主要的缺点就是没有位置追踪器,手机处理器没有桌面PC的处理器强大,所以体验上相对就没有那么真实(你转动头部的时候你可能得不到和PC上相同的体验,它有可能比较卡),但是,作为一个便宜的入门的设备,它还是不错的。</p> + +<h3 id="一台计算机:用于渲染VR场景">一台计算机:用于渲染VR场景</h3> + +<p>VR硬件需要提供高精度,低延迟的数据,来提供令人满意的用户体验 — 显示刷新需要达到60fps,否则,用户会觉得卡顿、抖动。为了保证达到这点,单位时间有大量的数据需要处理。因此,运行VR应用的计算机的配置要求比较高。最理想的是,你有台带独显的高配的笔记本或台式电脑,如新版MacBook Pro 15“/ 17”或Mac Pro,或Windows游戏本。如果你的电脑运行比较慢,你的体验会比较糟糕。</p> + +<h2 id="软件">软件</h2> + +<p>要运行WebVR软件,你需要如下描述的软件设置。</p> + +<h3 id="Oculus_Rift_SDK">Oculus Rift SDK</h3> + +<p>如果你使用Oculus Rift,你需要在你的系统上下载并安装 <a href="https://developer.oculus.com/downloads/">Oculus Rift SDK</a> 。它包含了VR软件所需的运行环境和<em>OculusWorldDemo</em>示例软件,它对排除故障很有用。</p> + +<h3 id="Firefox_Nightly与WebVR_Enabler_Add-on_(或其他可替代的)">Firefox Nightly与WebVR Enabler Add-on (或其他可替代的)</h3> + +<p>要设置浏览器,请按照下列步骤操作:</p> + +<ol> + <li>Firefox <a href="https://nightly.mozilla.org/">Nightly</a> 和<a href="https://www.mozilla.org/en-US/firefox/developer/">Developer Edition</a> 都支持WebVR。如果你还没有安装,请选择其中之一,并注意安装最新的版本。</li> + <li>然后,安装 <a class="external external-icon" href="http://www.mozvr.com/downloads/webvr-addon-0.1.0.xpi">WebVR Enabler Add-on</a> — 这将启用WebVR并禁用多处理浏览(<a class="basiclink-blue" href="https://wiki.mozilla.org/Electrolysis">E10S</a>),这是一种新的Firefox浏览功能,目前与WebVR不兼容。</li> + <li>最后,重启浏览器。</li> +</ol> + +<div class="note"> +<p><strong>Note</strong>: 要手动开启对WebVR的支持,你可以进入 <a>about:config</a> 然后打开 dom.vr* 选项。WebVR Enabler Add-on更加方便,一次就能完成所需的全部设置。</p> +</div> + +<div class="note"> +<p><strong>Note</strong>: 对于移动端用户,Android版Firefox在<a href="http://nightly.mozilla.org/">Nightly builds</a>中也支持WebVR,但是现在还没优化,欢迎反馈意见。</p> +</div> + +<div 
class="note"> +<p><strong>Note</strong>: 还有可用的WebVR支持的实验性Chrome产品。 要了解更多,请查看Brandon Jones的 <a class="external external-icon" href="http://blog.tojicode.com/2014/07/bringing-vr-to-chrome.html">Bringing VR to Chrome</a>。</p> +</div> + +<h3 id="显示配置">显示配置</h3> + +<p>为了获得最佳性能,以下步骤的显示器配置非常重要。 不这样做会导致过度抖动和延迟。 我们正在努力改进这些方面,使WebVR真正的即插即用,但是现在最好的结果需要手动配置。</p> + +<h4 id="Windows">Windows</h4> + +<p>在控制面板中,先进入<em>Display > Screen Resolution(显示 > 屏幕分辨率)</em>. 设置<em>:</em></p> + +<ul> + <li><em>Orientation</em> to <em>Landscape (flipped). // 横向(翻转)</em></li> + <li><em>Multiple displays</em> to <em>Extend these displays</em>. // 扩展这些显示器</li> +</ul> + +<p><img alt="" src="https://mdn.mozillademos.org/files/10683/win-screen-resolution.png" style="display: block; height: 573px; margin: 0px auto; width: 644px;"></p> + +<p>然后,进入 <em>Advanced Settings > Monitor > Monitor Settings(高级显示设置 > 监视器 > 监视器设置),</em> 设置屏幕刷新频率为 <em>60Hz</em>.</p> + +<p><img alt="" src="https://mdn.mozillademos.org/files/10685/win-monitor.png" style="display: block; height: 573px; margin: 0px auto; width: 644px;"></p> + +<h4 id="Mac">Mac</h4> + +<p>首先,进入System Preferences > Displays > Display. 
设置:</p> + +<ul> + <li><em>Optimize for</em> to <em>Rift</em></li> + <li><em>Rotation</em> to <em>90°</em></li> + <li><em>Refresh</em> to <em>60Hz</em></li> +</ul> + +<p><em><img alt="" src="https://mdn.mozillademos.org/files/10691/mac-displays.png" style="display: block; height: 528px; margin: 0px auto; width: 1342px;"></em></p> + +<p>然后,进入 <em>System Preferences <span class="gray5 light px1">> </span>Displays <span class="gray5 light px1">> </span>Arrangement</em> 设置<em>Arrangement</em>为<em>Mirrored</em>.</p> + +<p><img alt="" src="https://mdn.mozillademos.org/files/10693/mac-displays-mirrored.png" style="display: block; height: 528px; margin: 0px auto; width: 668px;"></p> + +<h2 id="故障排除">故障排除</h2> + +<p>在这个部分,我们提供一些故障排除方法。</p> + +<dl> + <dt>我的头戴式显示器或者位置追踪器相机不工作</dt> + <dd>尝试使用Oculus Rift SDK附带的OculusWorldDemo测试系统,如果您使用的是其他的VR硬件设备,则使用配套的测试系统。 如果您的硬件设备完全不工作,请确保已完全按照随附手册中的说明进行设置。 常见的错误包括将镜头盖留在追踪相机上和忘记插入USB电缆。</dd> + <dt>我的头戴式显示器或者位置追踪器相机还是不工作</dt> + <dd>一个常见的问题是追踪摄像机停止工作,所以你仍然可以看到图像,但它不会跟着你的头一起旋转。 提示:如果摄像机工作,相机的蓝色指示灯将亮起。 如果WebVR应用程序仍然不工作,并且OculusWorldDemo正常运行,请尝试重新启动浏览器 —— Nightly仍然处于实验性阶段,有时会出现异常。</dd> + <dt>即使我正确的配置了 {{anch("Display configuration")}},我看到显示的图像卡顿抖动</dt> + <dd>有可能是您的显卡太慢,您没有独显,或者当Oculus Rift打开时,您的计算机没有切换到显卡。 但我们不能确定适用于所有的电脑。无论哪种情况,你可以通过测试看看发生了什么,比如在Mac上使用<a href="https://gfx.io/">gfxCardStatus</a>软件来测试。 它会让你看到在什么时候集成或独显会切换,或强制使用某一个。 如果它返回消息“您正在使用gfxCardStatus不支持的系统,请确保您使用的是具有双GPU的MacBook Pro。 那么你可能没有GPU,你需要一个更快的处理器或选择容忍。 对于Windows,目前没有类似的应用程序,您必须手动进行更改。</dd> + <dt>我的VR设备旁的第二个监视器表现很奇怪。</dt> + <dd>如果你有第二个监视器(或者笔记本的外接显示器),当你使用 VR设备的时候最好将它断开,否则,有时候它会造成奇怪的问题。</dd> + <dt>Linux系统可以使用吗?</dt> + <dd>WebVR在Linux系统上目前不能使用。未完待续</dd> +</dl>
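<p>作为排除故障的第一步,还可以先确认浏览器是否真的暴露了相关接口。下面是一段示意性的检测代码(<code>checkVRSupport</code> 是为说明而假设的函数名,并非任何规范 API):</p>

```javascript
// 示意性检测代码(假设场景):判断运行环境是否暴露了 WebVR 与 Gamepad 接口。
// 传入 navigator 对象(或测试用的模拟对象),返回一条简单的诊断信息。
function checkVRSupport(nav) {
  var hasWebVR = nav && typeof nav.getVRDisplays === 'function';
  var hasGamepad = nav && typeof nav.getGamepads === 'function';
  if (hasWebVR && hasGamepad) {
    return 'WebVR API 与 Gamepad API 均可用';
  } else if (hasWebVR) {
    return '仅 WebVR API 可用';
  }
  return 'WebVR API 不可用,请确认已安装 WebVR Enabler Add-on,或在 about:config 中开启 dom.vr* 选项';
}

// 在浏览器中可直接传入 window.navigator;这里用模拟对象演示:
console.log(checkVRSupport({ getVRDisplays: function() {}, getGamepads: function() {} }));
```

<p>在浏览器控制台中,可以直接传入 <code>window.navigator</code> 来查看当前环境的结果。</p>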