<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://vrarwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Paulo+Pacheco</id>
	<title>VR &amp; AR Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://vrarwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Paulo+Pacheco"/>
	<link rel="alternate" type="text/html" href="https://vrarwiki.com/wiki/Special:Contributions/Paulo_Pacheco"/>
	<updated>2026-04-18T03:38:00Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Stability_AI_projects.png&amp;diff=25895</id>
		<title>File:Stability AI projects.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Stability_AI_projects.png&amp;diff=25895"/>
		<updated>2023-01-27T11:15:08Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stability AI communities and projects&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Stability_AI_motto.png&amp;diff=25894</id>
		<title>File:Stability AI motto.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Stability_AI_motto.png&amp;diff=25894"/>
		<updated>2023-01-27T11:13:27Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stability AI motto&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Model_comparison_Stable_diffusion.png&amp;diff=25680</id>
		<title>File:Model comparison Stable diffusion.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Model_comparison_Stable_diffusion.png&amp;diff=25680"/>
		<updated>2023-01-06T22:43:15Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stable Diffusion model comparison&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Model_comparison_2.png&amp;diff=25679</id>
		<title>File:Model comparison 2.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Model_comparison_2.png&amp;diff=25679"/>
		<updated>2023-01-06T22:41:52Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Model&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:7._Architecture.png&amp;diff=25678</id>
		<title>File:7. Architecture.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:7._Architecture.png&amp;diff=25678"/>
		<updated>2023-01-06T22:15:08Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stable Diffusion architecture. Source: TensorFlow&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:6._Overview_of_DM.png&amp;diff=25677</id>
		<title>File:6. Overview of DM.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:6._Overview_of_DM.png&amp;diff=25677"/>
		<updated>2023-01-06T22:14:15Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Overview of a DM. Source: Rombach (2022)&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:5._Wide_aspect_ratio.png&amp;diff=25676</id>
		<title>File:5. Wide aspect ratio.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:5._Wide_aspect_ratio.png&amp;diff=25676"/>
		<updated>2023-01-06T22:12:23Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Wide aspect ratio. Source: StabilityAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:4._Stability_Upscaler.png&amp;diff=25675</id>
		<title>File:4. Stability Upscaler.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:4._Stability_Upscaler.png&amp;diff=25675"/>
		<updated>2023-01-06T22:10:57Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stability Upscaler. Left: 128x128 low-resolution image. Right: 512x512 resolution image produced by Upscaler. Source: StabilityAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:3._Examples_of_Stability_2.png&amp;diff=25674</id>
		<title>File:3. Examples of Stability 2.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:3._Examples_of_Stability_2.png&amp;diff=25674"/>
		<updated>2023-01-06T22:09:53Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Examples of images produced using Stable Diffusion 2.0, at 768x768 image resolution. Source: StabilityAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:2._Depth_to_image.png&amp;diff=25673</id>
		<title>File:2. Depth to image.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:2._Depth_to_image.png&amp;diff=25673"/>
		<updated>2023-01-06T22:07:18Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Examples of images produced using Stable Diffusion 2.0, at 768x768 image resolution. Source: StabilityAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:1._Stable_Diffusion_developer_adoption.png&amp;diff=25672</id>
		<title>File:1. Stable Diffusion developer adoption.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:1._Stable_Diffusion_developer_adoption.png&amp;diff=25672"/>
		<updated>2023-01-06T22:00:50Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Stable Diffusion developer adoption. Source: a16z and GitHub.&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Github_copilot.png&amp;diff=25659</id>
		<title>File:Github copilot.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Github_copilot.png&amp;diff=25659"/>
		<updated>2022-12-08T15:43:22Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;GitHub Copilot general overview. Source: QACaffe&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:GPT_Training_process.png&amp;diff=25657</id>
		<title>File:GPT Training process.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:GPT_Training_process.png&amp;diff=25657"/>
		<updated>2022-12-07T14:15:18Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;General overview of the training process using reinforcement learning from human feedback. Source: OpenAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Gpt4-122.jpg&amp;diff=25656</id>
		<title>File:Gpt4-122.jpg</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Gpt4-122.jpg&amp;diff=25656"/>
		<updated>2022-12-07T14:12:36Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;ChatGPT user interface. Source: OpenAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Multilingual_wer.png&amp;diff=25642</id>
		<title>File:Multilingual wer.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Multilingual_wer.png&amp;diff=25642"/>
		<updated>2022-12-06T15:29:37Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WER languages&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Whisper_Word-Error-Rate.png&amp;diff=25641</id>
		<title>File:Whisper Word-Error-Rate.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Whisper_Word-Error-Rate.png&amp;diff=25641"/>
		<updated>2022-12-06T15:28:13Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;WER&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Whisper_models.png&amp;diff=25640</id>
		<title>File:Whisper models.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Whisper_models.png&amp;diff=25640"/>
		<updated>2022-12-06T15:25:11Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Whisper models&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Asr-summary-of-model-architecture.png&amp;diff=25639</id>
		<title>File:Asr-summary-of-model-architecture.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Asr-summary-of-model-architecture.png&amp;diff=25639"/>
		<updated>2022-12-06T15:23:40Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Whisper architecture&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Apple_Vision_Pro&amp;diff=25628</id>
		<title>Apple Vision Pro</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Apple_Vision_Pro&amp;diff=25628"/>
		<updated>2022-12-05T14:14:50Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Apple]] is working on a headset with [[virtual reality]] ([[VR]]) and [[augmented reality]] ([[AR]]) capabilities. While the company hasn&#039;t confirmed it officially, it has been heavily rumored and seems probable that it will be revealed in 2023 &amp;lt;ref name=”1”&amp;gt; Rice-Jones, J (2022). Apple VR headset: release date, features, and price. &#039;&#039;KnowTechie&#039;&#039;. https://knowtechie.com/apple-vr-headset-release-date-features-and-price/&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; MacRumors Staff (2022). Apple Glasses. &#039;&#039;MacRumors&#039;&#039;. https://www.macrumors.com/roundup/apple-glasses/&amp;lt;/ref&amp;gt;. The [[mixed reality]] ([[MR]]) headset is expected to be in line with current [[VR headsets]], albeit with several cameras and sensors that provide bonus functionality. According to Bloomberg, several names have been suggested for this new headset, such as Reality One, Reality Pro, and Reality Processor. These trademarked names might not apply to the final product but they have, nevertheless, given rise to speculation about different [[VR]] and [[AR]] device models &amp;lt;ref name=”3”&amp;gt; Pritchard, T (2022). Apple VR/AR headset - everything we know so far. &#039;&#039;Tom&#039;s Guide&#039;&#039;. https://www.tomsguide.com/news/apple-vr-and-mixed-reality-headset-release-date-price-specs-and-leaks&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
One of those other related products could be what has been called the [[Apple Glass]], see-through lenses that will provide a fully AR experience. According to the available information, they would be a lightweight pair of glasses able to project imagery and information onto the real world &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Tim Cook, Apple’s CEO, has mentioned that AR has more potential than VR in the long term. This product, however, is expected to become a reality after Apple’s VR/AR headset since current VR technology is more mature and easier to produce &amp;lt;ref name=”4”&amp;gt; Apple Insider. Apple VR. &#039;&#039;Apple Insider&#039;&#039;. https://appleinsider.com/inside/apple-vr&amp;lt;/ref&amp;gt;. The name Apple Glass most likely won’t be used for the final product due to its association with [[Google Glass]]. A possible release date for this device in 2025 has been rumored &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Blake, A (2022). Apple mixed-reality headset: Everything we know about Apple&#039;s VR headset. &#039;&#039;Digital Trends&#039;&#039;. https://www.digitaltrends.com/computing/apple-mixed-reality-headset-rumors-news-price-release-date/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Apparently, Apple’s headset is intended for short trips into VR, with users able to use it for communication, viewing content, and gaming, but not as a constant, all-immersive experience &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The headset’s main feature will be mixed reality, with several external cameras providing features like hand-tracking and gesture control. Some reports claim that games are not a priority for Apple’s VR/AR headset &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
After the release of Apple’s first headset, a cheaper version is expected to be launched, with fewer features than the premium model. If true, Apple would have two headsets at different price points focused on mixed reality, plus the Apple lenses for augmented reality. This line of products could be a game-changer for the headset industry, inspiring a new wave of demand and products in the VR and AR space &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Release date and price==&lt;br /&gt;
&lt;br /&gt;
While some initially expected the headset’s reveal and release date information during Apple’s 2022 Worldwide Developers Conference (WWDC), this did not occur. Nevertheless, references to a headset in the beta versions of iOS 16 indicate that a release date is not far away &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
All available information hints that 2023 should be the release year for Apple’s new device, with some suggesting January for the announcement and the product launch during the second quarter of 2023 &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt; Priday, R (2022). Apple VR/AR headset, 15-inch Macbook Air, HomePod 2 and more could arrive in 2023. &#039;&#039;Tom&#039;s Guide&#039;&#039;. https://www.tomsguide.com/news/apple-vrar-headset-15-inch-macbook-air-homepod-2-and-more-could-arrive-in-2023&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Several price points have been proposed, from $2000 to $3000, seemingly indicating that the first-generation model will be a product aimed at industry use &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; McMillan, M (2022). Apple AR-VR headset just tipped for January launch - and it could be $2,000. &#039;&#039;Tom&#039;s Guide&#039;&#039;. https://www.tomsguide.com/news/apple-arvr-headset-just-tipped-for-january-launch-and-it-could-be-dollar2000&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Design and specifications==&lt;br /&gt;
&lt;br /&gt;
[[File:AppleVR-AR.png|thumb|Figure 1: Apple’s mixed reality headset concept design. Creator: Antonio de Rosa]]&lt;br /&gt;
&lt;br /&gt;
[[File:Apple MR side view.png|thumb|Figure 2: Side view of Apple’s mixed reality headset (concept design). Creator: Ian Zelbo]]&lt;br /&gt;
&lt;br /&gt;
Since it is expected to be a mixed reality headset, combining VR and AR, the proposed designs are of a full wraparound set using straps similar to those on the Apple Watch Sport Band (figures 1 and 2). Different weights have also been suggested for the headset, ranging from as little as 150 grams (0.33 pounds) to between 300 and 400 grams (0.66 - 0.88 pounds). It has also been rumored that it will be a wireless device, giving the user complete freedom to move around without being disturbed by cables &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To achieve the augmented reality side of this dual-nature headset, cameras will be needed to capture the outside world and feed it back to the user. Reports have given a number of up to 12 cameras and lidar sensors mounted on the device. However, this number has changed to 14 and then 15 cameras as new reports and information have been made available &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. According to Digital Trends, of the 15 cameras, eight would be for AR, “one for environmental detection, and six for ‘innovative biometrics&#039; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.”&lt;br /&gt;
&lt;br /&gt;
===Resolution===&lt;br /&gt;
&lt;br /&gt;
Journalistic reports about the Apple headset have suggested two 8K displays, an unprecedented level of detail when compared to the [[HTC Vive Cosmos Elite]], which has a per-eye resolution of 1440 x 1700 &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. This, however, has been contradicted by others who have reported a resolution of 4000 x 4000 for the front-facing lenses &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. Regarding pixels-per-inch, it seems that there was an increase from the initial report of 2,800 ppi to 3,500 ppi, after Apple asked Samsung Display and LG Display to produce displays with the increased ppi. However, these updated displays are not expected to be used in the first-generation headset &amp;lt;ref name=”8”&amp;gt; Lee, G (2022). Apple request development of 3500ppi OLEDoS to Samsung and LG. &#039;&#039;The Elec&#039;&#039;. https://www.thelec.net/news/articleView.html?idxno=4220&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Fathi, S (2022). Apple looking to make its AR/VR headsets more immersive with sharper displays. &#039;&#039;MacRumors&#039;&#039;. https://www.macrumors.com/2022/09/28/apple-ar-vr-headsets-more-immersive/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Further information reveals that the headset’s front panels will be micro-LED displays with a third panel for peripheral vision being an AMOLED display running at a lower resolution, allowing for a foveated display &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Prescription lenses===&lt;br /&gt;
&lt;br /&gt;
A feature that has been speculated about is the ability for users to order custom prescription lenses that can be inserted into the headset. This could be related to a name trademarked by Apple, “Optica” &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Refresh rate and chipset===&lt;br /&gt;
&lt;br /&gt;
There hasn’t been a lot of information about the refresh rate that will be used. Normally, VR headsets aim for 90 Hz or higher in order to minimize lag and motion sickness &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The chip that powers the headset is expected to be a custom-designed Apple Silicon chip and one of the company’s most advanced and powerful processors. This would be the new M2 chip with 16 GB of RAM. The power would be balanced by the efficiency of Apple’s ARM-based chip architecture, which is ideal for compact devices, reducing or nullifying the need for cooling &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Digital Trends and The Information have both reported an alternative to the single M2 chip: two chips on the headset with one offering the main computing power and the other managing the device’s sensors &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a power source, Apple’s 96 W adapters will probably be used &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Wi-Fi===&lt;br /&gt;
&lt;br /&gt;
The VR/AR headset is expected to have Wi-Fi 6E instead of the Wi-Fi 6 of the iPhone 13. This would allow for lower latency and the transfer of large amounts of data. It could also mean that the heavy processing could be done by a separate device (Mac or iPhone) connected wirelessly, without the need for a physical cable &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Hand-tracking===&lt;br /&gt;
&lt;br /&gt;
[[File:Apple watch VR.png|thumb|Figure 3. Apple VR patent showing hand-tracking with two Apple watches. Source: Digital Trends.]]&lt;br /&gt;
&lt;br /&gt;
Different approaches to the headset’s hand-tracking system have been rumored based on patents submitted by Apple. One such approach could be a “clothespin-like finger clip” that would serve as the input device. Based on patents, finger-mounted devices could detect movement and provide haptic feedback. Another possibility would be using a pair of Apple Watches, allowing the user to interact with the virtual world using gesture controls (figure 3). However, this tracking system is not likely to be implemented first since Apple Watches have a high cost &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. Other patents mention the use of smart rings to track the movements of the fingers and hands and the ability to detect objects that the user is holding, like an Apple Pencil &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Operating system===&lt;br /&gt;
&lt;br /&gt;
The device’s operating system seems to be called [[xrOS]] (extended reality OS) internally at Apple. While not much information has been released, rumors suggest that it will include new versions of the company’s core apps.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
[[Category:Platforms]] [[Category:Virtual Reality Platforms]] [[Category:Virtual Reality]] [[Category:Virtual Reality Headsets]] [[Category:Devices]] [[Category:Virtual Reality Devices]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=VisionOS&amp;diff=25627</id>
		<title>VisionOS</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=VisionOS&amp;diff=25627"/>
		<updated>2022-12-05T13:58:01Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: Created page with &amp;quot;xrOS (Extended Reality OS) is the operating system (OS) for the upcoming Apple MR headset. The current version of the name being used internally by Apple was reported on...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;xrOS (Extended Reality OS) is the operating system (OS) for the upcoming [[Apple MR headset]]. The current version of the name, used internally by Apple, was reported in December 2022 by Bloomberg&#039;s Mark Gurman. &amp;lt;ref name=”1” &amp;gt; Moon, M (2022). Apple&#039;s upcoming mixed reality headset will reportedly run &#039;xrOS&#039;. Engadget. https://www.engadget.com/apple-xros-mixed-reality-headset-130532613.html?guccounter=1&amp;amp;guce_referrer=aHR0cHM6Ly93d3cuc3RhcnRwYWdlLmNvbS8&amp;amp;guce_referrer_sig=AQAAAH81CTC5Uu8KVtRICz-pFdeGCThUHXl8ZKqzdUFVfC_waRByDQ6LmGPBKllAwuxppl8H7qvZ03GuWfgnUbmVK3YGiTWRbZNB6Hl1L8YijwqCAlZHNMSjkTpYLq3iU96lysOJwjCcTaKuhDyA6zVCd8PlKukdZfm25Gv-XNj4RbaN&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2” &amp;gt; Mok, A (2022). Apple&#039;s long-rumored headset is reportedly getting a new name for its software: &#039;xrOS&#039;. Insider. https://www.businessinsider.com/apples-vrar-headset-software-will-reportedly-be-renamed-xros-2022-12?IR=T&amp;lt;/ref&amp;gt; There is no information on whether this will be the final name of the headset&#039;s OS. &amp;lt;ref name=”3” &amp;gt; Malhotra, V (2022). Apple’s Long-Rumored Mixed Reality Headset to Run ‘xrOS’. Beebom. https://beebom.com/apple-mixed-reality-headset-run-xros/&amp;lt;/ref&amp;gt; Meta also seemed to have been interested in using xrOS as a name for its device&#039;s OS. &amp;lt;ref name=”4” &amp;gt; Lewis, D (2022). Apple xrOS, Twitter safe on the App Store, and 1 way to freshen up old AirPods. Mac O&#039;Clock. https://medium.com/macoclock/apple-xros-twitter-safe-on-the-app-store-and-1-way-to-freshen-up-old-airpods-david-lewis-b372310d11d8&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Previous rumors suggested that the MR headset&#039;s OS was known as realityOS. &amp;lt;ref name=”2” &amp;gt;&amp;lt;/ref&amp;gt; The change of name seems to better reflect the device&#039;s vision as a [[mixed reality]] ([[MR]]) headset, with the inclusion of [[augmented reality]] ([[AR]]) and [[virtual reality]] ([[VR]]). These [[XR]] ([[extended reality]]) capabilities will allow the user either to be completely immersed in a virtual environment or to augment reality with an overlay of visual information over the real world. &amp;lt;ref name=”2” &amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5” &amp;gt; Clover, J (2022). Apple Now Calling AR/VR Headset Operating System &#039;xrOS&#039;. MacRumors. https://www.macrumors.com/2022/12/01/apple-headset-operating-system-xros/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The name xrOS appeared on patent applications filed in key markets worldwide &amp;lt;ref name=”6” &amp;gt; Evans, J (2022). Apple gives up on &#039;Reality,&#039; but still wants to extend it. ComputerWorld. https://www.computerworld.com/article/3681892/apple-gives-up-on-reality-but-still-wants-to-extend-it.html&amp;lt;/ref&amp;gt; like the US, Europe, and Asia by Deep Dive LLC. &amp;lt;ref name=”1” &amp;gt;&amp;lt;/ref&amp;gt; According to Gurman, this is a shell corporation that may be owned by Apple. &amp;lt;ref name=”2” &amp;gt;&amp;lt;/ref&amp;gt; Using shell corporations to trademark products is a common practice for Apple. &amp;lt;ref name=”3” &amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
xrOS is said to include new versions of core apps like Messages, Maps, and FaceTime &amp;lt;ref name=”7” &amp;gt; Riley, D (2022). Apple mixed-reality headset operating system reportedly now known as ‘xrOS’. SiliconANGLE. https://siliconangle.com/2022/12/01/apple-mixed-reality-headset-operating-system-reportedly-now-known-xros/&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8” &amp;gt; Bains, C (2022). Apple’s mixed-reality headset could finally appear next year. The Shortcut. https://www.theshortcut.com/p/apple-mixed-reality-headset-2023&amp;lt;/ref&amp;gt; Third parties will also be able to develop their games and apps using the operating system&#039;s software development kit. &amp;lt;ref name=”2” &amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7” &amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The name change may be an indication that the announcement and release of Apple&#039;s MR headset could be in 2023. &amp;lt;ref name=”3” &amp;gt;&amp;lt;/ref&amp;gt; While there are no official announcements, WWDC 2023 is being suggested as the possible event where the device will finally be unveiled. &amp;lt;ref name=”5” &amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Figure_4.3_-_Inpainting_corgi_3.png&amp;diff=25583</id>
		<title>File:Figure 4.3 - Inpainting corgi 3.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Figure_4.3_-_Inpainting_corgi_3.png&amp;diff=25583"/>
		<updated>2022-11-13T12:15:30Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Inpainting. Corgi added. Source: OpenAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Figure_4.1_-_Inpainting_ex_1.png&amp;diff=25582</id>
		<title>File:Figure 4.1 - Inpainting ex 1.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Figure_4.1_-_Inpainting_ex_1.png&amp;diff=25582"/>
		<updated>2022-11-13T12:15:05Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Inpainting. Corgi added. Source: OpenAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Figure_3.2_-_Dall-e_2.png&amp;diff=25581</id>
		<title>File:Figure 3.2 - Dall-e 2.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Figure_3.2_-_Dall-e_2.png&amp;diff=25581"/>
		<updated>2022-11-13T12:12:26Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 3 - Comparison of images generated by DALL-E 1 and 2. Credit: OpenAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Figure_3.1_-_Dall-E_1.png&amp;diff=25580</id>
		<title>File:Figure 3.1 - Dall-E 1.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Figure_3.1_-_Dall-E_1.png&amp;diff=25580"/>
		<updated>2022-11-13T12:12:04Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 3 - Comparison of images generated by DALL-E 1 and 2. Credit: OpenAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Figure_2_-_DALL-E_2_image_generation_process.png&amp;diff=25579</id>
		<title>File:Figure 2 - DALL-E 2 image generation process.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Figure_2_-_DALL-E_2_image_generation_process.png&amp;diff=25579"/>
		<updated>2022-11-13T12:09:56Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 2 - DALL-E 2 image generation process. Credit: OpenAI&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Figure_1_-_Image_generated_by_DALL-E_.png&amp;diff=25578</id>
		<title>File:Figure 1 - Image generated by DALL-E .png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Figure_1_-_Image_generated_by_DALL-E_.png&amp;diff=25578"/>
		<updated>2022-11-13T12:05:17Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 1 - Image generated by DALL-E. Credit: Ramesh et al. (2022)&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:OpAIGym.png&amp;diff=25575</id>
		<title>File:OpAIGym.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:OpAIGym.png&amp;diff=25575"/>
		<updated>2022-11-03T12:49:51Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;OpenAI Gym agent and environment&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Apple_Vision_Pro&amp;diff=25569</id>
		<title>Apple Vision Pro</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Apple_Vision_Pro&amp;diff=25569"/>
		<updated>2022-10-14T18:19:26Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: Created page with &amp;quot;Apple is working on a headset with virtual reality (VR) and augmented reality (AR) capabilities. While the company hasn&amp;#039;t confirmed it officially, it has been...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Apple is working on a headset with [[virtual reality]] ([[VR]]) and [[augmented reality]] ([[AR]]) capabilities. While the company hasn&#039;t confirmed it officially, it has been heavily rumored and seems probable that it will be revealed in 2023 &amp;lt;ref name=”1”&amp;gt; Rice-Jones, J (2022). Apple VR headset: release date, features, and price. &#039;&#039;KnowTechie&#039;&#039;. https://knowtechie.com/apple-vr-headset-release-date-features-and-price/&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; MacRumors Staff (2022). Apple Glasses. &#039;&#039;MacRumors&#039;&#039;. https://www.macrumors.com/roundup/apple-glasses/&amp;lt;/ref&amp;gt;. The [[mixed reality]] ([[MR]]) headset is expected to be in line with current [[VR headsets]], albeit with several cameras and sensors that provide bonus functionality. According to Bloomberg, several names have been suggested for this new headset, such as Reality One, Reality Pro, and Reality Processor. These trademarked names might not apply to the final product but they have, nevertheless, given rise to speculation about different [[VR]] and [[AR]] device models &amp;lt;ref name=”3”&amp;gt; Pritchard, T (2022). Apple VR/AR headset - everything we know so far. &#039;&#039;Tom&#039;s Guide&#039;&#039;. https://www.tomsguide.com/news/apple-vr-and-mixed-reality-headset-release-date-price-specs-and-leaks&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
One of those other related products could be what has been called the [[Apple Glass]], see-through lenses that will provide a fully AR experience. According to the available information, they would be a lightweight pair of glasses able to project imagery and information onto the real world &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Tim Cook, Apple’s CEO, has mentioned that AR has more potential than VR in the long term. This product, however, is expected to become a reality after Apple’s VR/AR headset since current VR technology is more mature and easier to produce &amp;lt;ref name=”4”&amp;gt; Apple Insider. Apple VR. &#039;&#039;Apple Insider&#039;&#039;. https://appleinsider.com/inside/apple-vr&amp;lt;/ref&amp;gt;. The name Apple Glass most likely won’t be used for the final product due to its association with [[Google Glass]]. A possible release date for this device in 2025 has been rumored &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Blake, A (2022). Apple mixed-reality headset: Everything we know about Apple&#039;s VR headset. &#039;&#039;Digital Trends&#039;&#039;. https://www.digitaltrends.com/computing/apple-mixed-reality-headset-rumors-news-price-release-date/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Apparently, Apple’s headset is intended for short trips into VR, with users able to use it for communication, viewing content, and gaming, but not as a constant, all-immersive experience &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The headset’s main feature will be mixed reality, with several external cameras providing features like hand-tracking and gesture control. Some reports claim that games are not a priority for Apple’s VR/AR headset &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
After the release of Apple’s first headset, a cheaper version is expected to be launched, with fewer features than the premium model. If true, Apple would have two headsets at different price points focused on mixed reality, plus the Apple lenses for augmented reality. This line of products could be a game-changer for the headset industry, inspiring a new wave of demand and products in the VR and AR space &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Release date and price==&lt;br /&gt;
&lt;br /&gt;
While some initially expected the headset’s reveal and release date information during Apple’s 2022 Worldwide Developers Conference (WWDC), this did not occur. Nevertheless, references to a headset in the beta versions of iOS 16 indicate that a release date is not far away &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
All available information hints that 2023 should be the release year for Apple’s new device, with some suggesting January for the announcement and the product launch during the second quarter of 2023 &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt; Priday, R (2022). Apple VR/AR headset, 15-inch Macbook Air, HomePod 2 and more could arrive in 2023. &#039;&#039;Tom&#039;s Guide&#039;&#039;. https://www.tomsguide.com/news/apple-vrar-headset-15-inch-macbook-air-homepod-2-and-more-could-arrive-in-2023&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Several price points have been proposed, from $2000 to $3000, seemingly indicating that the first-generation model will be a product aimed at industry use &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; McMillan, M (2022). Apple AR-VR headset just tipped for January launch - and it could be $2,000. &#039;&#039;Tom&#039;s Guide&#039;&#039;. https://www.tomsguide.com/news/apple-arvr-headset-just-tipped-for-january-launch-and-it-could-be-dollar2000&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Design and specifications==&lt;br /&gt;
&lt;br /&gt;
[[File:AppleVR-AR.png|thumb|Figure 1: Apple’s mixed reality headset concept design. Creator: Antonio de Rosa]]&lt;br /&gt;
&lt;br /&gt;
[[File:Apple MR side view.png|thumb|Figure 2: Side view of Apple’s mixed reality headset (concept design). Creator: Ian Zelbo]]&lt;br /&gt;
&lt;br /&gt;
Since it is expected to be a mixed reality headset, combining VR and AR, the proposed designs are of a full wraparound set using straps similar to those on the Apple Watch Sport Band (figures 1 and 2). Different weights have also been suggested for the headset, ranging from as little as 150 grams (0.33 pounds) to between 300 and 400 grams (0.66 - 0.88 pounds). It has also been rumored that it will be a wireless device, giving the user complete freedom to move around without being disturbed by cables &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To achieve the augmented reality side of this dual-nature headset, cameras will be needed to capture the outside world and feed it back to the user. Reports have given a number of up to 12 cameras and lidar sensors mounted on the device. However, this number has changed to 14 and then 15 cameras as new reports and information have been made available &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. According to Digital Trends, of the 15 cameras, eight would be for AR, “one for environmental detection, and six for ‘innovative biometrics&#039; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.”&lt;br /&gt;
&lt;br /&gt;
===Resolution===&lt;br /&gt;
&lt;br /&gt;
Journalistic reports about the Apple headset have suggested two 8K displays, an unprecedented level of detail when compared to the [[HTC Vive Cosmos Elite]], which has a per-eye resolution of 1440 x 1700 &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. This, however, has been contradicted by others who have reported a resolution of 4000 x 4000 for the front-facing lenses &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. Regarding pixels-per-inch, it seems that there was an increase from the initial report of 2,800 ppi to 3,500 ppi, after Apple asked Samsung Display and LG Display to produce displays with the increased ppi. However, these updated displays are not expected to be used in the first-generation headset &amp;lt;ref name=”8”&amp;gt; Lee, G (2022). Apple request development of 3500ppi OLEDoS to Samsung and LG. &#039;&#039;The Elec&#039;&#039;. https://www.thelec.net/news/articleView.html?idxno=4220&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Fathi, S (2022). Apple looking to make its AR/VR headsets more immersive with sharper displays. &#039;&#039;MacRumors&#039;&#039;. https://www.macrumors.com/2022/09/28/apple-ar-vr-headsets-more-immersive/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Further information reveals that the headset’s front panels will be micro-LED displays with a third panel for peripheral vision being an AMOLED display running at a lower resolution, allowing for a foveated display &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Prescription lenses===&lt;br /&gt;
&lt;br /&gt;
A feature that has been speculated about is the ability for users to order custom prescription lenses that can be inserted into the headset. This could be related to a name trademarked by Apple, “Optica” &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Refresh rate and chipset===&lt;br /&gt;
&lt;br /&gt;
There hasn’t been a lot of information about the refresh rate that will be used. Normally, VR headsets aim for 90 Hz or higher in order to minimize lag and motion sickness &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The chip that powers the headset is expected to be a custom-designed Apple Silicon chip and one of the company’s most advanced and powerful processors. This would be the new M2 chip with 16 GB of RAM. The power would be balanced by the efficiency of Apple’s ARM-based chip architecture, which is ideal for compact devices, reducing or nullifying the need for cooling &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Digital Trends and The Information have both reported an alternative to the single M2 chip: two chips on the headset with one offering the main computing power and the other managing the device’s sensors &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
As a power source, Apple’s 96 W adapters will probably be used &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Wi-Fi===&lt;br /&gt;
&lt;br /&gt;
The VR/AR headset is expected to have Wi-Fi 6E instead of the Wi-Fi 6 of the iPhone 13. This would allow for lower latency and the transfer of large amounts of data. It could also mean that the heavy processing could be done by a separate device (Mac or iPhone) connected wirelessly, without the need for a physical cable &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Hand-tracking===&lt;br /&gt;
&lt;br /&gt;
[[File:Apple watch VR.png|thumb|Figure 3. Apple VR patent showing hand-tracking with two Apple watches. Source: Digital Trends.]]&lt;br /&gt;
&lt;br /&gt;
Different approaches to the headset’s hand-tracking system have been rumored based on patents submitted by Apple. One such approach could be a “clothespin-like finger clip” that would serve as the input device. Based on patents, finger-mounted devices could detect movement and provide haptic feedback. Another possibility would be using a pair of Apple Watches, allowing the user to interact with the virtual world using gesture controls (figure 3). However, this tracking system is not likely to be implemented first since Apple Watches have a high cost &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. Other patents mention the use of smart rings to track the movements of the fingers and hands and the ability to detect objects that the user is holding, like an Apple Pencil &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Apple_watch_VR.png&amp;diff=25568</id>
		<title>File:Apple watch VR.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Apple_watch_VR.png&amp;diff=25568"/>
		<updated>2022-10-14T18:13:06Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 3. Apple VR patent showing hand-tracking with two Apple watches. Source: Digital Trends.&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Apple_MR_side_view.png&amp;diff=25567</id>
		<title>File:Apple MR side view.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Apple_MR_side_view.png&amp;diff=25567"/>
		<updated>2022-10-14T17:56:45Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 2: Side view of Apple’s mixed reality headset (concept design). Creator: Ian Zelbo&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:AppleVR-AR.png&amp;diff=25566</id>
		<title>File:AppleVR-AR.png</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:AppleVR-AR.png&amp;diff=25566"/>
		<updated>2022-10-14T17:55:35Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 1: Apple’s mixed reality headset concept design. Creator: Antonio de Rosa&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:August-kamp-girl-with-pearl-earring-outpainting.jpg&amp;diff=25552</id>
		<title>File:August-kamp-girl-with-pearl-earring-outpainting.jpg</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:August-kamp-girl-with-pearl-earring-outpainting.jpg&amp;diff=25552"/>
		<updated>2022-10-07T17:50:50Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Figure 1. Example of August Kamp&#039;s outpainting of &amp;quot;Girl with a Pearl Earring&amp;quot;. Source: DPReview.&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Empathy&amp;diff=24912</id>
		<title>Empathy</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Empathy&amp;diff=24912"/>
		<updated>2017-12-13T19:14:26Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{TOCRIGHT}}&lt;br /&gt;
==Introduction==&lt;br /&gt;
Empathy refers to the cognitive and emotional reaction of an individual to the observed experiences of another. It is the experience of understanding another person’s condition from their perspective; the ability to recognize, feel, and share the emotions of another person or even a fictional character. Empathy involves not only understanding a person’s condition from her perspective (a cognitive process) but also sharing her emotions or distress (an emotional process). Empathy can be confused with pity, sympathy, and compassion, which are all reactions to the predicament of others. The term comes from the psychologist Edward Titchener who, in 1909, translated the German word Einfühlung (‘feeling into’) as ‘empathy’. &amp;lt;ref name=”1”&amp;gt;Shamay-Tsoory, S.G. (2011). The neural bases for empathy. The Neuroscientist, 17(1): 18-24&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;Psychology Today. Empathy. Retrieved from https://www.psychologytoday.com/basics/empathy&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;Burton, N. (2014). Empathy and altruism: are they selfish? Retrieved from https://www.psychologytoday.com/blog/hide-and-seek/201410/empathy-and-altruism-are-they-selfish&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;Burton, N. (2015). Empathy vs sympathy. Retrieved from https://www.psychologytoday.com/blog/hide-and-seek/201505/empathy-vs-sympathy&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to Decety (2011), in the developmental psychology and social psychology disciplines, empathy is defined “as an affective response stemming from the understanding of another’s emotional state or condition similar to what the other person is feeling or would be expected to feel in the given situation.” Others define it as a specific set of congruent emotions - the feelings that are more focused on others than on the self. Another definition - derived from psychoanalysis - describes empathy as having two acts: the first is an identification with the other person, and the second an awareness of one’s own feelings after the identification, resulting in an awareness of the object’s feelings. &amp;lt;ref name=”5”&amp;gt;Decety, J. (2011). Dissecting the neural mechanisms mediating empathy. Emotion Review, 3(1): 92-108&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
There are several neural components that contribute to empathy. Indeed, one that has received some notoriety is “mirror neurons”. Research on macaques showed that these neurons are involved in reacting to emotions expressed by others and then reproducing them. These neurons are also present in humans, and there have been some controversies in the fields of psychology, biology, and ethology over whether empathy is a uniquely human trait. Besides this, there is also the debate over whether empathy is an emotional or cognitive construct - sensing another’s feelings versus understanding another’s perspective. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt;Elliott, R., Bohart, A.C., Watson, J.C., and Greenberg, L.S. (2011). Empathy. In J. Norcross (ed.), Psychotherapy relationships that work (2nd ed.) (pp. 132-152). New York, Oxford University Press&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The lowest common denominator of all empathic processes is that one individual is affected by another’s emotional or arousal state. &amp;lt;ref name=”7”&amp;gt;de Waal, F.B.M. (2008). Putting the altruism back into altruism: the evolution of empathy. Annual Review of Psychology, 59: 279–300&amp;lt;/ref&amp;gt; Shamay-Tsoory (2011) states that evidence supports a model with two distinct systems for empathy - an emotional system and a cognitive system. Emotional empathy has been described as “the capacity to experience affective reactions to the observed experiences of others or share a ‘fellow feeling’”, and cognitive empathy “as a cognitive role-taking ability, or the capacity to engage in the cognitive process of adopting another’s psychological point of view.” Emotional empathy can involve different related underlying processes such as emotional contagion, emotion recognition, and shared pain. Cognitive empathy, on the other hand, involves making inferences regarding the other’s affective and cognitive mental states. According to Shamay-Tsoory (2011), while the two systems can work together, “they may be behaviorally, developmentally, neurochemically, and neuroanatomically dissociable.” &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some researchers consider that empathy is not unique to humans since many of the same biological mechanisms are shared with other mammalian species. Yet, since humans possess high-level cognitive abilities like language, executive function, and theory of mind on top of older social and emotional capabilities, they are considered a special case. These evolutionarily newer cognitive features expand the possible range of behaviors that can result from empathy, or from its lack. These can range from positive behaviors like caring for others - even towards individuals from different species - to negative behaviors such as cruelty and dehumanization when there is a lack of empathy. Deficits in empathy are characteristic of several psychopathologies. Therefore, a better knowledge of the neural circuits that relate to empathy is essential to advance the understanding of interpersonal sensitivity, basic neural and cognitive mechanisms of emotion processing, the relation of these mechanisms with cognition and motivation, individual differences in personality traits, and mental health. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Empathy-related behaviors manifest early in development; 6-month-old infants already show a preference for characters that help others over characters that are not cooperative. There are also suggestions that an early form of affective perspective-taking that does not rely on emotion contagion or mimicry occurs in children aged 18 to 25 months. Evidence also points to prosocial behaviors (e.g. altruistic helping) emerging in early childhood, with one-year-old children beginning to comfort victims of distress. Children between 14 and 18 months also start displaying spontaneous and unrewarded helping behaviors. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While today there is great interest in empathy research, between 1975 and 1995 there was a general lack of interest in this field of study. After this period, it regained scientific interest in the developmental and social psychology fields. The study of empathy evolved from an important component of ‘emotional intelligence’ into a multidisciplinary field that encompasses economics, evolutionary biology, and affective neuroscience. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Origins of empathy==&lt;br /&gt;
&lt;br /&gt;
According to de Waal (2008), “empathy allows one to quickly and automatically relate to the emotional states of others, which is essential for the regulation of social interactions, coordinated activity, and cooperation toward shared goals.” It is very probable that the evolutionary basis for empathy started in the context of parental care even before the human species had evolved. Human infants signal their state through smiling and crying to call the attention of their parents. Analogous mechanisms occur in other animals in which reproduction relies on feeding, cleaning, and warming the infants. De Waal (2008) suggested that “avian or mammalian parents alert to and affected by their offspring’s needs likely out-reproduced those who remained indifferent.” &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Chimpanzees, gorillas, and humans share various parenting mechanisms with other placental mammals, such as internal gestation, lactation, and attachment mechanisms that involve neuropeptides (e.g. oxytocin). The development of parenting behavior in mammals paved the way for increased exposure and responsiveness to the emotional signals of others. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since humans are intrinsically social, with survival being dependent on social interactions with others, alliances, and accurate social judgments, it follows that specific neurobiological mechanisms evolved to perceive, understand, predict, and respond to the internal states of others. It has been suggested that empathic behavior evolved due to its contribution to genetic fitness. Once it evolved, it could be applied outside the parental-care context, according to the principle of motivational autonomy which states that motivation for a behavior becomes disconnected from its ultimate goal. This would lead to the empathic capacity playing a bigger role in the wider network of social relationships. This is exemplified when people send money to help distant disaster victims. In this case, empathy works beyond its original evolutionary context. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
According to some researchers, the earliest system relating to empathy is the emotional contagion system (i.e. one person is affected by another’s emotional or arousal state). A more advanced system would be the cognitive empathic perspective-taking system, which involves higher cognitive functions. As evidence of this, emotional contagion has been observed in rodents, while rudimentary cognitive aspects of empathy have only been described in the closest living relatives of humans, the chimpanzees. Also, emotional contagion is observed earlier in development than cognitive perspective-taking abilities. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Theory of mind==&lt;br /&gt;
&lt;br /&gt;
Theory of mind is one of the bases of empathy. It is the process of understanding another person’s perspective - putting oneself in someone else’s shoes and imagining their thoughts and feelings. It is the ability to understand that others see things differently and have different beliefs, intents, desires, and emotions. It appears at about four years of age and improves over time. It has been suggested that theory of mind has its neural basis in mirror neurons. These neurons fire when a particular action is carried out, or when the same action is observed in others. This enables the interpretation of actions and the inference of the beliefs, intents, or desires of other people. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Advantages of empathy==&lt;br /&gt;
&lt;br /&gt;
The main advantage of empathy is that it increases prosocial behaviors, playing a crucial role in human social interactions as an essential component for healthy coexistence. Empathy is also the basis of intimacy and close connection; without it, relationships become emotionally shallow. &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;Streep, P. (2017). 6 things you need to know about empathy. Retrieved from https://www.psychologytoday.com/blog/tech-support/201701/6-things-you-need-know-about-empathy&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The absence of empathy would mean that the inner selves and feelings of people who are close would remain a mystery. Empathy also helps stop bad behavior from continuing, since a person becomes aware of the pain it is causing another. In these cases too, a lack of empathy can lead to devastating results. &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Evolutionarily, empathy is important because it promotes parental care, social attachment, and prosocial behavior. It assists in social interactions, group activities, and teaching and learning, all essential to human life. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Characteristics of empathy==&lt;br /&gt;
&lt;br /&gt;
While most people associate empathy with intuition - something closer to a gut reaction than a function of reasoning - empathy is not, in fact, based on intuition. Instead, psychologists suggest that empathy consists of emotion sharing and of executive control to regulate and modulate the experience, both supported by specific neural systems that interact with each other. &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
MRI experiments have improved the theoretical understanding of empathy by relating specific brain regions to the experience of empathy. Furthermore, other actions relevant to the understanding of empathy, such as mimicry and mirroring, also take place in specific regions of the brain. There are several brain areas involved in creating the sense of empathy. According to Decety (2011), these include the cortex, “the autonomic nervous system (ANS), hypothalamic-pituitary-adrenal axis (HPA), and endocrine systems that regulate bodily states, emotion, and reactivity.” &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another characteristic of empathy is that the capacity for it is innate, although it is a behavior that needs development. Infants learn to identify and regulate emotions through dyadic interactions with their caretakers (mainly their mothers). A mother who is attentive to the child’s needs and cues allows the infant to develop emotionally, laying the foundation for the child’s sense of self, sense of other, and eventually empathy. &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The capacity for empathy is also variable; it differs from person to person according to the level of their emotional intelligence (the ability to know what one is feeling, to label and name different emotions precisely, and to use one’s emotions to inform thinking). The more connected a person is with her own emotions, the greater her capacity to empathize. &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Finally, psychology’s interpretation of empathy as an individual trait may have limitations. Indeed, anthropologists have suggested that empathy might be dyadic, noting that the person who is the target of empathy is as important as the empathizer. They also suggest that cultural and social norms act as moderators of empathy. &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Empathy and associated phenomena==&lt;br /&gt;
&lt;br /&gt;
The term empathy has been used to describe various phenomena, such as “feelings of concern for other people that create a motivation to help them, experiencing emotions that match another individual’s emotions, [and] knowing what the other is thinking or feeling.” But while these phenomena are related to one another, they are not elements or aspects of a single thing that is empathy. This diversity of phenomena is one of the reasons for the historical debate about the nature of empathy and whether it distinguishes humans from other species. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Empathy vs sympathy===&lt;br /&gt;
&lt;br /&gt;
It is common for people to use the terms empathy and sympathy interchangeably, but they are different processes. Feeling sympathy for someone is identifying with the situation that person is in; however, it does not necessarily lead to a sense of connection with the other person or with what she is feeling. Empathy involves both correctly identifying what someone is feeling and sharing those feelings. In short, sympathy is feeling for someone, while empathy is feeling with them. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Empathy vs pity===&lt;br /&gt;
&lt;br /&gt;
Pity is a more distant and superficial feeling when compared to empathy, sympathy, or compassion. It is a feeling of discomfort towards someone, a group of people, or a thing in distress. It implies that the sufferer does not deserve his suffering and is unable to alleviate it. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Empathy vs compassion===&lt;br /&gt;
&lt;br /&gt;
Compared to empathy, compassion is more engaged, being associated with an inclination to diminish the suffering of others. It is one of the main incentives of altruism. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Empathy vs altruism===&lt;br /&gt;
&lt;br /&gt;
Altruism refers to the unselfish concern for the welfare of others and not only sharing emotions with another person. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Virtual reality empathy==&lt;br /&gt;
&lt;br /&gt;
Virtual reality (VR) has a myriad of possible applications beyond gaming. One of them is to use it as a sort of empathy machine. There have been experiments that use VR to generate empathy in someone by exposing them to specific situations, like seeing through the eyes of a child, a woman, a stranger, a close friend, or a disabled man. &amp;lt;ref name=”9”&amp;gt;Alsever, J. Is virtual reality the ultimate empathy machine? Retrieved from https://www.wired.com/brandlab/2015/11/is-virtual-reality-the-ultimate-empathy-machine/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When compared to other forms of media, VR is more immersive and could, therefore, lead to experiences that generate a greater sense of empathy by placing the user in someone else’s place, changing people’s perception of each other. It would not only be an emotional and cognitive process of feeling another person’s feelings, but of actually experiencing them, in some way, inside virtual reality. &amp;lt;ref name=”9”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”10”&amp;gt;Sutherland, E.A. (2015). Staged Empathy: empathy and visual perception in virtual reality systems. MSc thesis, Massachusetts Institute of Technology&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=VR_audio&amp;diff=24910</id>
		<title>VR audio</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=VR_audio&amp;diff=24910"/>
		<updated>2017-12-13T17:56:58Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{stub}}&lt;br /&gt;
{{see also|Oculus Audio SDK}}&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
VR audio is a technology that simulates sound in a realistic manner for [[virtual reality]] (VR). When properly executed, it increases the user’s [[immersion]] and the sense of [[presence]] in the virtual environment.&lt;br /&gt;
&lt;br /&gt;
Localization is the process by which the human brain - with input signals coming from the ears - can precisely pinpoint the position of an object in 3D space based only on auditory cues. This characteristic of human biology is useful in different activities of day-to-day life, and it can also be used to create immersive VR experiences. Indeed, while humans have five senses, only two of them are currently relevant to VR: sight and sound. Since these are the senses available to develop an immersive experience, they have to be explored to the fullest by means of high-caliber 3D graphics and truly [[3D audio]]. &amp;lt;ref name=”1”&amp;gt;Chase, M. (2016). How VR is resurrecting 3D audio. Retrieved from http://www.pcgamer.com/how-vr-is-resurrecting-3d-audio/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Immersion is essential in virtual reality, with the concept of presence being emphasized - the feeling of being physically present in an environment. Vision and sound both contribute to generate this sensation in VR. Graphically, one way immersion and presence are achieved is through low-latency head tracking, with the VR experience matching the user’s movement and field of vision in real time. Head tracking is also a reason for the necessity of virtual reality audio. Sound is often pinpointed by moving the head slightly or rotating it. Therefore, it is essential to have truly 3D audio in a VR experience to maintain the illusion of reality. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;Lalwani, M. (2016). For VR to be truly immersive, it needs convincing sound to match. Retrieved from https://www.engadget.com/2016/01/22/vr-needs-3d-audio/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Maintaining the audio cues that the brain needs to correctly localize the sound is still a challenge. The ears pick up audio in three dimensions, and the brain processes multiple cues to spatialize the sound. One of the cues is proximity, with the ear closer to the sound source picking up sound waves before the other. Distance is another cue, changing the audio levels. But these cues don’t apply to all directions. According to Lalwani (2016), “sounds that emerge from the front or the back are more ambiguous for the brain. In particular, when a sound from the front interacts with the outer ears, head, neck, and shoulders, it gets colored with modifications that help the brain solve the confusion. This interaction creates a response called Head-Related Transfer Function (HRTF), which has now become the linchpin of personalized immersive audio.” A person’s HRTFs are unique, since ear anatomy differs from person to person. &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Historically, audio has been a vital part of the computer and video gaming experience. It evolved from simple wave generators to FM synthesis, to 8-bit mono samples and 16-bit stereo samples, to today’s surround sound systems on modern gaming consoles. However, virtual reality is changing the traditional way that sound was used in computer and gaming experiences. VR brings the experience closer to the user through a head-mounted display (HMD) and headphones, and the head tracking changes how audio is implemented - being interdependent with the user’s actions and movements. &amp;lt;ref name=”3”&amp;gt;Oculus. Introduction to virtual reality audio. Retrieved from https://developer.oculus.com/documentation/audiosdk/latest/concepts/book-audio-intro&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the advent of VR, virtual reality audio has gained more interest. Companies want to implement a VR audio solution that realistically reproduces audio in a virtual environment while not being computationally restrictive. The development of PC audio has been more tumultuous than that of graphics, but with the rise of VR, 3D audio is expected to gain traction and prominence. &amp;lt;ref name=”4”&amp;gt;Lang, B. (2017). Valve launches free steam audio SDK beta to give VR apps immersive 3D sound. Retrieved from https://www.roadtovr.com/valve-launches-free-steam-audio-sdk-beta-give-vr-apps-immersive-3d-sound/&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;Lang, B. (2017). Oculus to talk “Breakthroughs in spatial audio technologies” at Connect Conference. Retrieved from https://www.roadtovr.com/oculus-talk-breakthroughs-spatial-audio-technologies-connect-conference/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Importance of VR audio==&lt;br /&gt;
VR audio is extremely important for increasing the user’s sense of presence by making the experience more immersive. VR developers cannot develop a virtual experience that only engages the sense of sight and expect to truly create an immersive environment. For the alternate worlds of VR to become real to the human brain, immersive graphics have to be matched by immersive 3D audio that simulates the natural listening experience. When properly implemented, it can solidify a scene, conveying information about where objects are and what type of environment the user is in. Visual and auditory cues amplify each other, and a conflict between the two will affect immersion. Indeed, truly 3D audio is vital to augmenting the entire VR experience, taking it to a level that could not be achieved by graphics alone. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt;Borsai, L. (2016). This is why it’s time for VR audio to shine. Retrieved from https://www.roadtovr.com/this-is-why-its-time-for-vr-audio-to-shine/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Evolving VR audio===&lt;br /&gt;
Les Borsai, VP of Business Development at Dysonics, has made some suggestions to move VR audio technology forward. He focuses mainly on three areas: better VR audio capture, better VR audio editing tools, and better VR audio for games. &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Improved VR audio recording means a device that captures true spherical audio for the best reproduction over headphones. This enables the user to hear sounds change relative to head movement and is essential for live-captured immersive content, adding an essential layer of contextual awareness and realism. According to Borsai, “the incorporation of motion restores the natural dynamics of sound, giving your brain a crystal-clear context map that helps you pinpoint and interact with sound sources all around you. These positional audio cues that lock onto the visuals are vital in extending the overall virtual illusion and result in hauntingly lifelike and compelling VR content.” &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The second suggestion made by Borsai - better VR audio editing tools - asserts that VR content creators need powerful but easy-to-use tools that encompass all the stages of VR audio production, from raw capture to finished product. Preferably, the solution should be modular and simple to use, since most content creators do not have the skill or time to focus on audio. Borsai’s suggestion of a complete audio stack includes “an 8-channel spherical capture solution for VR, plus post-processing tools that allow content creators to pull apart original audio, placing sounds around a virtual space with customizable 3D spatialization and motion-tracking control.” &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
His final suggestion touches on how significant developments in VR audio will come with the creation of plugins for the major gaming engines, such as Unity or Unreal. Borsai mentions that audio realism is essential to gaming, and that even the most subtle audio cues allow the player to interact with the sound sources around him, resulting in an increase in overall immersion and natural reaction time. &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==VR audio and the human auditory system==&lt;br /&gt;
Humans depend on psychoacoustics and inference in order to locate sound sources within a three-dimensional space, taking into consideration factors like timing, phase, level, and spectral modifications. The main audio cues that humans use to localize sounds are interaural time differences, interaural level differences, and spectral filtering. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;Google. Spatial audio. Retrieved from https://developers.google.com/vr/concepts/spatial-audio&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Interaural time differences:&#039;&#039;&#039; This relates to the difference in a sound wave’s time of arrival at the left and right ears. The time difference varies according to the sound’s origin relative to the person’s head (a worked sketch follows these definitions). &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Interaural level differences:&#039;&#039;&#039; Humans are not able to discern the time of arrival of sound waves at higher frequencies. For frequencies above 1.5 kHz, the level (volume) differences between the ears are used to identify the sound’s direction. &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Spectral filtering:&#039;&#039;&#039; The outer ears modify the sound’s frequencies depending on the direction of the sound. The alterations in frequency are used to determine the elevation of a sound source. &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
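&lt;br /&gt;
To make the first of these cues concrete, here is a minimal Python sketch of the widely used spherical-head (Woodworth) approximation of the interaural time difference; the head radius and speed of sound are assumed typical values, not measurements from any of the cited sources.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C&lt;br /&gt;
HEAD_RADIUS = 0.0875    # m, a typical adult head (assumed)&lt;br /&gt;
&lt;br /&gt;
def interaural_time_difference(azimuth_deg):&lt;br /&gt;
    """Woodworth spherical-head estimate of the ITD in seconds.&lt;br /&gt;
    azimuth_deg: source direction, 0 = straight ahead,&lt;br /&gt;
    90 = directly to one side."""&lt;br /&gt;
    theta = math.radians(azimuth_deg)&lt;br /&gt;
    # Extra path length around a rigid sphere: a * (sin(theta) + theta)&lt;br /&gt;
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)&lt;br /&gt;
&lt;br /&gt;
# A source at 90 degrees gives roughly 0.66 ms of delay, near the&lt;br /&gt;
# upper bound of human ITDs.&lt;br /&gt;
print(interaural_time_difference(90))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;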
&lt;br /&gt;
Researchers have been tackling the VR audio problem, trying to measure the individual audio modifications that allow the brain to localize simulated sounds with precision. In VR, the visual setting is predetermined, and the audio is best generated by a rendering engine that attaches sound to objects as they move and interact with the environment. Lalwani (2016) notes that “this object-based audio technique uses software to assign audible cues to things and characters in 3D space.” &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Head-related Transfer Functions (HRTFs)==&lt;br /&gt;
The HRTF is the foundation of most current 3D sound spatialization techniques. Spatialization - the ability to reproduce a sound as if positioned at a specific place in a 3D environment - is an essential part of VR audio and vital to producing a sense of presence. Direction and distance are spatialization&#039;s main components. Depending on its direction, sounds are modified differently by the human body and ear geometry, and these effects are the basis of the HRTFs used to localize them. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Accurately capturing an HRTF requires an individual with microphones placed in the ears inside an anechoic chamber. Once inside, sounds are played from every direction necessary and recorded by the microphones. Comparing the original sound with the recorded one allows for the computation of the HRTF. To build a usable sample set of HRTFs, a sufficient number of discrete sound directions need to be captured. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
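&lt;br /&gt;
Once a set of HRTFs is available as time-domain impulse responses (HRIRs), spatializing a mono source for one measured direction reduces to convolving the signal with each ear’s impulse response. The sketch below is a minimal illustration of that step only; the HRIR arrays would be loaded from a dataset such as those named in the next paragraph (loading code is omitted since formats vary).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.signal import fftconvolve&lt;br /&gt;
&lt;br /&gt;
def spatialize(mono, hrir_left, hrir_right):&lt;br /&gt;
    """Render a mono signal at the direction the HRIR pair was&lt;br /&gt;
    measured for. All inputs are 1-D arrays at the same sample rate."""&lt;br /&gt;
    left = fftconvolve(mono, hrir_left)&lt;br /&gt;
    right = fftconvolve(mono, hrir_right)&lt;br /&gt;
    return np.stack([left, right])  # stereo output, one row per ear&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;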
&lt;br /&gt;
While custom HRTFs matched to a person’s body and ear geometry would be ideal, they are not a practical solution. HRTFs are similar enough from one person to another to allow for a generic reference set that is adequate for most situations, particularly when combined with head tracking. There are several publicly available datasets for HRTF-based spatialization implementations, such as the IRCAM Listen Database, MIT KEMAR, the CIPIC HRTF Database, and the ARI (Acoustics Research Institute) HRTF Database. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While HRTFs help to identify a sound’s direction, they do not model the localization of distance. Several factors affect how humans infer the distance to a sound source, which can be simulated with different levels of accuracy and computational cost. These are loudness, initial time delay, direct vs. reverberant sound, motion parallax, and high-frequency attenuation. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &lt;br /&gt;
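&lt;br /&gt;
As a rough illustration, two of these cues - loudness via the inverse-distance law, and high-frequency attenuation via a low-pass filter whose cutoff falls with range - might be simulated as follows; the constants are illustrative, not calibrated against any real renderer.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.signal import butter, lfilter&lt;br /&gt;
&lt;br /&gt;
def apply_distance_cues(signal, distance_m, fs=48000):&lt;br /&gt;
    """Crude simulation of two distance cues: loudness falls off&lt;br /&gt;
    roughly as 1/distance, and air absorption dulls high frequencies&lt;br /&gt;
    with range. Constants are illustrative only."""&lt;br /&gt;
    gain = 1.0 / max(distance_m, 1.0)  # inverse-distance loudness law&lt;br /&gt;
    # Pull the low-pass cutoff down as the source moves away.&lt;br /&gt;
    cutoff_hz = max(20000.0 / max(distance_m, 1.0), 500.0)&lt;br /&gt;
    b, a = butter(1, cutoff_hz / (fs / 2.0), btype="low")&lt;br /&gt;
    return gain * lfilter(b, a, signal)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;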
&lt;br /&gt;
==Google and Valve’s VR audio==&lt;br /&gt;
Google uses a technology called ambisonics to simulate sounds coming from virtual objects. The system surrounds the user with a high number of virtual loudspeakers that reproduce sound waves coming from all directions in the VR environment. The accuracy of the synthesized sound waves is directly proportional to the number of virtual loudspeakers. The virtual loudspeakers themselves are rendered to the listener’s ears through the use of HRTFs. &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
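&lt;br /&gt;
A simplified sketch of the idea behind this approach, for first-order ambisonics: a mono source is encoded into a B-format sound field, which can then be projected onto any virtual loudspeaker direction (each feed would subsequently be filtered with the HRTF for that direction). Normalization conventions differ between ambisonic formats, so unit gains are used here purely for illustration; this is not Google’s actual implementation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def encode_first_order(signal, az, el):&lt;br /&gt;
    """Encode a mono signal into first-order B-format (W, X, Y, Z).&lt;br /&gt;
    az/el are the source azimuth and elevation in radians."""&lt;br /&gt;
    w = signal&lt;br /&gt;
    x = signal * np.cos(az) * np.cos(el)&lt;br /&gt;
    y = signal * np.sin(az) * np.cos(el)&lt;br /&gt;
    z = signal * np.sin(el)&lt;br /&gt;
    return np.stack([w, x, y, z])&lt;br /&gt;
&lt;br /&gt;
def decode_to_speaker(bformat, az, el):&lt;br /&gt;
    """Basic (sampling) decode: project the sound field onto a&lt;br /&gt;
    single virtual loudspeaker direction."""&lt;br /&gt;
    w, x, y, z = bformat&lt;br /&gt;
    dx = np.cos(az) * np.cos(el)&lt;br /&gt;
    dy = np.sin(az) * np.cos(el)&lt;br /&gt;
    dz = np.sin(el)&lt;br /&gt;
    return 0.5 * (w + x * dx + y * dy + z * dz)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;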
&lt;br /&gt;
[[Valve]] has made available the [[Steam]] Audio SDK - a free option for developers who want to use VR audio in their VR apps. Steam Audio supports Unity and [[Unreal Engine]], and is available for Windows, Linux, macOS, and Android. Furthermore, it is not restricted to a specific VR device or to Steam. In a statement, Valve said that “Steam Audio is an advanced spatial audio solution that uses physics-based sound propagation in addition to HRTF-based binaural audio for increased immersion. Spatial audio significantly improves immersion in VR; adding physics-based sound propagation further improves the experience by consistently recreating how sound interacts with the virtual environment.” &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
Before the current emergence of VR, interest in 3D audio was relatively low. Although sound has consistently improved over the years in terms of fidelity and signal-to-noise ratio, the real-time modeling of sound in a 3D space has not experienced the same level of consistent development. The true challenge for VR audio has been “reproducing the dynamic behavior of sound in a 3D space in real time.” The sound source and listener have to be computed in a 3D space (spatialization) so that, as their positions change, the prerecorded audio samples are altered to match the new spatial positions. Besides spatialization, the system also has to take into account the modifications made to a sound while it travels through an environment: the sound can be reflected, absorbed, blocked, or echoed. These effects on the sound are called audio ambiance, and accounting for all of them becomes computationally intensive. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The capacity to create immersive, realistic VR audio already existed in the 1990s with a technology called A3D 2.0, developed by a company called Aureal. Mark Chase, in an article written for PC Gamer, said that “much of this technology relied on head-related transfer functions (or HRTFs), mathematical algorithms that take into account how sound from a 3D source enters the head based on ear and upper-body shape. This essentially helps replicate the auditory cues that allow us to pinpoint, or localize, where a sound is coming from.” &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The development of 3D audio would be affected by legal action brought by Creative against Aureal for patent infringement. The cost of the litigation damaged Aureal financially, leaving the company too crippled to continue. Creative would then continue research on 3D audio, built on the backbone of DirectSound and DirectSound3D. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
DirectSound and DirectSound3D created a standardized, unified environment for 3D audio, helping it grow as a technology and be easily used by developers. It also allowed for the hardware acceleration of 3D sound. When Microsoft released Windows Vista, it stopped supporting DirectSound3D, affecting years of development by Creative. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But with the advent of VR, the need for VR audio that can truly simulate natural sound has become a research priority. In 2014, Oculus licensed VisiSonics’ RealSpace 3D audio technology, incorporating it into the Oculus Audio SDK. This technology follows the same principle as Aureal’s system decades before, relying on custom HRTFs to recreate accurate spatialization over headphones. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Microphones==&lt;br /&gt;
[[AMBEO VR Mic]]&lt;br /&gt;
&lt;br /&gt;
[[Dysonics RondoMic]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Binocular_overlap&amp;diff=24909</id>
		<title>Binocular overlap</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Binocular_overlap&amp;diff=24909"/>
		<updated>2017-12-13T17:39:38Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
[[File:Binocular overlap.png|thumb|Figure 1. Human binocular overlap. (Image: David Johnson, University of Utah)]]&lt;br /&gt;
[[File:Eyepieces with no binocular overlap.png|thumb|Figure 2. Independent images of the left and right eyepieces. (Image: roadtovr.com)]]&lt;br /&gt;
[[File:Full overlap.png|thumb|Figure 3. 100% overlap (Image: roadtovr.com)]]&lt;br /&gt;
[[File:Partial overlap.png|thumb|Figure 4. Partial overlap (Image: roadtovr.com)]]&lt;br /&gt;
[[File:Binocular rivalry.png|thumb|Figure 5. Binocular rivalry caused by a partial-overlapping visual system. (Image: roadtovr.com)]]&lt;br /&gt;
&lt;br /&gt;
Binocular overlap is the overlapping region between the two eyes of a stereoscopic vision system. It is a term that describes the shared space that can be seen by both eyes as opposed to by just one of the eyes (Figure 1). It is different from the visual field, which is defined as the area of space seen by either eye at a single instant. In a [[virtual reality]] (VR) environment, the binocular overlap area is the region where true [[Stereoscopic|stereoscopy]] is produced. According to Mon-Williams et al. (1993), “the HMD attempts to simulate binocularly overlapped images so that the fusion of disparate images can create the illusion of a three-dimensional world.” &amp;lt;ref name=”1”&amp;gt;Boger, Y. (2016). Understanding binocular overlap and why it’s important for VR headsets. Retrieved from https://www.roadtovr.com/understanding-binocular-overlap-and-why-its-important-for-vr-headsets/&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;Virtual Worldlets. Binocular Overlap. Retrieved from http://www.virtualworldlets.net/Resources/Dictionary.php?Term=Binocular%20Overlap&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;Fuchs, P. (2017). Virtual Reality Headsets - A Theoretical and Pragmatic Approach. CRC Press&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;Mon-Williams, M., Wann, J.P. and Rushton, S. (1993). Binocular vision in a virtual world: visual deficits following the wearing of a head-mounted display. Ophthalmic and Physiological Optics, 13: 387-391&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The amount of binocular overlap required in a VR experience depends on the distance of focus. For long distances, a lesser degree of binocular overlap is necessary; for close-up objects, a high degree of binocular overlap is needed to achieve realism. &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Visual field and binocular overlap==&lt;br /&gt;
Normally, the visual field extends 60 degrees inward and 100 degrees outward, and 60 degrees above and 75 degrees below the horizontal meridian. It should be noted that the visual field differs from person to person. Typically, the binocular overlap area is 120 degrees horizontally, and since each eye has a visual field of about 160 degrees, the binocular overlap covers 75% of that area (120/160). &amp;lt;ref name=”5”&amp;gt;Boger, Y. (2013). What is Binocular overlap and why should you care? Retrieved from http://vrguy.blogspot.pt/2013/05/what-is-binocular-overlap-and-why.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Binocular overlap plays an important role in depth perception. When an object is seen, each eye rotates so that the object is observed in the same location in both views. The relative angles of the eyes provide an estimate of how far away the object is located. If the object being observed is far away, the angle at which it is seen by both eyes is almost the same; however, if the object is close, the angles will differ. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For virtual reality applications, [[head-mounted display]] (HMD) manufacturers can increase the overall horizontal and diagonal [[field of view]] (FOV) of the VR headset by creating partially overlapped systems. &amp;lt;ref name=”6”&amp;gt;Sensics. How binocular overlap impacts horizontal field of view. Retrieved from http://sensics.com/how-binocular-overlap-impacts-horizontal-field-of-view/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Binocular overlap in VR HMDs==&lt;br /&gt;
Since binocular overlap influences the field of view in an HMD, VR manufacturers have to decide how much of it to incorporate into their headsets. For example, if a VR HMD has two eyepieces with a 60-degree diagonal field of view and a 4:3 aspect ratio (Figure 2), converting that to horizontal and vertical degrees gives 48 degrees for the former and 36 degrees for the latter. If the eyepieces had 100% overlap between them, the combined binocular field of view would also be 48 degrees horizontal and 36 degrees vertical (translating into 60 degrees diagonal). In this case, everything that can be seen by one eye can be seen by the other (Figure 3). &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, if the VR HMD manufacturer decided to install eyepieces with 75% horizontal overlap (called partial overlap), the resulting binocular horizontal field of view would be 60 degrees. The overlapping region would encompass 36 degrees (75% of 48 degrees); adding to this the 12 degrees shown only in the left eyepiece and the 12 degrees shown only in the right eyepiece (Figure 4) brings the total to 60 degrees of horizontal FOV. Converting this number to the diagonal field of view gives a total of 70 degrees, which is larger than the diagonal field of view obtained with 100% overlap. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
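&lt;br /&gt;
The arithmetic of this example is easy to reproduce. The short sketch below follows the example’s simplified treatment of degrees as linear quantities (splitting the diagonal by the aspect ratio and recombining with the Pythagorean theorem), which is only an approximation for wide fields of view.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
def combined_fov(diag_fov, aspect_w, aspect_h, overlap):&lt;br /&gt;
    """Reproduce the worked example above. Degrees are treated as&lt;br /&gt;
    linear quantities, a simplification the example also makes."""&lt;br /&gt;
    diag = math.hypot(aspect_w, aspect_h)&lt;br /&gt;
    h_per_eye = diag_fov * aspect_w / diag   # 48 deg for 60 deg / 4:3&lt;br /&gt;
    v_per_eye = diag_fov * aspect_h / diag   # 36 deg&lt;br /&gt;
    shared = h_per_eye * overlap             # region seen by both eyes&lt;br /&gt;
    monocular = h_per_eye - shared           # region seen by one eye only&lt;br /&gt;
    h_total = shared + 2 * monocular&lt;br /&gt;
    d_total = math.hypot(h_total, v_per_eye)&lt;br /&gt;
    return h_total, v_per_eye, d_total&lt;br /&gt;
&lt;br /&gt;
print(combined_fov(60, 4, 3, 1.00))  # (48.0, 36.0, 60.0)&lt;br /&gt;
print(combined_fov(60, 4, 3, 0.75))  # (60.0, 36.0, ~70.0)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;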
&lt;br /&gt;
Indeed, a wider field of view is one of the advantages of using partial overlap. This means that a greater level of [[immersion]] can be realized. Another advantage is an improved aspect ratio. Following the example provided above, the original aspect ratio was 4:3. When using 75% overlap, the aspect ratio became 5:3, making it more suitable for viewing widescreen content. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, partial overlap can have disadvantages in a virtual reality headset, such as binocular rivalry. If an object is partially in the binocular overlap region and partially in a region exclusive to one of the eyes (Figure 5), it will be fully visible to one eye and only partially visible to the other. A user looking through both eyepieces at the same time might notice the border of one eye&#039;s field of view caused by the object&#039;s location, and this can be distracting. Instead of seeing a summation of the two images, the user’s perception will switch from one image to the other. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another disadvantage of partial overlap is compatibility problems with non-3D content. In a fully-overlapped system, standard content can be viewed with no noticeable problems, since the same content is presented to both eyes. In a partial-overlap system, however, presenting the same content to both eyes would result in eye strain, since the eyes would try to merge two images shown at different angles. This means that in partial-overlap systems, applications need to compensate for the difference between the two eyes. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Neuromancer&amp;diff=24908</id>
		<title>Neuromancer</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Neuromancer&amp;diff=24908"/>
		<updated>2017-12-13T17:24:08Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;[[Neuromancer]]: A Foreshadow of Things Still to Come&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
By Paulo Pacheco on July 20, 2016&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
Neuromancer is the first novel of the writer William Gibson, published on July 1, 1984 &amp;lt;ref name=”1”&amp;gt; Sullivan, Mark (2009). Neuromancer Turns 25: What it Got Right, What it got Wrong. Retrieved from www.macworld.com/article/1141500/neuromancer_25.html&amp;lt;/ref&amp;gt;. It has sold more than 6 million copies and, in the year after its launch, received the three biggest awards in science fiction writing: the Nebula, Philip K. Dick, and Hugo awards &amp;lt;ref name=”2”&amp;gt; Cumming, Ed (2014). William Gibson: the man who saw tomorrow. Retrieved from www.theguardian.com/books/2014/jul/28/william-gibson-neuromancer-cyberpunk-books&amp;lt;/ref&amp;gt;. It defined an aesthetic – cyberpunk – and left a mark on tech and digital culture by envisioning the concepts of cyberspace and virtual reality, both integrated with and extending the physical world &amp;lt;ref name=”3”&amp;gt; DSMLF (2015). Neuromancer: William Gibson’s Virtual Reality Masterpiece. Retrieved from dsmlf.info/neuromancer-william-gibsons-virtual-reality-masterpiece&amp;lt;/ref&amp;gt;. Today we have the World Wide Web, and the explosion of [[Virtual Reality]] is finally around the corner (even if it still hasn’t reached the level explored in the novel) – reminders of aspects of the world Gibson created that have crept into our reality.&lt;br /&gt;
&lt;br /&gt;
==Influences for the Story==&lt;br /&gt;
William Gibson was not a “techie” by nature. He was aware of the new technologies around him but, according to Gareth Damian Martin, “he never had even touched a PC when he wrote Neuromancer.” His exposure to computers came as he met and conversed with science fiction writers and people who were experiencing that novel technology. He focused on observing their behaviors, addictions, and obsessions, and how they would interface with technology.&lt;br /&gt;
&lt;br /&gt;
Another influence on the novel came from the counter-culture of the 1960s. The author was embedded in its excesses, in the drug culture and the exploration of altered states of consciousness. This influence can easily be seen in the main character and in the criminal underworld described in the story. In both of these cases – in the tech and counter-culture worlds – his value was mainly as an observer &amp;lt;ref name=”4”&amp;gt; Martin, Gareth Damian. Re-reading William Gibson at the Advent of Virtual Reality. Retrieved from versions.killscreen.com/re-reading-william-gibson-at-the-advent-of-virtual-reality&amp;lt;/ref&amp;gt;. Other influences on the work of William Gibson came from movies (e.g. Escape From New York and 1940s film noir), music, and pop culture elements &amp;lt;ref&amp;gt; McCaffery, Larry (1991). An Interview With William Gibson. Retrieved from project.cyberpunk.ru/idb/gibson_interview.html&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Summary of the Story of Neuromancer==&lt;br /&gt;
The story is set in a “post-apocalyptic, not-too-distant future in which ‘human’ has transformed into ‘post-human’ and ecological systems have been supplanted by technological constructs” &amp;lt;ref&amp;gt; Leaver, Tama (1997). Post-Humanism and Ecocide in William Gibson’s Neuromancer and Ridley Scott’s Blade Runner. Retrieved from cyberpunk.asia/cp_project.php?txt=180&amp;amp;lng=fr&amp;lt;/ref&amp;gt;. It is a future where media, technology, pop culture, and market imperatives have spun out of control &amp;lt;ref&amp;gt; Walker, Douglas (1989). Douglas Walker Interviews Science Fiction Author William Gibson. Retrieved from www.douglaswalker.ca/press/gibson.pdf&amp;lt;/ref&amp;gt;. It follows a character called Case, a onetime “cyberspace cowboy” who could hack into corporate databases. After a job gone wrong, Case is left crippled and unable to access cyberspace. He is then recruited by an underworld group, who promise to heal Case’s nervous system if he helps them infiltrate an AI (artificial intelligence) called Wintermute &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Cyberspace, Virtual Realities and the Fusion of Technology with Wetware==&lt;br /&gt;
There is no doubt that Neuromancer had a great impact in foreseeing the technologies that would follow its publication, and its level of prescience is still praised, the author having been named a prophet of the digital age. Even though there are some technologies that the book foreshadowed, others are still a bit far off &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;&amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. We may not have reached, in the real world, the bleak aesthetics of the novel, but there are still intersecting paths between fiction and reality that are eerily similar.&lt;br /&gt;
&lt;br /&gt;
One of those is the idea of a World Wide Web: a global network of millions of computers. The concept of linking computers to each other already existed when the book launched – universities had already connected various systems of servers through a telecom link – but not on the global scale that the novel described. The concept of the internet as we know it today was still a decade away, and it may just have been a wild speculation at the time. Jack Womack has suggested, in the afterword of the 2000 re-release of the book, that it could have even influenced the way the Web developed by providing a sort of blueprint, a guide, to the developers who read and grew up with the novel &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It also defined cyberspace (or the matrix, as it is also called) as “a consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts… A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights receding…&amp;quot; &amp;lt;ref&amp;gt; Myers, Tony (2001). The Postmodern Imaginary in William Gibson’s Neuromancer. MFS Modern Fiction Studies, 47(4)&amp;lt;/ref&amp;gt;. The current Virtual Reality technology of our world may not be as advanced as that in the book, where people interact with the network directly through their nervous systems with full sensory stimulation, but that may be just a matter of time &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. Virtual Reality seems to be finally on the cusp of penetrating our world and becoming the norm with the [[Oculus Rift]] and other types of [[headsets]].&lt;br /&gt;
&lt;br /&gt;
The book reflects, ultimately, the increasing presence of technology in our lives, having at its core the direct integration of man and computer. Indeed, development in this direction has already started &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. [[VR HMDs|VR headsets]] are getting better and providing greater immersion into their virtual realms. Direct brain-to-brain communication between human subjects has been achieved – a sort of technological telepathy – with the aid of electrodes attached to a person’s scalp and the use of the internet to transmit the information &amp;lt;ref&amp;gt; ScienceDaily (2014). Direct Brain-to-Brain Communication Demonstrated in Human Subjects. Retrieved from www.sciencedaily.com/releases/2014/09/140903105646.htm&amp;lt;/ref&amp;gt;. Real-time brain control of a computer cursor was already achieved back in 2002 &amp;lt;ref&amp;gt; ScienceDaily (2002). Researchers Demonstrate Direct, Real-Time Brain Control of Computer Cursor. Retrieved from www.sciencedaily.com/releases/2002/03/020314080832.htm&amp;lt;/ref&amp;gt;. There’s a real tendency to merge computers, the Internet, and our own wetware &amp;lt;ref&amp;gt; Wikipedia. Wetware (brain). Retrieved from en.wikipedia.org/wiki/Wetware_(brain)&amp;lt;/ref&amp;gt; that is evocative of the world William Gibson created.&lt;br /&gt;
&lt;br /&gt;
With all these developments there is always the risk of abuse, addiction, and escapism – a subject also dealt with in the book. Either way, our connection with the technology we use is already affecting us &amp;lt;ref&amp;gt; ScienceDaily (2009). Is Technology Producing a Decline in Critical Thinking? Retrieved from www.sciencedaily.com/releases/2009/01/090128092341.htm&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt; ScienceDaily (2016). Kids Who Text and Watch TV Simultaneously Likely to Underperform at School. Retrieved from www.sciencedaily.com/releases/2016/05/160518102746.htm&amp;lt;/ref&amp;gt;, and only time will tell whether we will achieve the full integration with machines that was envisioned in Neuromancer.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Books]] [[Category:Media]] [[Category:VR Books]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24907</id>
		<title>Brain-computer interface</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24907"/>
		<updated>2017-12-13T17:17:10Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
A Brain-computer interface (BCI) is a technological system of communication that is based on neural activity generated by the brain &amp;lt;ref name=”1”&amp;gt; Vallabhaneni, A., Wang, T. and He, B. (2005). Brain-Computer Interface. Neural Engineering, Springer US, pp. 85-121&amp;lt;/ref&amp;gt;. It comprises four main parts: a means for acquiring neural signals from the brain, a method for isolating the desired specific features in that signal, an algorithm to decode the signals obtained, and a method for transforming the decoding into an action (Figure 1) &amp;lt;ref name=”2”&amp;gt; Sajda, P., Müller, KR. and Shenoy, K. V. (2008). Brain-Computer Interfaces. IEEE Signal Processing Magazine, 25(1): 16-17&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt; He, B., Gao, S., Yuan, H. and Wolpaw, J. R. (2013). Brain-Computer Interfaces. Neural Engineering, Springer US, pp 87-151&amp;lt;/ref&amp;gt;. This method of communication is independent of the normal output pathways of peripheral nerves and muscles, and the signal can be acquired using invasive or non-invasive techniques &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. This technology can help provide a means of communication for people disabled by neurological diseases or injuries, giving them a new output channel for the brain. It can also enhance functions in healthy individuals &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. BCIs are also known as brain-machine interfaces (BMIs) &amp;lt;ref name=”4”&amp;gt; McFarland, D. J. and Wolpaw, J. R. (2011). Brain-Computer Interfaces for Communication and Control. Commun ACM, 54(5): 60–66&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:Figure 1. Basic design of a BCI system. (Image taken from Wolpaw et al., 2002).png|thumb|Figure 1 Basic design of a BCI system. (Image taken from Wolpaw et al., 2002)]]&lt;br /&gt;
&lt;br /&gt;
The central nervous system (CNS) responds to stimuli in the environment or in the body by producing an appropriate output that can be in the form of a neuromuscular or hormonal response. A BCI provides a new output for the CNS that is different from the typical neuromuscular and hormonal ones. It changes the electrophysiological signals from reflections of the CNS activity (such as an electroencephalography – or EEG - rhythm or a neuronal firing rate) into the intended products of that activity, such as messages and commands that act on the world and accomplish the person’s intent &amp;lt;ref name=”5”&amp;gt; Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G. and Vaughan, T. M. (2002). Brain-Computer Interfaces for Communication and Control. Clinical Neurophysiology 113: 767–791&amp;lt;/ref&amp;gt;. Since it measures CNS activity, converting it into an artificial output, it can replace, restore, or enhance the natural CNS output, changing the interactions between the CNS and its internal or external environment &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The electrical signals produced by brain activity can be detected on the scalp, on the cortical surface, or within the brain. As mentioned previously, the BCI has the function of translating these electrical signals into outputs that allow the user to communicate without the peripheral nerves and muscles. This becomes relevant because, since the BCI does not depend on neuromuscular control, it can provide another way of communication for people with disorders such as amyotrophic lateral sclerosis (ALS), brainstem stroke, cerebral palsy and spinal cord injury &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. It needs to be mentioned that a BCI also depends on feedback and on the adaptation of brain activity based on that feedback. According to McFarland and Wolpaw (2011), “communication and control applications are interactive processes that require the user to observe the results of their efforts in order to maintain good performance and to correct mistakes &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The BCI system needs to provide feedback and interact with the adaptations the brain makes in response. The general BCI operation, therefore, depends on the interaction between the user’s brain (where the signals produced are measured by the BCI), and the BCI itself (that translates the signals into specific commands) &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. One of the most difficult challenges in BCI research is the management of the complex interactions between the concurrent adaptations of the CNS and the BCI &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Even though the main objective of BCI research and development is the creation of assistive communication and control technology for disabled people, BCIs also have potential as a new type of interface for interacting with a computer or machine for people with normal neurological function. This could be applied to the general population in areas such as gaming, or in high-stress situations like air traffic control. There could also be systems that enhance or supplement human performance, such as in image analysis, and systems that expand media access or artistic expression. There has also been some research into another possible application of BCI technology: assistance in the rehabilitation of people disabled by stroke and other acute events &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===The biology of BCIs===&lt;br /&gt;
&lt;br /&gt;
Since a BCI includes both a biological and a technological component, the system would not work without specific, exploitable characteristics of the biological component. The technology works because of the way our brains function &amp;lt;ref name=”6”&amp;gt; Grabianowski, E. How Brain-Computer Interfaces Work. Retrieved from computer.howstuffworks.com/brain-computer-interface.htm&amp;lt;/ref&amp;gt;. The human brain (arguably the most complex signal-processing machine in existence) is capable of transducing a variety of environmental signals and of extracting information from them in order to produce behavior, cognition, and action &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The brain has a myriad of neurons - individual nerve cells connected to one another by dendrites and axons. The actions of the brain are carried out by small electric signals generated by differences in electric potential carried by ions across the membranes of the neurons. Even though the signal pathways are insulated by myelin, a residual electric signal escapes, and this signal can be detected, interpreted, and used, as in the case of BCIs. This also allows for the development of technologies that send signals into specific regions of the brain. By connecting a camera that could send the same signals as the eye (or close enough) to the brain, a blind person could regain some measure of vision &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The non-invasive recording of electrical brain activity by electrodes on the surface of the scalp has been known for over 80 years, due to the work of Hans Berger. His observations demonstrated that the electroencephalogram (EEG) could be used as “an index of the gross state of the brain.” Besides the detection of electrical signals from the brain, neural activity can also be monitored by measuring magnetic fields or hemoglobin oxygenation using sensors on the scalp, on the surface of the brain, or within the brain &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Dependent and independent BCIs===&lt;br /&gt;
&lt;br /&gt;
The commands that the user sends to the external world through the BCI system do not follow the normal output pathways of peripheral nerves and muscles. Instead, a BCI provides the user with an alternative method for acting on the world. BCIs can be placed into two different classes: dependent and independent &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. These terms appeared in 2002, and both are used to describe BCIs that use brain signals to control applications. The difference between them lies in how they depend on natural CNS output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A dependent BCI uses brain signals that depend on muscle activity &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;, as in the case of a BCI that presents the user with a matrix of letters. Each letter flashes one at a time, and the user’s objective is to select a specific letter by looking directly at it. This initiates a visual evoked potential (VEP) that is recorded from the scalp. The VEP produced when the intended letter flashes is greater than the VEPs produced when other letters flash. In this example, the brain’s output channel is EEG, but the generation of the detected signal depends on the direction of gaze, which, in turn, depends on the extraocular muscles and the cranial nerves that activate them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An independent BCI, on the contrary, does not depend on natural CNS output; there is no need for muscle activity to generate the brain signals, since the message is not carried by peripheral nerves and muscles &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. This is more advantageous for people who are severely disabled by neuromuscular disorders. An independent BCI would present the user with a matrix of letters that flash one at a time. The user would select a specific letter by producing a P300 evoked potential when the chosen letter flashed. According to McFarland and Wolpaw (2011), “the P300 is a positive potential occurring around 300 msec after an event that is significant to the subject. It is considered a “cognitive” potential since it is generated in tasks when subjects attend and discriminate stimuli. (…) The fact that the P300 potential reflects attention rather than simply gaze direction implied that this BCI did not depend on muscle (i.e., eye-movement) control. Thus, it represented a significant advance &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The brain’s output channel, in this case, would be EEG, and the generation of the EEG signal depends on the user’s intent and not on the precise orientation of the eyes. This kind of BCI is of greater theoretical interest since it provides the brain with new output pathways. Also, for people with the most severe neuromuscular disabilities, independent BCIs are probably more useful, since such users lack all normal output channels &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
There is also another term that has been used recently: hybrid BCI. According to He et al. (2013), this can be applied to a BCI that employs two different types of brain signals (such as VEPs and sensorimotor rhythms) to produce its outputs, or to a system that combines a BCI output with a natural muscle-based output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Invasive and non-invasive BCIs===&lt;br /&gt;
&lt;br /&gt;
BCIs can also be classified into two classes by the way the neural signals are collected. When the signals are monitored using implanted arrays of electrodes, the system is called invasive. This is common in experiments involving rodents and nonhuman primates, and invasive systems are suited for decoding activity in the cerebral cortex. These types of systems provide measurements with a high signal-to-noise ratio (SNR) and also allow for the decoding of spiking activity from small populations of neurons &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;. The downside of an invasive system is that it causes a significant amount of discomfort and risk to the user &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. In turn, noninvasive systems such as the EEG acquire the signal without the need for surgical implantation. The ongoing challenge with noninvasive techniques is the low SNR, although there have been some developments with the EEG that provide a substantial increase in the SNR &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brief overview of the development of Brain-Computer Interfaces==&lt;br /&gt;
&lt;br /&gt;
For a long time, there was speculation that a device such as the electroencephalograph, which records electrical potentials generated by brain activity, could be used to control devices by taking advantage of the signals it obtains &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. The first demonstrations of BCI technology came in the 1960s: in 1964, Grey Walter used a signal recorded from the scalp by EEG to control a slide projector. Eberhard Fetz also helped advance the development of BCIs by teaching monkeys to control a meter needle through changes in the firing rate of a single cortical neuron. In the 1970s, Jacques Vidal developed a system that used the scalp-recorded visual evoked potential over the visual cortex to determine the direction of eye gaze, and thus the direction in which the user wanted to move a computer cursor. The term brain-computer interface can be traced to Vidal &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. In 1980, Elbert and colleagues demonstrated that people could learn to control slow cortical potentials (SCPs) in scalp-recorded EEG activity; this control was used to adjust the vertical position of a rocket image moving across a TV screen. Later in the 1980s, in 1988, Farwell and Donchin showed that people could use P300 event-related potentials to spell words on a computer screen. Another major development came when Wolpaw and colleagues trained people to control the amplitude of mu and beta rhythms – sensorimotor rhythms – in EEG recorded over the sensorimotor cortex, demonstrating that users could employ these rhythms to move a computer cursor in one or two dimensions &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
BCI research grew rapidly from the mid-1990s onward and continues to grow today. Over the past 20 years it has covered a broad range of areas relevant to the development of BCI technology, including basic and applied neuroscience, biomedical engineering, materials engineering, electrical engineering, signal processing, computer science, assistive technology, and clinical rehabilitation &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brain-Computer Interface components==&lt;br /&gt;
&lt;br /&gt;
To achieve an output that reflects the user’s intent, a BCI has to detect and measure features of brain signals. It has an input (for example, electrophysiological activity from the user), components that translate input into output, a device command (the output), and a protocol that determines the onset and offset of operation, how its timing is controlled, how the feature-translation process is parameterized, the nature of the commands the BCI produces, and how errors in translation are handled &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The BCI system can be divided into four basic components: signal acquisition, feature extraction, feature translation, and device output commands &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
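&lt;br /&gt;
These four components map naturally onto a processing pipeline, as in the following structural sketch; the class and parameter names are illustrative, not drawn from the cited papers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
class BCIPipeline:&lt;br /&gt;
    """Skeleton of the four-stage BCI loop described above."""&lt;br /&gt;
&lt;br /&gt;
    def __init__(self, acquire, extract, translate, actuate):&lt;br /&gt;
        # Each stage is supplied as a callable.&lt;br /&gt;
        self.acquire = acquire      # 1. signal acquisition&lt;br /&gt;
        self.extract = extract      # 2. feature extraction&lt;br /&gt;
        self.translate = translate  # 3. feature translation&lt;br /&gt;
        self.actuate = actuate      # 4. device output commands&lt;br /&gt;
&lt;br /&gt;
    def step(self):&lt;br /&gt;
        raw = self.acquire()&lt;br /&gt;
        features = self.extract(raw)&lt;br /&gt;
        command = self.translate(features)&lt;br /&gt;
        self.actuate(command)  # the result the user sees closes the loop&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;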
&lt;br /&gt;
The first component, signal acquisition, is responsible for measuring the brain’s signals, and adequate acquisition of the signal is important for the function of any BCI. The objective of this part of the system is to detect the voluntary neural activity created by the user, whether by invasive or noninvasive means. To achieve this, some kind of sensor is used, such as scalp electrodes for electrophysiological activity or functional magnetic resonance imaging (fMRI) for hemodynamic activity. The component amplifies the acquired signals for subsequent processing. It may also filter them to remove noise such as power-line interference at 50 or 60 Hz. The amplified signals are then digitized and sent to a computer &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
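&lt;br /&gt;
As a concrete example of that filtering step, power-line interference is commonly suppressed with a notch filter before the samples are passed on. A minimal sketch using SciPy, assuming a 50 Hz mains frequency and a 256 Hz sampling rate (both illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.signal import iirnotch, filtfilt&lt;br /&gt;
&lt;br /&gt;
FS = 256.0     # sampling rate in Hz (assumed)&lt;br /&gt;
MAINS = 50.0   # power-line frequency; 60.0 in other regions&lt;br /&gt;
&lt;br /&gt;
def remove_line_noise(eeg, quality=30.0):&lt;br /&gt;
    """Apply a zero-phase notch filter at the mains frequency."""&lt;br /&gt;
    b, a = iirnotch(MAINS, quality, fs=FS)&lt;br /&gt;
    return filtfilt(b, a, eeg)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;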
&lt;br /&gt;
The next component, feature extraction, analyzes the digitized signals in order to isolate the signal features. These are specific characteristics of the signal, such as power in specific EEG frequency bands or the firing rates of individual cortical neurons. Several feature-extraction procedures can be applied to the digitized signal, such as spatial filtering, voltage amplitude measurements, spectral analyses, or single-neuron separation &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The extracted features are expressed in a compact form suited for translation into output commands. To be effective, the features need to correlate strongly with the user’s intent. It is also important that artifacts, such as electromyographic activity from cranial muscles, are avoided or eliminated to ensure accurate measurement of the desired signal features &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
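&lt;br /&gt;
One of the feature types named above, power in a specific EEG frequency band, can be estimated from the digitized signal with a standard spectral method. A sketch using Welch’s method, with the 8 to 12 Hz mu band as an illustrative choice:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.signal import welch&lt;br /&gt;
&lt;br /&gt;
FS = 256.0  # sampling rate in Hz (assumed)&lt;br /&gt;
&lt;br /&gt;
def band_power(eeg, f_lo=8.0, f_hi=12.0):&lt;br /&gt;
    """Average spectral power of one channel between f_lo and f_hi Hz."""&lt;br /&gt;
    freqs, psd = welch(eeg, fs=FS, nperseg=int(FS))&lt;br /&gt;
    lo = np.searchsorted(freqs, f_lo)&lt;br /&gt;
    hi = np.searchsorted(freqs, f_hi)&lt;br /&gt;
    return psd[lo:hi].mean()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;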
&lt;br /&gt;
After the features have been extracted, they are passed to the feature translation algorithm, which converts them into commands for the output device that accomplish the user’s intent. The translation algorithm should adapt to spontaneous or learned changes in the user’s signal features. This is important “in order to ensure that the user’s possible range of feature values covers the full range of device control and also to make control as effective and efficient as possible &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.” Translation algorithms include linear equations, nonlinear methods such as neural networks, and other classification techniques. Regardless of their nature, these algorithms convert independent variables (the signal features) into dependent variables (the device control commands) &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;Wolpaw, J. R., Birbaumer, N., Heetderks, W. J., McFarland, D. J., Peckham, P. H., Schalk, G., Donchin, E., Quatrano, L. A., Robinson, C. J. and Vaughan, T. M. (2000). Brain-Computer Interface Technology: A Review of the First International Meeting. IEEE Transactions on Rehabilitation Engineering, 8(2): 164-173&amp;lt;/ref&amp;gt;.&lt;br /&gt;
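&lt;br /&gt;
The simplest of these options, a linear equation, maps the feature vector to a control value through a set of weights; adaptation then amounts to re-estimating the user’s feature baseline as it drifts. A minimal sketch (the running-mean update is an illustrative choice, not a method prescribed by the cited papers):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
class LinearTranslator:&lt;br /&gt;
    """Map a feature vector to one device control value."""&lt;br /&gt;
&lt;br /&gt;
    def __init__(self, weights, rate=0.01):&lt;br /&gt;
        self.w = np.asarray(weights, dtype=float)&lt;br /&gt;
        self.mean = np.zeros_like(self.w)  # running feature baseline&lt;br /&gt;
        self.rate = rate                   # adaptation speed&lt;br /&gt;
&lt;br /&gt;
    def __call__(self, features):&lt;br /&gt;
        f = np.asarray(features, dtype=float)&lt;br /&gt;
        # Track the drifting baseline so control stays centered.&lt;br /&gt;
        self.mean += self.rate * (f - self.mean)&lt;br /&gt;
        return float(self.w @ (f - self.mean))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;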
&lt;br /&gt;
Finally, the commands produced by the feature translation algorithm are the output of the BCI. They are sent to the application, where a result is produced, such as selecting a letter, moving a cursor, operating a robotic arm, steering a wheelchair, or any number of other desired outcomes. The operation of the device provides feedback to the user, closing the control loop &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
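&lt;br /&gt;
Tying the stages together, the component sketches above could be wired into one closed loop as follows, reusing the names defined earlier; the acquisition and device-output stand-ins are fabricated for illustration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def read_digitized_eeg():&lt;br /&gt;
    # Stand-in for the acquisition component: one second of fake samples.&lt;br /&gt;
    return np.random.default_rng().standard_normal(int(FS))&lt;br /&gt;
&lt;br /&gt;
def move_cursor(velocity):&lt;br /&gt;
    # Stand-in for the device output; a real system would drive hardware.&lt;br /&gt;
    print("cursor velocity: {:+.3f}".format(velocity))&lt;br /&gt;
&lt;br /&gt;
pipeline = BCIPipeline(&lt;br /&gt;
    acquire=read_digitized_eeg,&lt;br /&gt;
    extract=lambda raw: [band_power(raw)],&lt;br /&gt;
    translate=LinearTranslator(weights=[1.0]),&lt;br /&gt;
    actuate=move_cursor,&lt;br /&gt;
)&lt;br /&gt;
for _ in range(10):  # a few iterations of the user-in-the-loop cycle&lt;br /&gt;
    pipeline.step()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;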
&lt;br /&gt;
==BCI signals==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, brain signals acquired by different methods can be used as BCI inputs. But not all signals are the same: they can differ substantially in topographical resolution, frequency content, area of origin, and technical requirements. Their resolution ranges from the EEG, with centimeter-scale resolution, to the electrocorticogram (ECoG), with millimeter resolution, to neuronal action potentials, with tens-of-microns resolution. The main issue when considering signals for BCI use is which signals can best indicate the user’s intent &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Sensorimotor rhythms were first reported for cursor control by Wolpaw et al. (1991). These are EEG rhythms that vary with movement or the imagination of movement; they are spontaneous, requiring no specific stimuli for their occurrence &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The P300 is an endogenous event-related potential component of the EEG &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;: a positive potential that occurs around 300 msec after an event that has significance to the user. BCIs based on the P300 do not depend on muscle control such as eye movement, since the P300 reflects attention rather than simply gaze direction. Work with both sensorimotor rhythms and the P300 has demonstrated that noninvasively acquired brain signals can be used for communication and control of devices &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Other possibilities for BCI signals have been explored, such as single-neuron activity acquired by microelectrodes implanted in the cortex. This approach has been tested in humans, but mainly in non-human primates. Another body of studies demonstrated that recording electrocorticographic (ECoG) activity from the surface of the brain is also a viable way to produce signals for a BCI system. Both lines of research demonstrate the viability of invasive methods for gathering brain signals useful to BCIs, although questions remain about their suitability and reliability for long-term use in humans &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Besides electrophysiological measures, other types of signals can be useful: magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and functional near-infrared (fNIR) systems. At present, the technology for recording MEG and fMRI is still expensive and bulky, making practical BCI applications unlikely in the near future. fNIR can be cheaper and more compact, but since it is based on changes in cerebral blood flow (like fMRI), which is a slow response, its speed is limited when applied to a BCI system. Currently, therefore, electrophysiological features are the most practical signals for BCI technology &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Invasive and noninvasive techniques for acquiring signals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Invasive signal acquisition is mainly accomplished by electrophysiological recording from electrodes implanted neurosurgically inside the brain or over its surface. The preferred site for implanting electrodes has been the motor cortex, due to its accessibility and its large pyramidal cells, which produce measurable signals generated by actual or imagined motor movements &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The advantage of invasive techniques is their high spatial and temporal resolution, since individual neurons can be recorded at very high sampling rates. Signals recorded intracranially carry more information and allow quicker responses, which in turn may reduce the training and attention required of the user compared with noninvasive methods. However, invasive methods raise several issues. First is the long-term stability and reliability of the signal over the days and years during which a person is expected to use the implanted device: the user must be able to generate the control signal consistently without frequent retuning &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Second is the quality of the signal over long periods. The brain tissue around the implanted device reacts to electrode insertion (Figure 2); this reaction includes not only damage to the local tissue but also irritation at the electrode-tissue interface &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The third issue arises when the device includes a neuroprosthesis that requires a stimulus to activate the disabled limb: the additional stimulus can have a significant effect on neural circuits and may interfere with the signal of interest, so the BCI system must accurately detect and remove such artifacts &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:2.jpg|thumb|Figure 2. Acute (a) and chronic tissue (b) responses after device insertion. (Image taken from He et al., 2013)]]&lt;br /&gt;
&lt;br /&gt;
Success with invasive techniques in humans has been limited, although there has been relatively little experimentation with human subjects. Improving the suitability of invasive methods will require further advances in microelectrodes in order to obtain stable recordings over the long term. Widespread use of invasive techniques in humans would also require more research to decrease the number of cells that must be recorded simultaneously to obtain a useful signal, and to provide feedback to the nervous system via electrical stimulation through the electrodes &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In contrast to invasive techniques, noninvasive methods reduce the risk for users, since neither surgery nor permanent attachment to the device is required &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Several techniques in this category have been used to measure brain activity noninvasively, such as computerized tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is the most prevalent method of signal acquisition for BCIs; its high temporal resolution can capture changes in brain activity that occur within a few msec. Although its spatial resolution does not match that of implanted methods, signals from up to 256 electrode sites can be measured at the same time &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is practical in a laboratory setup (Figure 3) or in a real-world setting; it is portable, inexpensive, and supported by a vast literature of past performance &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:3.jpg|thumb|Figure 3. Example of a simple BCI setup. (Image taken from McFarland and Wolpaw, 2011)]]&lt;br /&gt;
&lt;br /&gt;
==Applications==&lt;br /&gt;
&lt;br /&gt;
A number of disorders disrupt the neuromuscular pathways through which the brain communicates with and controls its external environment. Disorders such as amyotrophic lateral sclerosis (ALS), brainstem stroke, brain or spinal cord injury, cerebral palsy, muscular dystrophies, and multiple sclerosis damage the neural pathways that control muscles or impair the muscles themselves &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One option for restoring function to people with motor impairments is to provide the brain with a non-muscular communication and control channel. A BCI can, therefore, convey messages and commands to the external world, and the potential of these systems for helping disabled people is obvious &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. He et al. (2013) note that “a BCI output could replace natural output that has been lost to injury or disease. Thus, someone who cannot speak could use a BCI to spell words that are then spoken by a speech synthesizer. Or one who has lost limb control could use a BCI to operate a powered wheelchair &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.”&lt;br /&gt;
&lt;br /&gt;
A BCI output could also enhance natural CNS output, for example as a way to prevent lapses of attention during a task that requires constant focus: a BCI could detect the brain activity that precedes a break in attention and produce an output (a sound, for example) that alerts the person. It could likewise supplement natural CNS output, as when a person uses a BCI to control a third, robotic arm, or uses muscle-based control to position a cursor while using a BCI output to select items; in these cases the BCI adds an artificial output to the natural neuromuscular one. Finally, a BCI output could improve natural CNS output. For example, a person whose arm movements are compromised by sensorimotor cortex damage from a stroke could use a BCI that measures signals from the damaged areas to excite muscles or control an orthosis, thereby improving arm movement &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==See Also==&lt;br /&gt;
&#039;&#039;&#039;[[OpenBCI]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Neurable]] - Building BCI for [[VR]] and [[AR]]&lt;br /&gt;
&lt;br /&gt;
[[Neuralink]] - [[Elon Musk]]&#039;s company to develop [[implantable]] [[brain–computer interface]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24906</id>
		<title>Brain-computer interface</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24906"/>
		<updated>2017-12-13T17:12:49Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
A Brain-computer interface (BCI) is a technological system of communication that is based on neural activity generated by the brain &amp;lt;ref name=”1”&amp;gt; Vallabhaneni, A., Wang, T. and He, B. (2005). Brain-Computer Interface. Neural Engineering, Springer US, pp. 85-121&amp;lt;/ref&amp;gt;. It’s comprised of four main parts: a means for acquiring neural signals from the brain, a method for isolating the desired specific features in that signal, an algorithm to decode the signals obtained, and a method for transforming the decoding into an action (Figure 1) &amp;lt;ref name=”2”&amp;gt; Sajda, P., Müller, KR. and Shenoy, K. V. (2008). Brain-Computer Interfaces. IEEE Signal Processing Magazine, 25(1): 16-17&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt; He, B., Gao, S., Yuan, H. and Wolpaw, J. R. (2013). Brain-Computer Interfaces. Neural Engineering, Springer US, pp 87-151&amp;lt;/ref&amp;gt;. This method of communication is independent of the normal output pathways of peripheral nerves and muscles, and the signal can be acquired by using invasive or non-invasive techniques &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. This technology can help to provide a means of communication for people disabled by neurological diseases or injuries, giving them a new channel of output for the brain. It can also enhance functions in healthy individuals &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. BCIs are also named brain-machine interfaces (BMIs) &amp;lt;ref name=”4”&amp;gt; McFarland, D. J. and Wolpaw, J. R. (2011). Brain-Computer Interfaces for Communication and Control. Commun ACM, 54(5): 60–66&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:Figure 1. Basic design of a BCI system. (Image taken from Wolpaw et al., 2002).png|thumb|Figure 1 Basic design of a BCI system. (Image taken from Wolpaw et al., 2002)]]&lt;br /&gt;
&lt;br /&gt;
The central nervous system (CNS) responds to stimuli in the environment or in the body by producing an appropriate output that can be in the form of a neuromuscular or hormonal response. A BCI provides a new output for the CNS that is different from the typical neuromuscular and hormonal ones. It changes the electrophysiological signals from reflections of the CNS activity (such as an electroencephalography – or EEG - rhythm or a neuronal firing rate) into the intended products of that activity, such as messages and commands that act on the world and accomplish the person’s intent &amp;lt;ref name=”5”&amp;gt; Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G. and Vaughan, T. M. (2002). Brain-Computer Interfaces for Communication and Control. Clinical Neurophysiology 113: 767–791&amp;lt;/ref&amp;gt;. Since it measures CNS activity, converting it into an artificial output, it can replace, restore, or enhance the natural CNS output, changing the interactions between the CNS and its internal or external environment &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The electrical signals produced by brain activity can be detected on the scalp, on the cortical surface, or within the brain. As mentioned previously, the BCI has the function of translating these electrical signals into outputs that allow the user to communicate without the peripheral nerves and muscles. This becomes relevant because, since the BCI does not depend on neuromuscular control, it can provide another way of communication for people with disorders such as amyotrophic lateral sclerosis (ALS), brainstem stroke, cerebral palsy and spinal cord injury &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. It needs to be mentioned that a BCI also depends on feedback and on the adaptation of brain activity based on that feedback. According to McFarland and Wolpaw (2011), “communication and control applications are interactive processes that require the user to observe the results of their efforts in order to maintain good performance and to correct mistakes &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The BCI system needs to provide feedback and interact with the adaptations the brain makes in response. The general BCI operation, therefore, depends on the interaction between the user’s brain (where the signals produced are measured by the BCI), and the BCI itself (that translates the signals into specific commands) &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. One of the most difficult challenges in BCI research is the management of the complex interactions between the concurrent adaptations of the CNS and the BCI &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Even though the main objective of BCI research and development is the creation of assistive communication and control technology for disabled people with different ailments, BCIs also have potential as a new type of interface for interacting with a computer or machine for people with normal neurological function. This could be applied to the general population in areas such as gaming, for example, or in high-stress situations like air traffic control. There could also be systems that enhance or supplement human performance such as image analysis, and systems that expand the media access or artistic expression. There has been some research into another possible application for the BCI technology: assistance in the rehabilitation of people disabled by a stroke and other acute events &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The biology of BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Since the BCI includes both a biological and technological component, without specific characteristics of the biological factor that can be used, the system would not work. The technology works because of the way our brains function &amp;lt;ref name=”6”&amp;gt; Grabianowski, E. How Brain-Computer Interfaces Work. Retrieved from computer.howstuffworks.com/brain-computer-interface.htm&amp;lt;/ref&amp;gt;. The human brain (arguably the most complex signal processing machine in existence) is capable of transducing a variety of environmental signals and to extract information from them in order to produce behavior, cognition, and action &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The brain has a myriad of neurons - individual nerve cells connected to one another by dendrites and axons. The actions of the brain are carried out by small electric signals generated by differences in electric potential carried by ions on the membranes of the neurons. Even though the signal pathways are insulated by myelin, there is a residual electric signal that escapes and that can be detected, interpreted, and used, such as in the case of BCIs. This also allows for the development of technologies that send signals into specific regions of the brain. By connecting a camera that could send the same signals as the eye (or close enough) to the brain, a blind person could regain some measure of vision &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The non-invasive recording of the electrical brain activity by electrodes on the surface of the scalp has been known for over 80 years, due to the work of Hans Berger. His observations demonstrated that the electroencephalogram (EEG) could be used as “an index of the gross state of the brain.” Besides the detection of electrical signals from the brain, neural activity can also be monitored by measuring magnetic fields or hemoglobin oxygenation using sensors on the scalp, the surface of the brain, or within the brain &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependent and independent BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The commands that the user sends to the external world through the BCI system do not follow the same output pathways of peripheral nerves and muscles. Instead, a BCI provides the user with an alternative method for acting on the world. The BCIs can be in two different classes: dependent and independent &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. These terms appeared in 2002, and both are used to describe BCIs that use brain signals for the control of applications. The difference between them is in how they depend on natural CNS output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A dependent BCI uses brains signals that depend on muscle activity &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;, such as in the case of a BCI that present the user with a matrix of letters. Each letter flashes one at a time, and it is the objective of the user to select a specific letter by looking directly at it. This initiates a visual evoked potential (VEP) that is recorded from the scalp. The VEP produced when the right intended letter flashes is greater than the VEPs produced when other letters flash. In this example, the brain’s output channel is EEG, but the generation of the signal that is detected is dependent on the direction of the gaze which, in turn, depends on extraocular muscles and the cranial nerves that activate them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An independent BCI, on the contrary, does not depend on natural CNS output; there is no need for muscle activity to generate the brain signals, since the message is not carried by peripheral nerves and muscles &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. This is more advantageous for people who are severely disabled by neuromuscular disorders. An independent BCI would present the user with a matrix of letters that flash one at a time. The user would select a specific letter by producing a P300 evoked potential when the chosen latter flashed. According to McFarland and Wolpaw (2011), “the P300 is a positive potential occurring around 300 msec after an event that is significant to the subject. It is considered a “cognitive” potential since it is generated in tasks when subjects attend and discriminate stimuli. (…) The fact that the P300 potential reflects attention rather than simply gaze direction implied that this BCI did not depend on muscle (i.e., eye-movement) control. Thus, it represented a significant advance &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The brain’s output channel, in this case, would be EEG, and the generation of the EEG signal depends on the user’s intent and not on the precise orientation of the eyes. This kind of BCI is of greater theoretical interest since it provides the brain with new output pathways. Also, for people with the most severe neuromuscular disabilities, independent BCIs are probably more useful since they lack all normal output channels &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
There is also another term that has been used recently: hybrid BCI. According to He et al. (2013) this can be applied to a BCI that employs two different types of brain signals, such has VEPs and sensorimotor rhythms) to produce its outputs, or to a system that combines a BCI output and a natural muscle-based output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Invasive and non-invasive BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
BCIs can also be classified into two different classes by the way the neural signals are collected. When the signals are monitored using implanted arrays of electrodes it is called invasive system. This is common in experiments involving rodents and nonhuman primates, and the invasive system is suited for decoding activity in the cerebral cortex. These type of systems provide measurements with a high signal-to-noise ratio (SNR) and also allow for the decoding of spiking activity from small populations of neurons &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;. The downside of the invasive system is that it causes a significant amount of discomfort and risk to the user &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. In turn, noninvasive systems such as the EEG acquire the signal without the need for surgical implementation. The ongoing challenge with noninvasive techniques is the low SNR, although there have been some developments with the EEG that provide a substantial increase in the SNR &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brief overview of the development of Brain-Computer Interfaces==&lt;br /&gt;
&lt;br /&gt;
For a long time, there was speculation that a device such as an electroencephalogram, which can record electrical potentials generated by brain activity, could be used to control devices by taking advantage of the signals obtained by it &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. In the 1960s there where the first demonstrations of BCIs technology. These were made in 1964 by Grey Walter, which used a signal recorded on the scalp by EEG to control a slide projector. Ebenhard Fetz also helped advance the development of BCIs teaching monkeys to control a meter needle by changing the firing rate of a single cortical neuron. Moving forward to the 1970s, Jacques Vidal developed a system that determined the eye-gaze direction using the scalp-recorded visual evoked potential over the visual cortex to determine the direction in which the user wanted to move a computer cursor. The term brain-computer interface can be traced to Vidal &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. During 1980, Elbert T. and colleagues demonstrated that people could learn to control slow cortical potentials (SCPs) in scalp-recorded RRG activity. This was used to adjust the vertical position of a rocket image moving on a TV screen. Still in the 1980s, more specifically in 1988, Farwell and Donchin proved that people could use the P300 event-related potentials to spell words on a computer screen. Another major development was when Wolpaw and colleagues trained people to control the amplitude of mu and beta rhythms – sensorimotor rhythms – using the EEG recorded over the sensorimotor cortex. They demonstrated that users could use the mu and beta rhythms to move a computer cursor in one or two dimensions &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The research of BCIs increased rapidly in the mid-1990s, continuing to grow into the present years. During the past 20 years, the research has covered a broad range of areas that are relevant to the development of BCI technology, such as basic and applied neuroscience, biomedical engineering, materials engineering, electrical engineering, signal processing, computer science, assistive technology, and clinical rehabilitation &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Bain-Computer Interface components==&lt;br /&gt;
&lt;br /&gt;
A BCI, in order to achieve the desired output that reflects the user’s intent, has to detect and measure features of brain signals. It has an input, for example, the electrophysiological activity from the user, components that translate input into output, a device command (output), and a protocol that determines the onset, offset, how the timing of the operation is controlled, how the feature translation process is parameterized, the nature of the commands that the BCI produces, and how errors in translation are handled &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The BCI system can be divided into four basic components: signal acquisition, feature extraction, feature translation, and device output commands &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The first component, signal acquisition, is responsible for measuring the brains signals, and the adequate acquisition of this signal is important for the function of any BCI. The objective of this part of the BCI system is to detect the voluntary neural activity created by the user, whether by invasive or noninvasive means. To achieve this, some kind of sensor is used, such as scalp electrodes for electrophysiological activity or functional magnetic resonance imaging (fMRI) for hemodynamic activity. The component amplifies the signals obtained for subsequent processing. It may also filter them in order to remove noise like the power line interference, at 60 or 50 Hz. The received signals that were amplified are digitized and sent to a computer &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The next component, feature extraction, analyses those digitized signals with the objective of isolation the signal features. These are specific characteristics in the signal such as power in specific EEG frequency bands or firing rates of individual cortical neurons. There are several feature extraction procedures for the digitized signal such as the spatial filtering, voltage amplitude measurements, spectral analyses or single-neuron separation &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The features extracted are expressed in a compact form that is suited for translation into output commands. These features to be effective need to have a strong correlation with the user’s intent. It is important that artifacts such as electromyogram from cranial muscles are avoided or eliminated to ensure the accurate measurement of the desired signal features &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
After the features have been extracted, these are provided to the feature translation algorithm that converts them into commands for the output device, which will achieve the user’s intent. The translation algorithm should adapt to spontaneous or learned changes in the user’s signal features. This is important “in order to ensure that the user’s possible range of feature values covers the full range of device control and also to make control as effective and efﬁcient as possible &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.” The translation algorithms include linear equations, nonlinear methods such as neural networks, and other classification techniques. Independently of its nature, these algorithms change independent variables (the signal features) into dependent variables, that are the device control commands &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt; Wolpaw, J. R., Birbaumer, N., Heetderks, W. J., McFarland, D. J., Peckham, P.H., Schalk, G., Donchin, E., Quatrano, L. A., Robinson, C. J. and Vaughan, T. M. (2000). Brain-Computer Interface Technology: A Review of the First International Meeting. IEEE Transactions on Rehabilitation Engineering, 8(2): 164-173&amp;lt;/ref&amp;gt; (5; 7).&lt;br /&gt;
&lt;br /&gt;
Finally, the commands that were produced by the feature translation algorithm are the output of the BCI. They are sent to the application and a result is created like a selection of a letter, controlling a cursor, robotic arm operation, wheelchair movement, or any other number of desired outcomes. The realization of the operation of the device provides feedback to the user, closing the control loop &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==BCI signals==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, brain signals acquired by different methods can be used as BCI inputs. But not all signals are the same: they can differ substantially in regards to topographical resolution, frequency content, area of origin, and technical needs. For example, their resolution can range from EEG – that has millimeter resolution – to electrocorticogram (ECoG), with its millimeter resolution, to neuronal action potentials that have tens-of-microns resolution. The main issue when considering signals for BCI usage is what signals can best indicate the user’s intent &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Sensorimotor rhythms were first reported by Wolpaw et al. (1991) for cursor control. These are EEG rhythms that vary according to movement or the imagination of movement and are spontaneous, not requiring specific stimuli for their occurrence &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The P300 type of signal is an endogenous event-related potential component in the EEG &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. It is a positive potential that occurs around 300 msec after an event that has significance to the user. The BCIs based on the P300 potential do not depend on muscle control, such as eye movement since it reflects attention rather than simply gaze direction. Both sensorimotor rhythms and the P300 have demonstrated that the noninvasive acquiring of these brain signals can be used for communication and control of devices &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Other possibilities for BCI signals have been explored, such as using the single-neuron activity that can be acquired by microelectrodes implanted in the cortex. This research has been tried in humans but mainly in non-human primates. Another batch of studies demonstrated that recording electrocorticographic (ECoG) activity from the surface of the brain is also a viable method to produce signals for a BCI system. Both of this studies prove the viability of invasive methods to gather brain signals that could be useful for BCIs. However, there are also issues regarding their suitability and reliability for long-term use in humans &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Besides electrophysiological measures, there are other types of signals that can be useful: Magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and near-infrared systems (fNIR). For recording MEG and fMRI, presently, the technology is still expensive and bulky, reducing the probabilities of them being used for practical applications in the near future in regards to BCIs. fNIR can be cheaper and more compact, but since it is based on changes in cerebral blood flow (like fMRI), which is a slow response, this could impact when applied to a BCI system. In conclusion, currently, electrophysiological features are the most practical signals for BCI technology &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Invasive and noninvasive techniques for acquiring signals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Brain signals acquired by invasive methods are mainly accomplished by electrophysiologic recording from electrodes that are implanted, neurosurgically, on the inside of the person’s brain or over the surface of the brain. The area of the brain that has been the preferred site for implanting electrodes has been the motor cortex, due to its accessibility and large pyramidal cells that produce measurable signals that can be generated by actual or imaginary motor movements &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The advantage of the invasive techniques is their high spatial and temporal resolution since it is possible to record individual neurons at a very high sampling rates. The signals recorded intracranially can obtain more information and allow for quicker responses. This, in turn, may lead to decreased requirements of training and attention on the part of the user when comparing to noninvasive methods. However, there are some issues with invasive methods that need to be taken into account. First, the long-term stability and reliability of the signal over days and years that it is expected that a person would be able to use the implanted device. There is a need for the user to consistently be able to generate the control signal reliably without frequent retuning &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Secondly, the quality of the signal over long time periods is important. The brain tissue around a specific region where a device has been implanted will react after the electrode insertion (figure 2). This reaction includes not only damage to the local tissue but also irritation at the electrode-tissue surface &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The third issue relates to if the device includes a neuroprosthesis that requires a stimulus to activate the disabled limb. The additional stimulus could also produce a significant effect on the neural circuits that might interfere with the signal of interest. The BCI systems must accurately detect and remove this kind of artifacts &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:2.jpg|thumb|Figure 2. Acute (a) and chronic tissue (b) responses after device insertion. (Image taken from He et al., 2013)]]&lt;br /&gt;
&lt;br /&gt;
Success has been limited with invasive techniques applied to humans, although there has not been a lot of experiment with human subjects. To improve the suitability of the invasive method there is a need for further advancements in microelectrodes in order to obtain stable recordings over a long term. For the widespread use of invasive techniques in humans, it would also be necessary more research to decrease of the number of cells required for simultaneous recording to obtain a useful signal, and to provide feedback to the nervous system via electrical stimulation through electrodes &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Contrarily to invasive techniques, noninvasive methods reduce the risk for users since surgery or permanent attachment to the device is not required &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. There are several techniques that belong to this category that have been used to measure brain activity noninvasively such as computerized tomography (CT), positron electron tomography (PET), single-photon emission computed tomography (SPECT), magnetic resonance  imaging (MRI), functional  magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is the most prevalent method of signal acquisition for BCIs, having high temporal resolution that is capable of measuring changes in brain activity that occur within a few msec. Although the resolution of EEG is not on the same level as that of implanted methods, signals from up to 256 electrode sites can be measured at the same time &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is practical in a laboratory setup (figure 3) or in a real-world setting, it is portable, inexpensive, and has a vast literature of past performance &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:3.jpg|thumb|Figure 3. Example of a simple BCI setup. (Image taken from McFarland and Wolpaw, 2011)]]&lt;br /&gt;
&lt;br /&gt;
==Applications==&lt;br /&gt;
&lt;br /&gt;
There are a number of disorders that disrupt the neuromuscular pathways through which the brain communicates with and controls its external environment. Disorders like the amyotrophic lateral sclerosis (ALS), brainstem stroke, brain or spinal cord injury, cerebral palsy, muscular dystrophies, multiple sclerosis, and others undermine the capacity of the neural pathways that control muscles or impair the muscles &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An option for to restore function to people with motor impairments is to provide the brain with a non-muscular communication and control channel. A BCI can, therefore, convey messages and commands to the external world, and the potential of these systems for helping handicapped people is obvious &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.He et al. (2013) mentions that “a BCI output could replace natural output that has been lost to injury or disease. Thus, someone who cannot speak could use a BCI to spell words that are then spoken by a speech synthesizer. Or one who has lost limb control could use a BCI to operate a powered wheelchair &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.”&lt;br /&gt;
&lt;br /&gt;
A BCI output could enhance natural CNS output. For example, as a method to prevent the loss of attention when someone is engaged in a task that requires constant focus. A BCI could detect the brain activity that precedes break in attention and create an output (a sound for example) that would alert the person. It could also supplement natural CNS output, such as in the case of a person that uses a BCI to control a third robotic arm, for example, or to choose items when a user that is controlling the position of the cursor selects them. In these cases, the BCI supplements the natural neuromuscular output with another, the artificial output. Finally, the BCI output could improve the natural CNS output. As an example, a person whose arm movements are compromised by a sensorimotor cortex damaged by a stroke could use a BCI system to measure signals from the damaged areas and then excite muscles or control an orthosis that would improve arm movement &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==See Also==&lt;br /&gt;
&#039;&#039;&#039;[[OpenBCI]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Neurable]] - Building BCI for [[VR]] and [[AR]]&lt;br /&gt;
&lt;br /&gt;
[[Neuralink]] - [[Elon Musk]]&#039;s company to develop [[implantable]] [[brain–computer interface]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24905</id>
		<title>Brain-computer interface</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24905"/>
		<updated>2017-12-13T16:52:25Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
A Brain-computer interface (BCI) is a technological system of communication that is based on neural activity generated by the brain &amp;lt;ref name=”1”&amp;gt; Vallabhaneni, A., Wang, T. and He, B. (2005). Brain-Computer Interface. Neural Engineering, Springer US, pp. 85-121&amp;lt;/ref&amp;gt;. It’s comprised of four main parts: a means for acquiring neural signals from the brain, a method for isolating the desired specific features in that signal, an algorithm to decode the signals obtained, and a method for transforming the decoding into an action (Figure 1) &amp;lt;ref name=”2”&amp;gt; Sajda, P., Müller, KR. and Shenoy, K. V. (2008). Brain-Computer Interfaces. IEEE Signal Processing Magazine, 25(1): 16-17&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt; He, B., Gao, S., Yuan, H. and Wolpaw, J. R. (2013). Brain-Computer Interfaces. Neural Engineering, Springer US, pp 87-151&amp;lt;/ref&amp;gt;. This method of communication is independent of the normal output pathways of peripheral nerves and muscles, and the signal can be acquired by using invasive or non-invasive techniques &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. This technology can help to provide a means of communication for people disabled by neurological diseases or injuries, giving them a new channel of output for the brain. It can also enhance functions in healthy individuals &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. BCIs are also named brain-machine interfaces (BMIs) &amp;lt;ref name=”4”&amp;gt; McFarland, D. J. and Wolpaw, J. R. (2011). Brain-Computer Interfaces for Communication and Control. Commun ACM, 54(5): 60–66&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:Figure 1. Basic design of a BCI system. (Image taken from Wolpaw et al., 2002).png|thumb|Figure 1 Basic design of a BCI system. (Image taken from Wolpaw et al., 2002)]]&lt;br /&gt;
&lt;br /&gt;
The central nervous system (CNS) responds to stimuli in the environment or in the body by producing an appropriate output that can be in the form of a neuromuscular or hormonal response. A BCI provides a new output for the CNS that is different from the typical neuromuscular and hormonal ones. It changes the electrophysiological signals from reflections of the CNS activity (such as an electroencephalography – or EEG - rhythm or a neuronal firing rate) into the intended products of that activity, such as messages and commands that act on the world and accomplish the person’s intent &amp;lt;ref name=”5”&amp;gt; Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G. and Vaughan, T. M. (2002). Brain-Computer Interfaces for Communication and Control. Clinical Neurophysiology 113: 767–791&amp;lt;/ref&amp;gt;. Since it measures CNS activity, converting it into an artificial output, it can replace, restore, or enhance the natural CNS output, changing the interactions between the CNS and its internal or external environment &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The electrical signals produced by brain activity can be detected on the scalp, on the cortical surface, or within the brain. As mentioned previously, the BCI has the function of translating these electrical signals into outputs that allow the user to communicate without the peripheral nerves and muscles. This becomes relevant because, since the BCI does not depend on neuromuscular control, it can provide another way of communication for people with disorders such as amyotrophic lateral sclerosis (ALS), brainstem stroke, cerebral palsy and spinal cord injury &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. It needs to be mentioned that a BCI also depends on feedback and on the adaptation of brain activity based on that feedback. According to McFarland and Wolpaw (2011), “communication and control applications are interactive processes that require the user to observe the results of their efforts in order to maintain good performance and to correct mistakes &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The BCI system needs to provide feedback and interact with the adaptations the brain makes in response. The general BCI operation, therefore, depends on the interaction between the user’s brain (where the signals produced are measured by the BCI), and the BCI itself (that translates the signals into specific commands) &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. One of the most difficult challenges in BCI research is the management of the complex interactions between the concurrent adaptations of the CNS and the BCI &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Even though the main objective of BCI research and development is the creation of assistive communication and control technology for disabled people with different ailments, BCIs also have potential as a new type of interface for interacting with a computer or machine for people with normal neurological function. This could be applied to the general population in areas such as gaming, for example, or in high-stress situations like air traffic control. There could also be systems that enhance or supplement human performance such as image analysis, and systems that expand the media access or artistic expression. There has been some research into another possible application for the BCI technology: assistance in the rehabilitation of people disabled by a stroke and other acute events &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The biology of BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Since the BCI includes both a biological and technological components, without specific characteristics of the biological factor that can be used, the system would not work. The technology works because of the way our brains function &amp;lt;ref name=”6”&amp;gt; Grabianowski, E. How Brain-Computer Interfaces Work. Retrieved from computer.howstuffworks.com/brain-computer-interface.htm&amp;lt;/ref&amp;gt;. The human brain (arguably the most complex signal processing machine in existence) is capable of transducing a variety of environmental signals and to extract information from them in order to produce behavior, cognition, and action &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The brains have a myriad of neurons that are individual nerve cells that are connected to one another by dendrites and axons. The actions of the brain are carried out by small electric signals that are generated by differences in electric potential carried by ions on the membranes of the neurons Even though the signal pathways are insulated by myelin, there is a residual electric signal that escapes and that can be detected, interpreted and used, such as in the case of BCIs. This also allows for the development of technologies that send signals into specific regions of the brain, such as in the case of the optic nerve. By connecting a camera that could send the same signals as the eye (or close enough) to the brain, a blind person could regain some measure of vision &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The non-invasively recording of the electrical brain activity by electrodes on the surface of the scalp has been known for over 80 years ago, due to the work of Hans Berger. His observations demonstrated that the electroencephalogram (EEG) could be used as “an index of the gross state of the brain.” Besides the detection of electrical signals of the brain, the neural activity can also be monitored by measuring magnetic fields or hemoglobin oxygenation, by using sensors on the scalp, the surface of the brain, or within the brain &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependent and independent BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The commands that the user sends to the external world through the BCI system do not follow the same output pathways of peripheral nerves and muscles. Instead, a BCI provides the user with an alternative method for acting on the world. The BCIs can be in two different classes: dependent and independent &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. These terms appeared in 2002, and both are used to describe BCIs that use brain signals for the control of applications. The difference between them is in how they depend on natural CNS output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A dependent BCI uses brain signals that depend on muscle activity &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;, as in the case of a BCI that presents the user with a matrix of letters. Each letter flashes one at a time, and the user selects a specific letter by looking directly at it. This initiates a visual evoked potential (VEP) that is recorded from the scalp. The VEP produced when the intended letter flashes is greater than the VEPs produced when other letters flash. In this example, the brain’s output channel is EEG, but the generation of the detected signal depends on the direction of gaze which, in turn, depends on the extraocular muscles and the cranial nerves that activate them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An independent BCI, on the contrary, does not depend on natural CNS output; there is no need for muscle activity to generate the brain signals, since the message is not carried by peripheral nerves and muscles &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. This is more advantageous for people who are severely disabled by neuromuscular disorders. An independent BCI might also present the user with a matrix of letters that flash one at a time, but the user would select a specific letter by producing a P300 evoked potential when the chosen letter flashed. According to McFarland and Wolpaw (2011), “the P300 is a positive potential occurring around 300 msec after an event that is significant to the subject. It is considered a “cognitive” potential since it is generated in tasks when subjects attend and discriminate stimuli. (…) The fact that the P300 potential reflects attention rather than simply gaze direction implied that this BCI did not depend on muscle (i.e., eye-movement) control. Thus, it represented a significant advance &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The brain’s output channel, in this case, would still be EEG, but the generation of the EEG signal depends on the user’s intent and not on the precise orientation of the eyes. This kind of BCI is of greater theoretical interest since it provides the brain with new output pathways, and independent BCIs are probably more useful for people with the most severe neuromuscular disabilities, since these users may lack all normal output channels &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
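&lt;br /&gt;
As a rough illustration of how such a speller could pick a letter, the sketch below (Python with NumPy; the array names, shapes, and analysis window are assumptions, not a published implementation) averages the EEG epochs recorded after the flashes of each letter and selects the letter whose averaged response is largest in a window around 300 msec:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def select_letter(epochs, fs, window=(0.25, 0.40)):&lt;br /&gt;
    # epochs: shape (n_letters, n_flashes, n_samples), EEG segments&lt;br /&gt;
    # time-locked to each flash of each letter, sampled at fs Hz.&lt;br /&gt;
    erps = epochs.mean(axis=1)  # average over flashes to raise the SNR&lt;br /&gt;
    lo, hi = int(window[0] * fs), int(window[1] * fs)&lt;br /&gt;
    scores = erps[:, lo:hi].mean(axis=1)  # mean amplitude near 300 msec&lt;br /&gt;
    return int(np.argmax(scores))  # index of the attended letter&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;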
&lt;br /&gt;
There is also another term that has come into use more recently: hybrid BCI. According to He et al. (2013), this can be applied to a BCI that employs two different types of brain signals (such as VEPs and sensorimotor rhythms) to produce its outputs, or to a system that combines a BCI output with a natural muscle-based output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Invasive and non-invasive BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
BCIs can also be classified by the way the neural signals are collected. When the signals are monitored using implanted arrays of electrodes, the system is called invasive. This is common in experiments involving rodents and nonhuman primates, and invasive systems are well suited for decoding activity in the cerebral cortex. These types of systems provide measurements with a high signal-to-noise ratio (SNR) and also allow for the decoding of spiking activity from small populations of neurons &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;. The downside of an invasive system is that it causes a significant amount of discomfort and risk to the user &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. In turn, noninvasive systems such as EEG acquire the signal without the need for surgical implantation. The ongoing challenge with noninvasive techniques is the low SNR, although there have been developments with EEG that provide a substantial increase in the SNR &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brief overview of the development of Brain-Computer Interfaces==&lt;br /&gt;
&lt;br /&gt;
For a long time, there was speculation that a device such as the electroencephalograph, which can record electrical potentials generated by brain activity, could be used to control devices through the signals it obtains &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. The first demonstrations of BCI technology came in the 1960s. In 1964, Grey Walter used a signal recorded on the scalp by EEG to control a slide projector. Eberhard Fetz also helped advance the development of BCIs by teaching monkeys to control a meter needle by changing the firing rate of a single cortical neuron. Moving on to the 1970s, Jacques Vidal developed a system that used the scalp-recorded visual evoked potential over the visual cortex to determine the direction in which the user wanted to move a computer cursor. The term brain-computer interface can be traced to Vidal &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. Around 1980, Thomas Elbert and colleagues demonstrated that people could learn to control slow cortical potentials (SCPs) in scalp-recorded EEG activity, and this control was used to adjust the vertical position of a rocket image moving on a TV screen. Still in the 1980s, more specifically in 1988, Farwell and Donchin showed that people could use P300 event-related potentials to spell words on a computer screen. Another major development came when Wolpaw and colleagues trained people to control the amplitude of mu and beta rhythms (sensorimotor rhythms) in the EEG recorded over the sensorimotor cortex, and demonstrated that users could employ these rhythms to move a computer cursor in one or two dimensions &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
BCI research grew rapidly from the mid-1990s and continues to grow today. Over the past 20 years, it has drawn on a broad range of areas relevant to the development of BCI technology, such as basic and applied neuroscience, biomedical engineering, materials engineering, electrical engineering, signal processing, computer science, assistive technology, and clinical rehabilitation &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brain-Computer Interface components==&lt;br /&gt;
&lt;br /&gt;
In order to achieve a desired output that reflects the user’s intent, a BCI has to detect and measure features of brain signals. It has an input (for example, electrophysiological activity from the user), components that translate that input into output, a device command as output, and a protocol that determines the onset and offset of operation, how its timing is controlled, how the feature translation process is parameterized, the nature of the commands that the BCI produces, and how errors in translation are handled &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The BCI system can be divided into four basic components: signal acquisition, feature extraction, feature translation, and device output commands &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
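&lt;br /&gt;
Schematically, one pass through these four components can be written as the short sketch below (Python; the callables are hypothetical stand-ins for real acquisition hardware and algorithms, not an actual API):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def bci_cycle(source, acquire, extract, translate, device):&lt;br /&gt;
    signal = acquire(source)       # 1. signal acquisition&lt;br /&gt;
    features = extract(signal)     # 2. feature extraction&lt;br /&gt;
    command = translate(features)  # 3. feature translation&lt;br /&gt;
    device(command)                # 4. device output; its effect is the feedback&lt;br /&gt;
    return command&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;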
&lt;br /&gt;
The first component, signal acquisition, is responsible for measuring the brain signals, and the adequate acquisition of these signals is important for the function of any BCI. The objective of this part of the system is to detect the voluntary neural activity produced by the user, whether by invasive or noninvasive means. To achieve this, some kind of sensor is used, such as scalp electrodes for electrophysiological activity or functional magnetic resonance imaging (fMRI) for hemodynamic activity. This component amplifies the acquired signals for subsequent processing, and it may also filter them to remove noise such as power-line interference at 50 or 60 Hz. The amplified signals are then digitized and sent to a computer &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
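&lt;br /&gt;
The power-line cleanup mentioned above is often done with a notch filter; a minimal sketch using SciPy (assuming eeg is a one-dimensional array sampled at fs Hz, with names chosen here only for illustration) could look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
from scipy.signal import iirnotch, filtfilt&lt;br /&gt;
&lt;br /&gt;
def remove_line_noise(eeg, fs, line_freq=50.0):&lt;br /&gt;
    # Narrow stop-band centered on the 50 or 60 Hz interference.&lt;br /&gt;
    b, a = iirnotch(w0=line_freq, Q=30.0, fs=fs)&lt;br /&gt;
    return filtfilt(b, a, eeg)  # zero-phase filtering avoids time shifts&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;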
&lt;br /&gt;
The next component, feature extraction, analyzes the digitized signals in order to isolate signal features: specific characteristics of the signal, such as the power in specific EEG frequency bands or the firing rates of individual cortical neurons. There are several feature extraction procedures for the digitized signal, such as spatial filtering, voltage amplitude measurements, spectral analyses, or single-neuron separation &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The extracted features are expressed in a compact form suited for translation into output commands. To be effective, these features need to have a strong correlation with the user’s intent. It is important that artifacts such as the electromyogram from cranial muscles are avoided or eliminated to ensure the accurate measurement of the desired signal features &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
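&lt;br /&gt;
As one concrete example of such a feature (the band edges and window length below are assumptions for illustration), the power in the 8-12 Hz mu band can be estimated from a Welch periodogram:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.signal import welch&lt;br /&gt;
&lt;br /&gt;
def mu_band_power(eeg, fs, f_lo=8.0, f_hi=12.0):&lt;br /&gt;
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s analysis windows&lt;br /&gt;
    i0, i1 = np.searchsorted(freqs, [f_lo, f_hi])        # indices of band edges&lt;br /&gt;
    return np.trapz(psd[i0:i1], freqs[i0:i1])            # integrated band power&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;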
&lt;br /&gt;
After the features have been extracted, they are passed to the feature translation algorithm, which converts them into commands for the output device that carries out the user’s intent. The translation algorithm should adapt to spontaneous or learned changes in the user’s signal features. This is important “in order to ensure that the user’s possible range of feature values covers the full range of device control and also to make control as effective and efficient as possible &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.” Translation algorithms include linear equations, nonlinear methods such as neural networks, and other classification techniques. Regardless of their nature, these algorithms convert independent variables (the signal features) into dependent variables (the device control commands) &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt; Wolpaw, J. R., Birbaumer, N., Heetderks, W. J., McFarland, D. J., Peckham, P.H., Schalk, G., Donchin, E., Quatrano, L. A., Robinson, C. J. and Vaughan, T. M. (2000). Brain-Computer Interface Technology: A Review of the First International Meeting. IEEE Transactions on Rehabilitation Engineering, 8(2): 164-173&amp;lt;/ref&amp;gt;.&lt;br /&gt;
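&lt;br /&gt;
In the linear case this step can be as simple as a weighted sum; the sketch below is a hypothetical example mapping two band-power features onto a one-dimensional cursor velocity, with weights that would be fitted during user calibration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def translate(features, weights, bias):&lt;br /&gt;
    # features: e.g. mu-band power over left and right sensorimotor cortex.&lt;br /&gt;
    return float(np.dot(weights, features) + bias)  # signed cursor velocity&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;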
&lt;br /&gt;
Finally, the commands produced by the feature translation algorithm are the output of the BCI. They are sent to the application, producing a result such as the selection of a letter, the movement of a cursor, the operation of a robotic arm, the steering of a wheelchair, or any number of other desired outcomes. The operation of the device provides feedback to the user, closing the control loop &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==BCI signals==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, brain signals acquired by different methods can be used as BCI inputs. But not all signals are the same: they can differ substantially in topographical resolution, frequency content, area of origin, and technical requirements. For example, their resolution can range from the EEG, with centimeter-scale resolution, to the electrocorticogram (ECoG), with millimeter resolution, to neuronal action potentials, with tens-of-microns resolution. The main issue when considering signals for BCI usage is which signals can best indicate the user’s intent &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Sensorimotor rhythms were first used for cursor control by Wolpaw et al. (1991). These are EEG rhythms that vary with movement or the imagination of movement; they are spontaneous, requiring no specific stimuli to occur &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The P300 is an endogenous event-related potential component of the EEG &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;: a positive potential that occurs around 300 msec after an event that has significance to the user. BCIs based on the P300 do not depend on muscle control such as eye movement, since the P300 reflects attention rather than simply gaze direction. Work with both sensorimotor rhythms and the P300 has demonstrated that noninvasively acquired brain signals can be used for communication and control of devices &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Other possibilities for BCI signals have been explored, such as single-neuron activity acquired by microelectrodes implanted in the cortex. This approach has been tried in humans, but mainly in non-human primates. Another batch of studies demonstrated that recording electrocorticographic (ECoG) activity from the surface of the brain is also a viable way to produce signals for a BCI system. Both of these lines of research prove the viability of invasive methods for gathering brain signals useful to BCIs. However, there are also open questions regarding their suitability and reliability for long-term use in humans &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Besides electrophysiological measures, other types of signals can be useful: magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and functional near-infrared (fNIR) systems. At present, the technology for recording MEG and fMRI is still expensive and bulky, making practical BCI applications unlikely in the near future. fNIR can be cheaper and more compact but, like fMRI, it is based on changes in cerebral blood flow, a slow response that limits its speed when applied to a BCI system. In conclusion, electrophysiological features are currently the most practical signals for BCI technology &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Invasive and noninvasive techniques for acquiring signals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Brain signals acquired by invasive methods are obtained mainly through electrophysiologic recording from electrodes implanted neurosurgically inside the person’s brain or over its surface. The preferred site for implanting electrodes has been the motor cortex, due to its accessibility and its large pyramidal cells, which produce measurable signals during actual or imagined motor movements &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The advantage of invasive techniques is their high spatial and temporal resolution, since it is possible to record individual neurons at very high sampling rates. Signals recorded intracranially carry more information and allow for quicker responses. This, in turn, may reduce the training and attention required of the user compared to noninvasive methods. However, there are some issues with invasive methods that need to be taken into account. The first is the long-term stability and reliability of the signal over the days and years during which a person is expected to use the implanted device; the user must be able to generate the control signal consistently and reliably without frequent retuning &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The second is the quality of the signal over long time periods: the brain tissue around the region where a device has been implanted reacts to the electrode insertion (figure 2), a reaction that includes not only damage to the local tissue but also irritation at the electrode-tissue interface &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The third issue arises if the device includes a neuroprosthesis that requires a stimulus to activate the disabled limb. The additional stimulus could produce a significant effect on the neural circuits and might interfere with the signal of interest, so BCI systems must accurately detect and remove these kinds of artifacts &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:2.jpg|thumb|Figure 2. Acute (a) and chronic tissue (b) responses after device insertion. (Image taken from He et al., 2013)]]&lt;br /&gt;
&lt;br /&gt;
Success with invasive techniques applied to humans has been limited, although there has not been much experimentation with human subjects. Improving the suitability of invasive methods will require further advances in microelectrodes in order to obtain stable recordings over the long term. Widespread use of invasive techniques in humans would also require more research to decrease the number of cells that must be recorded simultaneously to obtain a useful signal, and to provide feedback to the nervous system via electrical stimulation through the electrodes &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In contrast to invasive techniques, noninvasive methods reduce the risk for users since surgery or permanent attachment to the device is not required &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Several techniques in this category have been used to measure brain activity noninvasively, such as computerized tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is the most prevalent method of signal acquisition for BCIs, with a temporal resolution high enough to measure changes in brain activity that occur within a few msec. Although the spatial resolution of EEG is not on the same level as that of implanted methods, signals from up to 256 electrode sites can be measured at the same time &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is practical in a laboratory setup (figure 3) or in a real-world setting; it is portable, inexpensive, and has a vast literature of past performance &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:3.jpg|thumb|Figure 3. Example of a simple BCI setup. (Image taken from McFarland and Wolpaw, 2011)]]&lt;br /&gt;
&lt;br /&gt;
==Applications==&lt;br /&gt;
&lt;br /&gt;
There are a number of disorders that disrupt the neuromuscular pathways through which the brain communicates with and controls its external environment. Disorders like amyotrophic lateral sclerosis (ALS), brainstem stroke, brain or spinal cord injury, cerebral palsy, muscular dystrophies, and multiple sclerosis undermine the capacity of the neural pathways that control muscles, or impair the muscles themselves &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One option for restoring function to people with motor impairments is to provide the brain with a non-muscular communication and control channel. A BCI can, therefore, convey messages and commands to the external world, and the potential of these systems for helping disabled people is obvious &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. He et al. (2013) mention that “a BCI output could replace natural output that has been lost to injury or disease. Thus, someone who cannot speak could use a BCI to spell words that are then spoken by a speech synthesizer. Or one who has lost limb control could use a BCI to operate a powered wheelchair &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.”&lt;br /&gt;
&lt;br /&gt;
A BCI output could also enhance natural CNS output, for example as a way to prevent loss of attention when someone is engaged in a task that requires constant focus. A BCI could detect the brain activity that precedes a lapse in attention and create an output (a sound, for example) that would alert the person. It could also supplement natural CNS output, as in the case of a person who uses a BCI to control a third, robotic arm, or who uses muscles to position a cursor while the BCI selects the items under it. In these cases, the BCI supplements the natural neuromuscular output with an additional, artificial output. Finally, the BCI output could improve natural CNS output. For example, a person whose arm movements are compromised because a stroke damaged the sensorimotor cortex could use a BCI system to measure signals from the damaged areas and then excite muscles or control an orthosis that improves arm movement &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
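&lt;br /&gt;
A toy version of the attention-monitor idea might look like the sketch below (Python; the use of alpha-band power as the tell-tale feature and the per-user threshold are illustrative assumptions, not an established design):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def attention_alert(alpha_power, threshold, alert):&lt;br /&gt;
    # Rising alpha-band power often accompanies lowered vigilance; a real&lt;br /&gt;
    # system would use a validated feature and a calibrated threshold.&lt;br /&gt;
    if alpha_power &amp;gt; threshold:&lt;br /&gt;
        alert()  # e.g. play a sound to re-engage the user&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;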
&lt;br /&gt;
==See Also==&lt;br /&gt;
&#039;&#039;&#039;[[OpenBCI]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Neurable]] - Building BCI for [[VR]] and [[AR]]&lt;br /&gt;
&lt;br /&gt;
[[Neuralink]] - [[Elon Musk]]&#039;s company to develop [[implantable]] [[brain–computer interface]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Pokemon_Go&amp;diff=24904</id>
		<title>Pokemon Go</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Pokemon_Go&amp;diff=24904"/>
		<updated>2017-12-13T16:04:25Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{stub}}&lt;br /&gt;
{{App Infobox&lt;br /&gt;
|image={{#ev:youtube|GQgbXJub-IQ|350}}&lt;br /&gt;
|Developer=[[Niantic Labs]]&lt;br /&gt;
|Publisher=[[The Pokémon Company]]&lt;br /&gt;
|Platform=&lt;br /&gt;
|Device=All iOS and Android Devices&lt;br /&gt;
|Operating System=[[iOS]], [[Android]]&lt;br /&gt;
|Type=[[Full Game]]&lt;br /&gt;
|Genre=[[Action/Adventure]]&lt;br /&gt;
|Input Device=&lt;br /&gt;
|Game Mode=[[Single Player]], [[Multiplayer]]&lt;br /&gt;
|Comfort Level=&lt;br /&gt;
|Version=&lt;br /&gt;
|Rating=&lt;br /&gt;
|Downloads=&lt;br /&gt;
|Release Date=July 6, 2016&lt;br /&gt;
|Price=Free with microtransactions&lt;br /&gt;
|Website=http://www.pokemongo.com/&lt;br /&gt;
|Infobox Updated=7/14/2016&lt;br /&gt;
}}&lt;br /&gt;
[[Pokemon Go]] is a location-based [[augmented reality]] [[mobile game]] developed by [[Niantic Labs]] and published by [[The Pokemon Company]]. This [http://pkmngotrading.com/wiki/Pokemon Pokemon] game was released for all [[iOS]] and [[Android]] [[Devices]] on July 6, 2016.&lt;br /&gt;
==Review==&lt;br /&gt;
&#039;&#039;&#039;A Catch of Success with &amp;quot;Pokémon Go&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
By Paulo Pacheco on July 14, 2016&lt;br /&gt;
&lt;br /&gt;
[http://pkmngotrading.com/wiki/Pokemon_Go_Wiki Pokémon GO] has undoubtedly been a success since its launch on July 6, 2016: a wide phenomenon that has yet to see a worldwide release, but that has nevertheless already captured the interest of millions of people. From the beginnings of the franchise created by Satoshi Tajiri (inspired by his childhood hobby of insect collecting) in the early 90s, to its recent incarnation on smartphones, there seems to be no stopping this long-running series, even if there have been a few bumps in the road for the latest game app.&lt;br /&gt;
&lt;br /&gt;
Described in the official Pokémon Go website as a &#039;Real World Adventure&#039;, the [[augmented reality]] [[Augmented Reality Games|game]] was originally launched in three countries: the USA, Australia, and New Zealand. It quickly grew in popularity and went viral. It’s the fastest mobile game ever to reach No. 1 &amp;lt;ref name=&amp;quot;venturebeat&amp;quot;&amp;gt; http://venturebeat.com/2016/07/11/pokemon-go-outpaces-clash-royale-as-the-fastest-game-ever-to-no-1-on-the-mobile-revenue-charts/&amp;lt;/ref&amp;gt;, and it has become the biggest mobile game in US history, attracting just under 21 million daily active users. If this trend continues, it could even surpass the number of daily active users of [[Snapchat]] and [[Google Maps]] on [[Android]] &amp;lt;ref name=&amp;quot;surveymonkey&amp;quot;&amp;gt;https://www.surveymonkey.com/business/intelligence/pokemon-go-biggest-mobile-game-ever/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The massive success brought, inevitably, an increase in [[Nintendo]]’s share value &amp;lt;ref name=&amp;quot;bloomberg&amp;quot;&amp;gt;http://www.bloomberg.com/quote/7974:JP&amp;lt;/ref&amp;gt;. Viral means good business, and App Annie communications boss Fabien Pierre-Nicolas has estimated that Pokémon GO could be generating over $1 billion of net revenue for [[Niantic Labs]], the game’s developer &amp;lt;ref name=&amp;quot;venturebeat&amp;quot; /&amp;gt;. All of this with an official release in only three countries. A phased roll-out has begun in Europe, with the release of the app in Germany on the 13th and in the United Kingdom on the 14th of July. Other countries are expected to follow in the coming days or weeks &amp;lt;ref name=&amp;quot;twitter&amp;quot;&amp;gt;https://twitter.com/PokemonGoApp&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://www.pocket-lint.com/news/138196-pokemon-go-available-in-the-uk-at-last-get-it-on-itunes-and-google-play&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Even though a lot of the focus has been on Nintendo and the boost in the company’s share value, we must not forget that this is a joint venture between The Pokémon Company, Nintendo, and Niantic (with Google also in the mix, since Niantic was founded as an internal Google startup &amp;lt;ref&amp;gt;http://fortune.com/2016/07/12/google-pokemon-go/&amp;lt;/ref&amp;gt;). But even if Nintendo has only a minority stake in Pokémon GO, the success of the game app means exposure for the Japanese video game company, something much needed since many have viewed Nintendo as being on the decline after the success of the Wii &amp;lt;ref&amp;gt;http://www.forbes.com/sites/erikkain/2016/07/11/will-pokemon-go-be-the-nintendo-cash-cow-investors-are-hoping-for/#2e2b3d765926&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The game takes its foundations from another Niantic creation: [[Ingress]]. It blends real-world exploration with a digital overlay, using the smartphone’s geo-localization and camera functions to superimpose images of [http://pkmngotrading.com/wiki/Pokemon Pokémon] to be captured. Its success can be attributed to this blend of the virtual and the real, to the geo-location and, of course, to the massive appeal of the Pokémon brand. The allure of hunting down and collecting Pokémon is still high &amp;lt;ref&amp;gt;http://theconversation.com/whats-made-poke-mon-go-such-a-viral-success-62420&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://www.themarysue.com/pokemon-go-mental-health/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
People are also moving, gathering, and exploring the outside due to the game app. There have been anecdotal reports that the game is helping people with [[Mental health|depression, anxiety and agoraphobia]] to leave the house, helping them by providing the necessary motivation to overcome their conditions &amp;lt;ref&amp;gt;http://www.themarysue.com/pokemon-go-mental-health/&amp;lt;/ref&amp;gt;. It’s not a cure and, as previously stated, these health benefits are only anecdotal, but it’s an example of how powerful game design can be by providing a system of motivation and rewards. The fact is that walking and spending more time outdoors are good for you &amp;lt;ref&amp;gt;http://www.heart.org/HEARTORG/HealthyLiving/PhysicalActivity/Walking/Walk-Dont-Run-Your-Way-to-a-Healthy-Heart_UCM_452926_Article.jsp&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt; http://www.health.harvard.edu/press_releases/spending-time-outdoors-is-good-for-you&amp;lt;/ref&amp;gt;, and it seems that it’s something that Pokémon GO is making a lot of people do.&lt;br /&gt;
&lt;br /&gt;
There have been problems, too, since the release of the game: servers going down due to the overflow of players (which even delayed the worldwide release of the app), bugs, and a myriad of strange occurrences, like the discovery of a dead body by a teenager while playing the game &amp;lt;ref&amp;gt;http://www.forbes.com/sites/davidthier/2016/07/07/pokemon-go-servers-seem-to-be-struggling/#64880df14958&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://www.inverse.com/article/18130-a-short-history-of-the-police-s-weird-relationship-with-pokemon-go&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://tek.sapo.pt/mobile/apps/artigo/pokemon_go_ja_deu_aso_a_uma_mao_cheia_de_situacoes_bizarras-48106umv.html&amp;lt;/ref&amp;gt;. Recently, there have also been concerns over privacy. Democratic senator Al Franken has even written a letter to Niantic Labs, expressing his worries about the company’s collection, use, and sharing of users’ personal information &amp;lt;ref&amp;gt;http://www.i4u.com/2016/07/113286/pokemon-go-success-has-alarmed-us-senator-al-franken&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://money.cnn.com/2016/07/13/technology/pokemon-go-al-franken/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
These troubles don’t seem to be affecting interest in the game, although questions remain as to whether it will keep its momentum or fade away like so many other apps. An example closer to Nintendo is Miitomo, which had early success but could not sustain it &amp;lt;ref name=&amp;quot;surveymonkey&amp;quot; /&amp;gt; &amp;lt;ref&amp;gt;https://www.surveymonkey.com/business/intelligence/rise-fall-nintendos-miitomo-downloads-arent-enough/&amp;lt;/ref&amp;gt;. With the full worldwide release of Pokémon GO, we will see whether its success is just due to novelty or whether the game is indeed well designed and capable of holding the attention and dedication of players for a long time.&lt;br /&gt;
&lt;br /&gt;
A game originally inspired by natural fauna, by being outdoors and exploring a world filled with novel and wonderful creatures, has now come full circle, inviting people to explore their surroundings and their wonders by merging the real world with the digital creation of the pocket monsters. Whatever the future holds, the impact of Pokémon on gaming culture is undeniable.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Apps]] [[Category:AR Apps]] [[Category:Games]] [[Category:Augmented Reality Games]] [[Category:AR Games]] [[Category:iOS Apps]] [[Category:Android Apps]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Holograms&amp;diff=24903</id>
		<title>Holograms</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Holograms&amp;diff=24903"/>
		<updated>2017-12-13T15:24:54Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 1.png|thumb|Figure 1. Types of light (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 2.png|thumb|Figure 2. Basic hologram setup (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 3.png|thumb|Figure 3. Reconstructing a hologram (image: www.livescience.com)]]&lt;br /&gt;
&lt;br /&gt;
A hologram is the recorded interference pattern between a point source of light of fixed wavelength (the reference beam) and a wavefield scattered from the object (the object beam). A hologram is recorded in a two- or three-dimensional medium and contains information about the entire three-dimensional wavefield of the recorded object. When the hologram is illuminated by the reference beam, the diffraction pattern recreates the lightfield of the original object. The viewer is then able to see an image that is indistinguishable from the recorded object &amp;lt;ref name=”1”&amp;gt; Jeong, A. and Jeong, T. What are the main types of holograms? Retrieved from http://www.integraf.com/resources/articles/a-main-types-of-holograms&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; Schnars, U. and Jüptner, W. (2002). Digital recording and numerical reconstruction of holograms. Meas. Sci. Technol., 13: R85-R101&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The holographic plate is a kind of recording medium in which the 3D virtual image of an object is stored. While in other recording media (e.g., a CD) the grooves contain information about sound that can be used to reconstruct a song, a holographic plate contains information about light that is used to reconstruct an object &amp;lt;ref name=”3”&amp;gt; Physics Central. Holograms: virtually approaching science fiction. Retrieved from http://physicscentral.com/explore/action/hologram.cfm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The information about light is coded in the form of bright and dark microinterferences. Usually, these are not visible to the human eye due to the high spatial frequencies. Reconstructing the object wave by illuminating the hologram with the reference wave creates a 3D image that exhibits the effects of perspective and depth of focus &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This photographic technique of recording light scattered from an object and presenting it as a 3D image is called Holography. The object&#039;s representations generated by this technique are the most lifelike 3D renditions because it records information in a way closer to what our eyes use to see the world around us &amp;lt;ref name=”4”&amp;gt; Workman, R. (2013). What is a hologram? Retrieved from  http://www.livescience.com/34652-hologram.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Bryner, M. (2010). ‘Star Wars’-like holograms nearly a reality. Retrieved from http://www.livescience.com/10227-star-wars-holograms-reality.html&amp;lt;/ref&amp;gt;. Therefore, it is an attractive imaging technique since it allows the viewer to see a complete three-dimensional volume of one image &amp;lt;ref name=”6”&amp;gt; Rosen, J., Katz, B. and Brooker, G. (2009). Review of three-dimensional holographic imaging by Fresnel incoherent correlation holograms. 3D Research, 1(1)&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout the years, several types of holograms have been created. These include transmission holograms, which allow light to be shone through them so that the image can be viewed from the side, and rainbow holograms, which are common in credit cards and driver’s licenses (used for security reasons) &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
While various holograms have been featured in movies like Star Wars and Iron Man, real-world technology has not reached the level presented in those cinematic stories &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. Currently, holograms are still static, but they can look incredible, as in the case of large-scale holograms that are illuminated with lasers or displayed in a darkened room with carefully directed lighting. Some holograms can even appear to move as the viewer walks past them and looks at them from different angles. Others can change colors or include views of different objects, depending on how the viewer looks at them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; Wilson, T. V. (2007). How holograms work. Retrieved from http://science.howstuffworks.com/hologram.htm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One of the interesting traits of a hologram is that if you cut one in half, each half still contains the pattern needed to recreate the original object. Even if a small piece is cut out, it will still contain the entire holographic image. Another curious feature is that a hologram of a magnifying glass will magnify the other objects in the hologram &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==How does it work?==&lt;br /&gt;
&lt;br /&gt;
To create a hologram, holography uses the wave nature of light. In a normal photograph, lenses are used to focus an image on film or an electronic chip, recording where there is light or not. With the holographic technique, the shape a light wave takes after it bounces off an object is recorded. It uses interfering waves of light to capture images that can be 3D. When waves of light meet they interfere with each other, analogous to what happens with waves of water. The pattern created by the interference of waves contains the information used to make the holograms &amp;lt;ref name=”8”&amp;gt; Holographic Studios. A brief history of holography. Retrieved from http://www.holographer.com/history-of-holography/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
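&lt;br /&gt;
Quantitatively, when two coherent beams of intensities &amp;lt;math&amp;gt;I_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;I_2&amp;lt;/math&amp;gt; overlap, the recorded intensity follows the standard two-beam interference formula &amp;lt;math&amp;gt;I = I_1 + I_2 + 2\sqrt{I_1 I_2}\cos\Delta\varphi&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\Delta\varphi&amp;lt;/math&amp;gt; is the phase difference between the beams. It is this cosine term, varying from point to point across the recording medium, that encodes the object information.&lt;br /&gt;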
&lt;br /&gt;
True 3D holograms could not be a practical reality without the invention of the laser. A laser creates waves of light that are coherent, and it is this coherent light that makes it possible to record the interference patterns of holography &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;. While white light contains many frequencies of light traveling in all directions, a laser produces light that has only one wavelength and one color (Figure 1) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In its basic form, three elements are necessary to create a hologram: an object or person, a laser beam, and a recording medium. A clear environment is also recommended to enable the light beams to intersect &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The laser beam is separated into two beams that are redirected using mirrors (Figure 2). One of the beams is directed at the object, while the other, the reference beam, is directed at the recording medium. Some of the light of the object beam is reflected off the object onto the recording medium. The beams intersect and interfere with each other, creating an interference pattern that is imprinted on the recording medium. This medium can be made of various materials; a common choice is photographic film with an added amount of light-reactive grains, enabling a higher resolution for the two beams and making the image more realistic than standard silver halide material &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A developed film from a regular camera shows a negative view of the original scene, with light and dark areas, and looking at it, it is still more or less possible to understand what the original scene looked like. When looking at a developed holographic film, however, nothing resembles the original scene: there may be dark frames of film or a random pattern of lines and swirls, and only with the right illumination is the captured object properly shown &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Using a transmission hologram made with silver halide emulsion as an example, the right light source is needed to recreate the original object beam. The beam is recreated by the diffraction grating and reflective surfaces inside the hologram, which were formed by the interference of the two light sources. The recreated beam is identical to the original object beam before it was combined with the reference wave, and it travels in the same direction as the original beam. Since the object was on the other side of the holographic plate, this means the beam travels towards the viewer. The eyes focus the light, and the brain interprets it as a 3D image located behind the recording medium (Figure 3) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
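&lt;br /&gt;
The recording and reconstruction steps just described can also be imitated numerically. The following toy model (Python with NumPy; the geometry, wavelength, and amplitudes are illustrative assumptions, not a lab procedure) records the interference of a tilted plane reference wave with a point-source object wave and then re-illuminates the stored pattern:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
wavelength = 633e-9                 # a helium-neon laser line, in meters&lt;br /&gt;
k = 2.0 * np.pi / wavelength&lt;br /&gt;
x = np.linspace(-1e-3, 1e-3, 4000)  # a 2 mm strip of the holographic plate&lt;br /&gt;
z = 0.05                            # point object 5 cm behind the plate&lt;br /&gt;
&lt;br /&gt;
reference = np.exp(1j * k * np.sin(np.deg2rad(10.0)) * x)  # tilted plane wave&lt;br /&gt;
radius = np.sqrt(x**2 + z**2)&lt;br /&gt;
obj = 0.5 * np.exp(1j * k * radius)  # spherical object wave (falloff ignored)&lt;br /&gt;
&lt;br /&gt;
hologram = np.abs(reference + obj)**2  # intensity fringes stored by the plate&lt;br /&gt;
reconstruction = hologram * reference  # re-illuminate with the reference beam&lt;br /&gt;
# Expanding hologram * reference yields a term proportional to obj itself,&lt;br /&gt;
# which is why the viewer sees the object wave recreated behind the plate.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;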
&lt;br /&gt;
==Brief history==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1886 -&#039;&#039;&#039; Gabriel Lippmann, in France, develops a theory of using light wave interference to capture color in photography. He presented his theory in 1891 to the Academy of Sciences, along with some primitive examples of his interference color photographs. In 1893, he presented perfect color photographs to the Academy, and in 1908 he won the Nobel Prize in Physics for his work in this area.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1947&#039;&#039;&#039; - Dennis Gabor develops the theory of holography. He coined the term hologram from the Greek words holos (meaning ‘whole’) and gramma (‘message’).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1960 -&#039;&#039;&#039; Nikolay Basov, Alexander Prokhorov, and Charles Townes contributed to the development of the laser. Its pure, intense light was optimal for creating holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1962 -&#039;&#039;&#039; Yuri Denisyuk publishes his work in recording 3D images, inspired by Lippmann’s description of interference photography. He began his experiments in 1958 using a highly filtered mercury discharge tube as his light source.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1968 -&#039;&#039;&#039; Dr. Stephen A. Benton invents white-light transmission holography while researching holographic television. The white-light hologram can be viewed in ordinary white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1972 -&#039;&#039;&#039; Lloyd Cross develops the integral hologram. It combines white-light transmission holography with conventional cinematography to produce moving 3D images. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Holography Virtual Gallery. History of holography. Retrieved from http://www.holography.ru/histeng.htm&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Main types of holograms==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;White-light transmission holograms -&#039;&#039;&#039; These holograms are illuminated with incandescent light and produce images that contain the rainbow spectrum of colors. The colors change depending on the viewer’s point of view. They are also called rainbow holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reflection holograms -&#039;&#039;&#039; Reflection holograms are usually mass-produced using a stamping method. They can be seen in credit cards or in a driver’s license. Normally, these holograms can be viewed in white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Transmission holograms -&#039;&#039;&#039; Typically, a transmission hologram is viewed with laser light. The light is directed from behind the hologram and the image projected to the viewer’s side.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Hybrid hologram -&#039;&#039;&#039; This type of hologram is between the reflection and transmission types. Examples include embossed holograms, integral holograms, holographic interferometry, multichannel holograms, and computer-generated holograms. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”10”&amp;gt; MIT Museum. Holography glossary. Retrieved from https://mitmuseum.mit.edu/holography-glossary&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Holograms&amp;diff=24902</id>
		<title>Holograms</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Holograms&amp;diff=24902"/>
		<updated>2017-12-13T15:09:15Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 1.png|thumb|Figure 1. Types of light (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 2.png|thumb|Figure 2. Basic hologram setup (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 3.png|thumb|Figure 3. Reconstructing a hologram (image: www.livescience.com)]]&lt;br /&gt;
&lt;br /&gt;
A hologram is the recorded interference pattern between a point source of light of fixed wavelength (reference beam) and a wavefield scattered from the object (object beam). A hologram is recorded in a two- or three-dimensional medium and contains information about the entire three-dimensional wavefield of the recorded object. When the hologram is illuminated by the reference beam, the diffraction pattern recreates the lightfield of the original object. The viewer is then able to see an image that is indistinguishable from the recorded object &amp;lt;ref name=”1”&amp;gt; Jeong, A. and Jeong, T. What are the main types of holograms? Retrieved from http://www.integraf.com/resources/articles/a-main-types-of-holograms&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; Schnars, U. and Jüptner, W. (2002). Digital recording and numerical reconstruction of holograms. Meas. Sci. Technol., 13: R85-R101&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The holographic plate is a kind of recording medium in which the 3D virtual image of an object is stored. While in other recording media (e.g., a CD) the grooves contain information about sound that can be used to reconstruct a song, a holographic plate contains information about light that is used to reconstruct an object &amp;lt;ref name=”3”&amp;gt; Physics Central. Holograms: virtually approaching science fiction. Retrieved from http://physicscentral.com/explore/action/hologram.cfm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The information about light is coded in the form of bright and dark microinterferences. Usually, these are not visible to the human eye due to the high spatial frequencies. Reconstructing the object wave by illuminating the hologram with the reference wave creates a 3D image that exhibits the effects of perspective and depth of focus &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This photographic technique of recording light scattered from an object and presenting it as a 3D image is called Holography. The object&#039;s representations generated by this technique are the most lifelike 3D renditions because it records information in a way closer to what our eyes use to see the world around us &amp;lt;ref name=”4”&amp;gt; Workman, R. (2013). What is a hologram? Retrieved from  http://www.livescience.com/34652-hologram.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Bryner, M. (2010). ‘Star Wars’-like holograms nearly a reality. Retrieved from http://www.livescience.com/10227-star-wars-holograms-reality.html&amp;lt;/ref&amp;gt;. Therefore, it is an attractive imaging technique since it allows the viewer to see a complete three-dimensional volume of one image &amp;lt;ref name=”6”&amp;gt; Rosen, J., Katz, B. and Brooker, G. (2009). Review of three-dimensional holographic imaging by Fresnel incoherent correlation holograms. 3D Research, 1(1)&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout the years, several types of holograms have been created. These include transmission holograms, which allow light to be shone through them so that the image can be viewed from the side, and rainbow holograms, which are common in credit cards and driver’s licenses (used for security reasons) &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
While various holograms have been used in movies like Star Wars and Iron Man, the real world technology has not achieved the same level as presented in those cinematic stories &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. Currently, holograms are still static, but they can look incredible such as in the case of large-scale holograms that are illuminated with lasers or displayed in a darkened room with carefully directed lighting. Some holograms can even appear to move as the viewer walks past them, looking at them from different angles. Others can change colors or include views of different objects, depending on how the viewer looks at them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; Wilson, T. V. (2007). How holograms work. Retrieved from http://science.howstuffworks.com/hologram.htm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One of the interesting traits of a hologram is that if you cut one in half, each half still contains the pattern needed to recreate the original object. Even if a small piece is cut out, it will still contain the entire holographic image. Another curious feature is that a hologram of a magnifying glass will magnify the other objects in the hologram &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==How does it work?==&lt;br /&gt;
&lt;br /&gt;
To create a hologram, holography uses the wave nature of light. In a normal photograph, lenses are used to focus an image on film or an electronic chip, recording where there is light or not. With the holographic technique, the shape a light wave takes after it bounces off an object is recorded. It uses interfering waves of light to capture images that can be 3D. When waves of light meet they interfere with each other, analogous to what happens with waves of water. The pattern created by the interference of waves contains the information used to make the holograms &amp;lt;ref name=”8”&amp;gt; Holographic Studios. A brief history of holography. Retrieved from http://www.holographer.com/history-of-holography/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
True 3D holograms could not be a practical reality without the invention of the laser. A laser creates waves of light that are coherent. It is this coherent light that makes it possible to record the light wave interference patterns of holography  &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;. While white light contains all of the different frequencies of light traveling in all directions, laser light produces light that has only one wavelength and one color (Figure 1) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In its basic form, three elements are necessary to create a hologram: an object or person, a laser beam, and a recording medium. A clear environment is recommended to enable the light beams to intersect &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The laser beam is separated into two beams and redirected using mirrors (Figure 2). One of the beams is directed at the object, while the other - the reference beam - is directed to the recording medium. Some of the light of the object beam is reflected off the object onto the recording medium. The beams intersect and interfere with each other, creating an interference pattern that is imprinted on the recording medium. This medium can be composed of various materials. A common recording medium is a photographic film with an added amount of light reactive grains, enabling a higher resolution for the two beams, and making the image more realistic than using silver halide material &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A developed film from a regular camera shows a negative view of the original scene, with light and dark areas, and looking at it, it is still more or less possible to understand what the original scene looked like. When looking at a developed holographic film, however, nothing resembles the original scene: there may be dark frames of film or a random pattern of lines and swirls, and only with the right illumination is the captured object properly shown &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Using a transmission hologram made with silver halide emulsion as an example, there needs to be the right light source to recreate the original object beam. This beam is recreated due to the diffraction grating and reflective surfaces inside the hologram that were caused by the interference of the two light sources. The recreated beam is identical to the original object beam before it was combined with the reference wave. Furthermore, it also travels in the same direction as the original beam. This means that since the object was on the other side of the holographic plate, the beam travels towards the viewer. The eyes focus the light, and the brain interprets it as a 3D image located behind the recording medium (Figure 3) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brief history==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1886 -&#039;&#039;&#039; Gabriel Lippmann, in France, develops a theory of using light wave interference to capture color in photography. He presented his theory in 1891 to the Academy of Sciences, along with some primitive examples of his interference color photographs. In 1893, he presented perfect color photographs to the Academy and won the Nobel Prize in Physics in 1908 for his work in this area.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1947&#039;&#039;&#039; - Dennis Gabor develops the theory of holography. He coined the term hologram from the Greek words holos (meaning ‘whole’) and gramma (‘message’).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1960 -&#039;&#039;&#039; Nikolay Basov, Alexander Prokhorov, and Charles Townes contributed to the development of the laser. Its pure, intense light was optimal for creating holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1962 -&#039;&#039;&#039; Yuri Denisyuk publishes his work in recording 3D images, inspired by Lippmann’s description of interference photography. He began his experiments in 1958 using a highly filtered mercury discharge tube as his light source.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1968 -&#039;&#039;&#039; Dr. Stephen A. Benton invents white-light transmission holography while researching holographic television. The white-light hologram can be viewed in ordinary white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1972 -&#039;&#039;&#039; Lloyd Cross develops the integral hologram. It combines white-light transmission holography with conventional cinematography to produce moving 3D images. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Holography Virtual Gallery. History of holography. Retrieved from http://www.holography.ru/histeng.htm&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Main types of holograms==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;White-light transmission holograms -&#039;&#039;&#039; These holograms are illuminated with incandescent light and produce images that contain the rainbow spectrum of colors. The hologram’s colors change depending on the point of view of the viewer. They are also called rainbow holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reflection holograms -&#039;&#039;&#039; Reflection holograms are usually mass-produced using a stamping method. They can be seen in credit cards or in a driver’s license. Normally, these holograms can be viewed in white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Transmission holograms -&#039;&#039;&#039; Typically, a transmission hologram is viewed with laser light. The light is directed from behind the hologram and the image projected to the viewer’s side.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Hybrid hologram -&#039;&#039;&#039; These are holograms that are between the reflection and transmission types of holograms. Examples include embossed holograms, integral holograms, holographic interferometry, multichannel holograms, and computer-generated holograms. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”10”&amp;gt; MIT Museum. Holography glossary. Retrieved from https://mitmuseum.mit.edu/holography-glossary&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Holograms&amp;diff=24901</id>
		<title>Holograms</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Holograms&amp;diff=24901"/>
		<updated>2017-12-13T14:41:08Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 1.png|thumb|Figure 1. Types of light (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 2.png|thumb|Figure 2. Basic hologram setup (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 3.png|thumb|Figure 3. Reconstructing a hologram (image: www.livescience.com)]]&lt;br /&gt;
&lt;br /&gt;
A hologram is the recorded interference pattern between a point source of light of fixed wavelength (the reference beam) and a wavefield scattered from the object (the object beam). A hologram is recorded in a two- or three-dimensional medium and contains information about the entire three-dimensional wavefield of the recorded object. When the hologram is illuminated by the reference beam, the diffraction pattern recreates the light field of the original object. The viewer is then able to see an image that is indistinguishable from the recorded object &amp;lt;ref name=”1”&amp;gt; Jeong, A. and Jeong, T. What are the main types of holograms? Retrieved from http://www.integraf.com/resources/articles/a-main-types-of-holograms&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; Schnars, U. and Jüptner, W. (2002). Digital recording and numerical reconstruction of holograms. Meas. Sci. Technol., 13: R85-R101&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The holographic plate is a kind of recording medium in which the 3D virtual image of an object is stored. While in other recording media (e.g., the grooves of a vinyl record) the stored information describes sound and can be used to reconstruct a song, a holographic plate contains information about light that is used to reconstruct an object &amp;lt;ref name=”3”&amp;gt; Physics Central. Holograms: virtually approaching science fiction. Retrieved from http://physicscentral.com/explore/action/hologram.cfm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The information about light is coded in the form of bright and dark microinterferences. Usually, these are not visible to the human eye due to the high spatial frequencies. Reconstructing the object wave by illuminating the hologram with the reference wave creates a 3D image that exhibits the effects of perspective and depth of focus &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This photographic technique of recording light scattered from an object and presenting it as a 3D image is called holography. Representations created with this technique are among the most lifelike 3D renditions, because holography captures light in the same way our eyes capture it from the world around us &amp;lt;ref name=”4”&amp;gt; Workman, R. (2013). What is a hologram? Retrieved from http://www.livescience.com/34652-hologram.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Bryner, M. (2010). ‘Star Wars’-like holograms nearly a reality. Retrieved from http://www.livescience.com/10227-star-wars-holograms-reality.html&amp;lt;/ref&amp;gt;. It is therefore an attractive imaging technique, since it allows the viewer to see a complete three-dimensional volume in one image &amp;lt;ref name=”6”&amp;gt; Rosen, J., Katz, B. and Brooker, G. (2009). Review of three-dimensional holographic imaging by Fresnel incoherent correlation holograms. 3D Research, 1(1)&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout the years, several types of holograms have been created. These include transmission holograms, which allow light to be shone through them so that the image can be viewed from the other side, and rainbow holograms, which are common on credit cards and driver’s licenses (used for security reasons) &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
While various holograms have been used in movies like Star Wars and Iron Man, real-world technology has not reached the level depicted in those cinematic stories &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. Currently, holograms are still static, but they can look striking, as in the case of large-scale holograms illuminated with lasers or displayed in a darkened room with carefully directed lighting. Some holograms can even appear to move as the viewer walks past them and looks at them from different angles. Others can change colors or include views of different objects, depending on how the viewer looks at them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; Wilson, T. V. (2007). How holograms work. Retrieved from http://science.howstuffworks.com/hologram.htm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One of the interesting traits of a hologram is that if it is cut in half, each half still contains the pattern needed to recreate the entire image. Even a small piece cut from a hologram still contains the whole holographic image. Another feature is that a hologram of a magnifying glass will itself magnify the other objects in the hologram &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==How does it work?==&lt;br /&gt;
&lt;br /&gt;
To create a hologram, holography uses the wave nature of light. In a normal photograph, lenses focus an image onto film or an electronic chip, recording where light falls and where it does not. The holographic technique instead records the shape a light wave takes after it bounces off an object, using interfering waves of light to capture images that can be 3D. When waves of light meet, they interfere with each other, analogous to what happens with waves of water. The pattern created by this interference contains the information used to make a hologram &amp;lt;ref name=”8”&amp;gt; Holographic Studios. A brief history of holography. Retrieved from http://www.holographer.com/history-of-holography/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
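&lt;br /&gt;
As a minimal numerical sketch of this interference principle (illustrative Python/NumPy only; the wavelength, tilt angle, and plate size are assumed for the example, not taken from the cited sources), two coherent plane waves can be superimposed and the recorded fringe intensity computed:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Hypothetical parameters, chosen only to illustrate the principle.&lt;br /&gt;
wavelength = 633e-9                  # a HeNe laser line, in metres&lt;br /&gt;
k = 2 * np.pi / wavelength           # wavenumber&lt;br /&gt;
x = np.linspace(0, 50e-6, 2000)      # a 50-micrometre strip of the plate&lt;br /&gt;
&lt;br /&gt;
reference = np.exp(1j * k * x * np.sin(0.0))   # reference beam, normal incidence&lt;br /&gt;
obj_beam = np.exp(1j * k * x * np.sin(0.1))    # object beam, tilted by 0.1 rad&lt;br /&gt;
&lt;br /&gt;
# The plate records intensity: the squared modulus of the summed fields.&lt;br /&gt;
# The result is a fringe pattern with spacing wavelength / sin(tilt).&lt;br /&gt;
intensity = np.abs(reference + obj_beam) ** 2&lt;br /&gt;
print(&#039;fringe spacing: %.2f micrometres&#039; % (wavelength / np.sin(0.1) / 1e-6))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;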
&lt;br /&gt;
True 3D holograms could not be a practical reality without the invention of the laser. A laser creates waves of light that are coherent, and it is this coherent light that makes it possible to record the light wave interference patterns of holography &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;. While white light contains many different frequencies of light traveling in all directions, a laser produces light of only one wavelength and one color (Figure 1) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In its basic form, three elements are necessary to create a hologram: an object or person, a laser beam, and a recording medium. A clear environment is recommended to enable the light beams to intersect &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The laser beam is separated into two beams and redirected using mirrors (Figure 2). One of the beams is directed at the object, while the other - the reference beam - is directed at the recording medium. Some of the object beam’s light is reflected off the object onto the recording medium. The beams intersect and interfere with each other, creating an interference pattern that is imprinted on the recording medium. This medium can be made of various materials. A common choice is photographic film with added light-reactive grains, which gives higher resolution for the two beams and makes the image more realistic than standard silver halide material &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A developed film from a regular camera shows a negative view of the original scene, with light and dark areas, and looking at it one can still more or less make out what the scene looked like. A developed holographic film, however, shows nothing resembling the original scene: there may be dark frames or a random pattern of lines and swirls, and only under the right illumination is the captured object properly shown &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Taking a transmission hologram made with silver halide emulsion as an example, the right light source is needed to recreate the original object beam. This beam is recreated by the diffraction grating and reflective surfaces inside the hologram, which were formed by the interference of the two light sources. The recreated beam is identical to the original object beam before it was combined with the reference wave, and it travels in the same direction as the original beam. Since the object was on the other side of the holographic plate, the beam travels towards the viewer. The eyes focus the light, and the brain interprets it as a 3D image located behind the recording medium (Figure 3) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
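&lt;br /&gt;
The reconstruction step can also be checked algebraically. In the sketch below (again illustrative Python/NumPy, under the simplifying assumption of a unit-amplitude reference wave; not a description of any cited experiment), re-illuminating the recorded intensity pattern with the reference wave yields a term that is an exact copy of the object wave:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
n = 1024&lt;br /&gt;
x = np.arange(n)&lt;br /&gt;
&lt;br /&gt;
# A hypothetical complex object wave and a unit-amplitude tilted reference.&lt;br /&gt;
object_wave = rng.normal(size=n) * np.exp(1j * rng.uniform(0, 2 * np.pi, n))&lt;br /&gt;
reference = np.exp(1j * 2 * np.pi * 0.1 * x)&lt;br /&gt;
&lt;br /&gt;
# Recording: the plate stores the interference intensity |O + R|^2.&lt;br /&gt;
hologram = np.abs(object_wave + reference) ** 2&lt;br /&gt;
&lt;br /&gt;
# Reconstruction: |O + R|^2 * R expands to |O|^2 R + R + O + conj(O) R^2,&lt;br /&gt;
# so one term is an exact copy of the object wave O (here |R| = 1).&lt;br /&gt;
reconstructed = hologram * reference&lt;br /&gt;
other_terms = (np.abs(object_wave) ** 2 * reference&lt;br /&gt;
               + reference&lt;br /&gt;
               + np.conj(object_wave) * reference ** 2)&lt;br /&gt;
print(np.allclose(reconstructed - other_terms, object_wave))   # True&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;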
&lt;br /&gt;
==Brief history==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1886 -&#039;&#039;&#039; Gabriel Lippmann, in France, develops a theory of using light wave interference to capture color in photography. He presented his theory to the Academy of Sciences in 1891, along with some primitive examples of his interference color photographs. In 1893, he presented perfect color photographs to the Academy, and in 1908 he was awarded the Nobel Prize in Physics for his work in this area.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1947 -&#039;&#039;&#039; Dennis Gabor develops the theory of holography. He coined the term hologram from the Greek words holos (meaning ‘whole’) and gramma (‘message’).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1960 -&#039;&#039;&#039; Nikolay Basov, Alexander Prokhorov, and Charles Townes contributed to the development of the laser. Its pure, intense light was optimal for creating holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1962 -&#039;&#039;&#039; Yuri Denisyuk publishes his work on recording 3D images, inspired by Lippmann’s description of interference photography. He had begun his experiments in 1958, using a highly filtered mercury discharge tube as his light source.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1968 -&#039;&#039;&#039; Dr. Stephen A. Benton invents white-light transmission holography while researching holographic television. White-light holograms can be viewed in ordinary white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1972 -&#039;&#039;&#039; Lloyd Cross develops the integral hologram. It combines white-light transmission holography with conventional cinematography to produce moving 3D images. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Holography Virtual Gallery. History of holography. Retrieved from http://www.holography.ru/histeng.htm&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Main types of holograms==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;White-light transmission holograms -&#039;&#039;&#039; Holograms of this type are illuminated with incandescent light and produce images containing the rainbow spectrum of colors. The hologram’s colors change depending on the viewer’s point of view. They are also called rainbow holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reflection holograms -&#039;&#039;&#039; Reflection holograms are usually mass-produced using a stamping method. They can be seen on credit cards or driver’s licenses. Normally, these holograms can be viewed in white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Transmission holograms -&#039;&#039;&#039; Typically, a transmission hologram is viewed with laser light. The light is directed from behind the hologram, and the image is projected to the viewer’s side.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Hybrid holograms -&#039;&#039;&#039; These holograms fall between the reflection and transmission types. Examples include embossed holograms, integral holograms, holographic interferometry, multichannel holograms, and computer-generated holograms. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”10”&amp;gt; MIT Museum. Holography glossary. Retrieved from https://mitmuseum.mit.edu/holography-glossary&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Pokemon_Go&amp;diff=24900</id>
		<title>Pokemon Go</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Pokemon_Go&amp;diff=24900"/>
		<updated>2017-12-13T13:51:43Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{stub}}&lt;br /&gt;
{{App Infobox&lt;br /&gt;
|image={{#ev:youtube|GQgbXJub-IQ|350}}&lt;br /&gt;
|Developer=[[Niantic Labs]]&lt;br /&gt;
|Publisher=[[The Pokémon Company]]&lt;br /&gt;
|Platform=&lt;br /&gt;
|Device=All iOS and Android Devices&lt;br /&gt;
|Operating System=[[iOS]], [[Android]]&lt;br /&gt;
|Type=[[Full Game]]&lt;br /&gt;
|Genre=[[Action/Adventure]]&lt;br /&gt;
|Input Device=&lt;br /&gt;
|Game Mode=[[Single Player]], [[Multiplayer]]&lt;br /&gt;
|Comfort Level=&lt;br /&gt;
|Version=&lt;br /&gt;
|Rating=&lt;br /&gt;
|Downloads=&lt;br /&gt;
|Release Date=July 6, 2016&lt;br /&gt;
|Price=Free with microtransactions&lt;br /&gt;
|Website=http://www.pokemongo.com/&lt;br /&gt;
|Infobox Updated=7/14/2016&lt;br /&gt;
}}&lt;br /&gt;
[[Pokemon Go]] is a location-based [[augmented reality]] [[mobile game]] developed by [[Niantic Labs]] and published by [[The Pokemon Company]]. This [http://pkmngotrading.com/wiki/Pokemon Pokemon] game was released for all [[iOS]] and [[Android]] [[Devices]] on July 6, 2016.&lt;br /&gt;
==Review==&lt;br /&gt;
&#039;&#039;&#039;A Catch of Success with &amp;quot;Pokémon Go&amp;quot;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
By Paulo Pacheco on July 14, 2016&lt;br /&gt;
&lt;br /&gt;
[http://pkmngotrading.com/wiki/Pokemon_Go_Wiki Pokémon GO] has undoubtedly been a success since its launch on July 6, 2016. It is a widespread phenomenon that has yet to see a worldwide release, but it has nevertheless already captured the interest of millions of people. From the beginnings of the franchise created by Satoshi Tajiri – inspired by his childhood hobby of insect collecting – in the early 1990s, to its recent incarnation on smartphones, there seems to be no stopping this long-running series, even if there have been a few bumps in the road for the latest game app.&lt;br /&gt;
&lt;br /&gt;
Described in the official Pokémon Go website as a Real World Adventure, the [[augmented reality]] [[Augmented Reality Games|game]] was originally launched in three countries: the USA, Australia, and New Zealand. It quickly grew in popularity, going viral. It’s the fastest mobile game ever to reach No. 1 &amp;lt;ref name=&amp;quot;venturebeat&amp;quot;&amp;gt; http://venturebeat.com/2016/07/11/pokemon-go-outpaces-clash-royale-as-the-fastest-game-ever-to-no-1-on-the-mobile-revenue-charts/&amp;lt;/ref&amp;gt;, and it has become the biggest mobile game in US history, attracting just under 21 million daily active users. If this trend continues, it could even surpass the daily active user counts of [[Snapchat]] and [[Google Maps]] on [[Android]] &amp;lt;ref name=&amp;quot;surveymonkey&amp;quot;&amp;gt;https://www.surveymonkey.com/business/intelligence/pokemon-go-biggest-mobile-game-ever/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The massive success inevitably brought an increase in [[Nintendo]]’s share value &amp;lt;ref name=&amp;quot;bloomberg&amp;quot;&amp;gt;http://www.bloomberg.com/quote/7974:JP&amp;lt;/ref&amp;gt;. Viral means good business, and App Annie communications boss Fabien Pierre-Nicolas has estimated that Pokémon GO could generate over $1 billion of net revenue for [[Niantic Labs]], the game’s developer &amp;lt;ref name=&amp;quot;venturebeat&amp;quot;&amp;gt;http://venturebeat.com/2016/07/11/pokemon-go-outpaces-clash-royale-as-the-fastest-game-ever-to-no-1-on-the-mobile-revenue-charts/&amp;lt;/ref&amp;gt;. All of this with an official release in only three countries. A phased roll-out has begun in Europe, with the release of the app in Germany on July 13 and in the United Kingdom on July 14. Other countries are expected to follow in the coming days or weeks &amp;lt;ref name=&amp;quot;twitter&amp;quot;&amp;gt;https://twitter.com/PokemonGoApp&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://www.pocket-lint.com/news/138196-pokemon-go-available-in-the-uk-at-last-get-it-on-itunes-and-google-play&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Even though a lot of the focus has been on Nintendo, given the boost in the company’s share value, we must not forget that this is a joint venture between The Pokémon Company, Nintendo, and Niantic (with Google also in the mix, since Niantic was founded as an internal Google startup &amp;lt;ref&amp;gt;http://fortune.com/2016/07/12/google-pokemon-go/&amp;lt;/ref&amp;gt;). But even if Nintendo has only a minority stake in Pokémon GO, the success of the game app means exposure for the Japanese video game company. That exposure is much needed, since many have viewed Nintendo as being in decline after the success of the Wii &amp;lt;ref&amp;gt;http://www.forbes.com/sites/erikkain/2016/07/11/will-pokemon-go-be-the-nintendo-cash-cow-investors-are-hoping-for/#2e2b3d765926&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The game builds on the foundations of another Niantic creation: [[Ingress]]. It blends real-world exploration with a digital overlay, using the smartphone’s geolocation and camera functions to superimpose images of [http://pkmngotrading.com/wiki/Pokemon Pokémon] to be captured. Its success can be attributed to this blend of the virtual and the real, to the geolocation and, of course, to the massive appeal of the Pokémon brand. The allure of hunting down and collecting Pokémon remains strong &amp;lt;ref&amp;gt;http://theconversation.com/whats-made-poke-mon-go-such-a-viral-success-62420&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The game app is also getting people moving, gathering, and exploring outdoors. There have been anecdotal reports that the game is helping people with [[Mental health|depression, anxiety and agoraphobia]] to leave the house, by providing the motivation needed to overcome their conditions &amp;lt;ref&amp;gt;http://www.themarysue.com/pokemon-go-mental-health/&amp;lt;/ref&amp;gt;. It’s not a cure and, as stated, these health benefits are only anecdotal, but it’s an example of how powerful game design can be in providing a system of motivation and rewards. Walking and spending more time outdoors are good for you &amp;lt;ref&amp;gt;http://www.heart.org/HEARTORG/HealthyLiving/PhysicalActivity/Walking/Walk-Dont-Run-Your-Way-to-a-Healthy-Heart_UCM_452926_Article.jsp&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt; http://www.health.harvard.edu/press_releases/spending-time-outdoors-is-good-for-you&amp;lt;/ref&amp;gt;, and Pokémon GO seems to be getting a lot of people to do both.&lt;br /&gt;
&lt;br /&gt;
There have also been problems since the game’s recent release: servers going down under the overflow of players (which even delayed the worldwide release of the app), bugs, and a myriad of strange occurrences, such as a teenager discovering a dead body while playing the game &amp;lt;ref&amp;gt;http://www.forbes.com/sites/davidthier/2016/07/07/pokemon-go-servers-seem-to-be-struggling/#64880df14958&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://www.inverse.com/article/18130-a-short-history-of-the-police-s-weird-relationship-with-pokemon-go&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://tek.sapo.pt/mobile/apps/artigo/pokemon_go_ja_deu_aso_a_uma_mao_cheia_de_situacoes_bizarras-48106umv.html&amp;lt;/ref&amp;gt;. Recently, there have also been concerns over privacy: Democratic senator Al Franken has written a letter to Niantic Labs expressing his worries about the company’s collection, use, and sharing of users’ personal information &amp;lt;ref&amp;gt;http://www.i4u.com/2016/07/113286/pokemon-go-success-has-alarmed-us-senator-al-franken&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;http://money.cnn.com/2016/07/13/technology/pokemon-go-al-franken/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
These troubles don’t seem to be dampening interest in the game, although questions remain as to whether it will keep its momentum or fade away like so many other apps. An example closer to Nintendo is Miitomo, which had early success but could not sustain it &amp;lt;ref name=&amp;quot;surveymonkey&amp;quot;&amp;gt;https://www.surveymonkey.com/business/intelligence/pokemon-go-biggest-mobile-game-ever/&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt;https://www.surveymonkey.com/business/intelligence/rise-fall-nintendos-miitomo-downloads-arent-enough/&amp;lt;/ref&amp;gt;. With the full worldwide release of Pokémon GO, we will see whether its success is due merely to novelty, or whether the game is well designed and capable of holding players’ attention and dedication for a long time.&lt;br /&gt;
&lt;br /&gt;
A franchise originally inspired by natural fauna – by being outdoors and exploring a world filled with novel and wonderful creatures – has now come full circle, inviting people to explore their surroundings and their wonders by merging the real world with the digital creation of the pocket monsters. Whatever the future holds, the impact of Pokémon on gaming culture is undeniable.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Apps]] [[Category:AR Apps]] [[Category:Games]] [[Category:Augmented Reality Games]] [[Category:AR Games]] [[Category:iOS Apps]] [[Category:Android Apps]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Holograms&amp;diff=24899</id>
		<title>Holograms</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Holograms&amp;diff=24899"/>
		<updated>2017-12-13T13:38:48Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 1.png|thumb|Figure 1. Types of light (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 2.png|thumb|Figure 2. Basic hologram setup (image: science.howstuffworks.com)]]&lt;br /&gt;
&lt;br /&gt;
[[File:Holograms 3.png|thumb|Figure 3. Reconstructing a hologram (image: www.livescience.com)]]&lt;br /&gt;
&lt;br /&gt;
A hologram is the recorded interference pattern between a point source of light of fixed wavelength (the reference beam) and a wavefield scattered from the object (the object beam). A hologram is recorded in a two- or three-dimensional medium and contains information about the entire three-dimensional wavefield of the recorded object. When the hologram is illuminated by the reference beam, the diffraction pattern recreates the light field of the original object. The viewer is then able to see an image that is indistinguishable from the recorded object &amp;lt;ref name=”1”&amp;gt; Jeong, A. and Jeong, T. What are the main types of holograms? Retrieved from http://www.integraf.com/resources/articles/a-main-types-of-holograms&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; Schnars, U. and Jüptner, W. (2002). Digital recording and numerical reconstruction of holograms. Meas. Sci. Technol., 13: R85-R101&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The holographic plate is a kind of recording medium in which the 3D virtual image of an object is stored. While in other recording media (e.g., the grooves of a vinyl record) the stored information describes sound and can be used to reconstruct a song, a holographic plate contains information about light that is used to reconstruct an object &amp;lt;ref name=”3”&amp;gt; Physics Central. Holograms: virtually approaching science fiction. Retrieved from http://physicscentral.com/explore/action/hologram.cfm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The information about light is coded in the form of bright and dark microinterferences. Usually, these are not visible to the human eye due to the high spatial frequencies. Reconstructing the object wave by illuminating the hologram with the reference wave creates a 3D image that exhibits the effects of perspective and depth of focus &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This photographic technique of recording light scattered from an object and presenting it as a 3D image is called holography. Representations created with this technique are among the most lifelike 3D renditions, because holography captures light in the same way our eyes capture it from the world around us &amp;lt;ref name=”4”&amp;gt; Workman, R. (2013). What is a hologram? Retrieved from http://www.livescience.com/34652-hologram.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Bryner, M. (2010). ‘Star Wars’-like holograms nearly a reality. Retrieved from http://www.livescience.com/10227-star-wars-holograms-reality.html&amp;lt;/ref&amp;gt;. It is therefore an attractive imaging technique, since it allows the viewer to see a complete three-dimensional volume in one image &amp;lt;ref name=”6”&amp;gt; Rosen, J., Katz, B. and Brooker, G. (2009). Review of three-dimensional holographic imaging by Fresnel incoherent correlation holograms. 3D Research, 1(1)&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout the years, several types of holograms have been created. These include transmission holograms, which allow light to be shone through them so that the image can be viewed from the other side, and rainbow holograms, which are common on credit cards and driver’s licenses (used for security reasons) &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
While various holograms have been used in movies like Star Wars and Iron Man, real-world technology has not reached the level depicted in those cinematic stories &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. Currently, holograms are still static, but they can look striking, as in the case of large-scale holograms illuminated with lasers or displayed in a darkened room with carefully directed lighting. Some holograms can even appear to move as the viewer walks past them and looks at them from different angles. Others can change colors or include views of different objects, depending on how the viewer looks at them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; Wilson, T. V. (2007). How holograms work. Retrieved from http://science.howstuffworks.com/hologram.htm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One of the interesting traits of a hologram is that if it is cut in half, each half still contains the pattern needed to recreate the entire image. Even a small piece cut from a hologram still contains the whole holographic image. Another feature is that a hologram of a magnifying glass will itself magnify the other objects in the hologram &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==How does it work?==&lt;br /&gt;
&lt;br /&gt;
To create a hologram, holography uses the wave nature of light. In a normal photograph, lenses focus an image onto film or an electronic chip, recording where light falls and where it does not. The holographic technique instead records the shape a light wave takes after it bounces off an object, using interfering waves of light to capture images that can be 3D. When waves of light meet, they interfere with each other, analogous to what happens with waves of water. The pattern created by this interference contains the information used to make a hologram &amp;lt;ref name=”8”&amp;gt; Holographic Studios. A brief history of holography. Retrieved from http://www.holographer.com/history-of-holography/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
True 3D holograms could not be a practical reality without the invention of the laser. A laser creates waves of light that are coherent, and it is this coherent light that makes it possible to record the light wave interference patterns of holography &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;. While white light contains many different frequencies of light traveling in all directions, a laser produces light of only one wavelength and one color (Figure 1) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In its basic form, three elements are necessary to create a hologram: an object or person, a laser beam, and a recording medium. A clear environment is recommended to enable the light beams to intersect &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The laser beam is separated into two beams and redirected using mirrors (Figure 2). One of the beams is directed at the object, while the other - the reference beam - is directed at the recording medium. Some of the object beam’s light is reflected off the object onto the recording medium. The beams intersect and interfere with each other, creating an interference pattern that is imprinted on the recording medium. This medium can be made of various materials. A common choice is photographic film with added light-reactive grains, which gives higher resolution for the two beams and makes the image more realistic than standard silver halide material &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A developed film from a regular camera shows a negative view of the original scene, with light and dark areas, and looking at it one can still more or less make out what the scene looked like. A developed holographic film, however, shows nothing resembling the original scene: there may be dark frames or a random pattern of lines and swirls, and only under the right illumination is the captured object properly shown &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Taking a transmission hologram made with silver halide emulsion as an example, the right light source is needed to recreate the original object beam. This beam is recreated by the diffraction grating and reflective surfaces inside the hologram, which were formed by the interference of the two light sources. The recreated beam is identical to the original object beam before it was combined with the reference wave, and it travels in the same direction as the original beam. Since the object was on the other side of the holographic plate, the beam travels towards the viewer. The eyes focus the light, and the brain interprets it as a 3D image located behind the recording medium (Figure 3) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brief history==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1886 -&#039;&#039;&#039; Gabriel Lippmann, in France, develops a theory of using light wave interference to capture color in photography. He presented his theory to the Academy of Sciences in 1891, along with some primitive examples of his interference color photographs. In 1893, he presented perfect color photographs to the Academy, and in 1908 he was awarded the Nobel Prize in Physics for his work in this area.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1947 -&#039;&#039;&#039; Dennis Gabor develops the theory of holography. He coined the term hologram from the Greek words holos (meaning ‘whole’) and gramma (‘message’).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1960 -&#039;&#039;&#039; Nikolay Basov, Alexander Prokhorov, and Charles Townes contributed to the development of the laser. Its pure, intense light was optimal for creating holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1962 -&#039;&#039;&#039; Yuri Denisyuk publishes his work on recording 3D images, inspired by Lippmann’s description of interference photography. He had begun his experiments in 1958, using a highly filtered mercury discharge tube as his light source.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1968 -&#039;&#039;&#039; Dr. Stephen A. Benton invents white-light transmission holography while researching holographic television. White-light holograms can be viewed in ordinary white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1972 -&#039;&#039;&#039; Lloyd Cross develops the integral hologram. It combines white-light transmission holography with conventional cinematography to produce moving 3D images. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Holography Virtual Gallery. History of holography. Retrieved from http://www.holography.ru/histeng.htm&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Main types of holograms==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;White-light transmission holograms -&#039;&#039;&#039; Holograms of this type are illuminated with incandescent light and produce images containing the rainbow spectrum of colors. The hologram’s colors change depending on the viewer’s point of view. They are also called rainbow holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reflection holograms -&#039;&#039;&#039; Reflection holograms are usually mass-produced using a stamping method. They can be seen on credit cards or driver’s licenses. Normally, these holograms can be viewed in white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Transmission holograms -&#039;&#039;&#039; Typically, a transmission hologram is viewed with laser light. The light is directed from behind the hologram, and the image is projected to the viewer’s side.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Hybrid holograms -&#039;&#039;&#039; These holograms fall between the reflection and transmission types. Examples include embossed holograms, integral holograms, holographic interferometry, multichannel holograms, and computer-generated holograms. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”10”&amp;gt; MIT Museum. Holography glossary. Retrieved from https://mitmuseum.mit.edu/holography-glossary&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24898</id>
		<title>Brain-computer interface</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Brain-computer_interface&amp;diff=24898"/>
		<updated>2017-12-13T12:58:22Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
A brain-computer interface (BCI) is a technological system of communication based on the neural activity generated by the brain &amp;lt;ref name=”1”&amp;gt; Vallabhaneni, A., Wang, T. and He, B. (2005). Brain-Computer Interface. Neural Engineering, Springer US, pp. 85-121&amp;lt;/ref&amp;gt;. It comprises four main parts: a means of acquiring neural signals from the brain, a method for isolating the desired features in that signal, an algorithm for decoding the obtained signals, and a method for transforming the decoded signal into an action (Figure 1) &amp;lt;ref name=”2”&amp;gt; Sajda, P., Müller, KR. and Shenoy, K. V. (2008). Brain-Computer Interfaces. IEEE Signal Processing Magazine, 25(1): 16-17&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt; He, B., Gao, S., Yuan, H. and Wolpaw, J. R. (2013). Brain-Computer Interfaces. Neural Engineering, Springer US, pp 87-151&amp;lt;/ref&amp;gt;. This method of communication is independent of the normal output pathways of peripheral nerves and muscles. The signal can be acquired using invasive or non-invasive techniques &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. The technology can provide a means of communication for people disabled by neurological diseases or injuries, giving the brain a new output channel, and it can also enhance function in healthy individuals &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. BCIs are also called brain-machine interfaces (BMIs) &amp;lt;ref name=”4”&amp;gt; McFarland, D. J. and Wolpaw, J. R. (2011). Brain-Computer Interfaces for Communication and Control. Commun ACM, 54(5): 60–66&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:Figure 1. Basic design of a BCI system. (Image taken from Wolpaw et al., 2002).png|thumb|Figure 1 Basic design of a BCI system. (Image taken from Wolpaw et al., 2002)]]&lt;br /&gt;
&lt;br /&gt;
The central nervous system (CNS) responds to stimuli in the environment or in the body by producing an appropriate output, which can take the form of a neuromuscular or hormonal response. A BCI provides the CNS with a new output that differs from the typical neuromuscular and hormonal ones. It changes electrophysiological signals from mere reflections of CNS activity (such as an electroencephalography – EEG – rhythm or a neuronal firing rate) into the intended products of that activity: messages and commands that act on the world and accomplish the person’s intent &amp;lt;ref name=”5”&amp;gt; Wolpaw, J. R., Birbaumer, N., McFarland, D. J., Pfurtscheller, G. and Vaughan, T. M. (2002). Brain-Computer Interfaces for Communication and Control. Clinical Neurophysiology 113: 767–791&amp;lt;/ref&amp;gt;. Since it measures CNS activity and converts it into an artificial output, a BCI can replace, restore, or enhance natural CNS output, changing the interactions between the CNS and its internal or external environment &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The electrical signals produced by brain activity can be detected on the scalp, on the cortical surface, or within the brain. As mentioned previously, the BCI translates these electrical signals into outputs that allow the user to communicate without peripheral nerves and muscles. This is relevant because, since the BCI does not depend on neuromuscular control, it can provide an alternative means of communication for people with disorders such as amyotrophic lateral sclerosis (ALS), brainstem stroke, cerebral palsy, and spinal cord injury &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. A BCI also depends on feedback and on the adaptation of brain activity based on that feedback. According to McFarland and Wolpaw (2011), “communication and control applications are interactive processes that require the user to observe the results of their efforts in order to maintain good performance and to correct mistakes &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The BCI system therefore needs to provide feedback and interact with the adaptations the brain makes in response. General BCI operation thus depends on the interaction between the user’s brain, which produces the signals the BCI measures, and the BCI itself, which translates those signals into specific commands &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. One of the most difficult challenges in BCI research is managing the complex interactions between the concurrent adaptations of the CNS and the BCI &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Even though the main objective of BCI research and development is assistive communication and control technology for people disabled by various ailments, BCIs also have potential as a new type of interface with computers or machines for people with normal neurological function. This could apply to the general population in gaming, for example, or in high-stress situations such as air traffic control. There could also be systems that enhance or supplement human performance, such as in image analysis, and systems that expand media access or artistic expression. There has also been some research into another possible application of BCI technology: assisting the rehabilitation of people disabled by stroke and other acute events &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The biology of BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Since a BCI includes both biological and technological components, the system would not work without specific, exploitable characteristics of the biological part: the technology works because of the way our brains function &amp;lt;ref name=”6”&amp;gt; Grabianowski, E. How Brain-Computer Interfaces Work. Retrieved from computer.howstuffworks.com/brain-computer-interface.htm&amp;lt;/ref&amp;gt;. The human brain (arguably the most complex signal-processing machine in existence) is capable of transducing a variety of environmental signals and of extracting information from them in order to produce behavior, cognition, and action &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The brain contains a myriad of neurons, individual nerve cells connected to one another by dendrites and axons. The actions of the brain are carried out by small electric signals generated by differences in electric potential carried by ions across the membranes of the neurons. Even though the signal pathways are insulated by myelin, a residual electric signal escapes, and it can be detected, interpreted, and used, as in the case of BCIs. This also allows for the development of technologies that send signals into specific regions of the brain, as in the case of the optic nerve: by connecting a camera that sends the same signals as the eye (or close enough) to the brain, a blind person could regain some measure of vision &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Non-invasive recording of electrical brain activity with electrodes on the surface of the scalp has been possible for more than 80 years, thanks to the work of Hans Berger. His observations demonstrated that the electroencephalogram (EEG) could be used as “an index of the gross state of the brain.” Besides detecting the brain’s electrical signals, neural activity can also be monitored by measuring magnetic fields or hemoglobin oxygenation, using sensors on the scalp, on the surface of the brain, or within the brain &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Dependent and independent BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The commands that the user sends to the external world through a BCI system do not follow the normal output pathways of peripheral nerves and muscles; instead, the BCI provides the user with an alternative method of acting on the world. BCIs fall into two classes: dependent and independent &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. These terms appeared in 2002, and both describe BCIs that use brain signals to control applications. The difference between them lies in how they depend on natural CNS output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A dependent BCI uses brain signals that depend on muscle activity &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;, as in the case of a BCI that presents the user with a matrix of letters. Each letter flashes one at a time, and the user selects a specific letter by looking directly at it. This elicits a visual evoked potential (VEP) that is recorded from the scalp. The VEP produced when the intended letter flashes is greater than the VEPs produced when other letters flash. In this example, the brain’s output channel is EEG, but the generation of the detected signal depends on gaze direction, which in turn depends on the extraocular muscles and the cranial nerves that activate them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An independent BCI, by contrast, does not depend on natural CNS output: no muscle activity is needed to generate the brain signals, since the message is not carried by peripheral nerves and muscles &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. This is more advantageous for people who are severely disabled by neuromuscular disorders. An independent BCI might present the user with a matrix of letters that flash one at a time, with the user selecting a specific letter by producing a P300 evoked potential when the chosen letter flashes. According to McFarland and Wolpaw (2011), “the P300 is a positive potential occurring around 300 msec after an event that is significant to the subject. It is considered a “cognitive” potential since it is generated in tasks when subjects attend and discriminate stimuli. (…) The fact that the P300 potential reflects attention rather than simply gaze direction implied that this BCI did not depend on muscle (i.e., eye-movement) control. Thus, it represented a significant advance &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.” The brain’s output channel in this case is EEG, and the generation of the EEG signal depends on the user’s intent, not on the precise orientation of the eyes. This kind of BCI is of greater theoretical interest, since it provides the brain with entirely new output pathways; and for people with the most severe neuromuscular disabilities, who lack all normal output channels, independent BCIs are probably the more useful option &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
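&lt;br /&gt;
To make the P300 idea concrete, the following toy simulation (illustrative Python/NumPy only; the sampling rate, amplitudes, and trial counts are assumptions made for the example, not values from the cited literature) shows how averaging epochs time-locked to the flashes reveals a positive deflection near 300 ms only for the attended letter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(1)&lt;br /&gt;
fs = 256                              # assumed sampling rate, in Hz&lt;br /&gt;
t = np.arange(int(0.6 * fs)) / fs     # a 600 ms epoch after each flash&lt;br /&gt;
&lt;br /&gt;
# Simulated epochs: background noise everywhere, plus a positive&lt;br /&gt;
# deflection near 300 ms only when the attended letter flashes.&lt;br /&gt;
p300 = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))&lt;br /&gt;
target_epochs = rng.normal(0, 10e-6, (40, t.size)) + p300&lt;br /&gt;
other_epochs = rng.normal(0, 10e-6, (200, t.size))&lt;br /&gt;
&lt;br /&gt;
# Averaging time-locked epochs suppresses background EEG and leaves&lt;br /&gt;
# the event-related potential visible only in the target average.&lt;br /&gt;
target_avg = target_epochs.mean(axis=0)&lt;br /&gt;
other_avg = other_epochs.mean(axis=0)&lt;br /&gt;
print(&#039;target average peaks at %.0f ms&#039; % (t[np.argmax(target_avg)] * 1e3))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;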
&lt;br /&gt;
Another term has also come into use recently: the hybrid BCI. According to He et al. (2013), this can refer to a BCI that employs two different types of brain signals (such as VEPs and sensorimotor rhythms) to produce its outputs, or to a system that combines a BCI output with a natural muscle-based output &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Invasive and non-invasive BCIs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
BCIs can also be classified into two classes according to how the neural signals are collected. When the signals are monitored using implanted arrays of electrodes, the system is called invasive. This is common in experiments involving rodents and nonhuman primates, and invasive systems are well suited to decoding activity in the cerebral cortex. Systems of this type provide measurements with a high signal-to-noise ratio (SNR) and also allow spiking activity to be decoded from small populations of neurons &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;. The downside of an invasive system is the significant discomfort and risk it poses to the user &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. Noninvasive systems such as EEG, in turn, acquire the signal without the need for surgical implantation. The ongoing challenge with noninvasive techniques is their low SNR, although some developments in EEG have provided a substantial increase in SNR &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brief overview of the development of Brain-Computer Interfaces==&lt;br /&gt;
&lt;br /&gt;
For a long time, there was speculation that a device such as the electroencephalograph, which records electrical potentials generated by brain activity, could be used to control devices by taking advantage of the signals it obtains &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. The first demonstrations of BCI technology came in the 1960s: in 1964, Grey Walter used a signal recorded on the scalp by EEG to control a slide projector. Eberhard Fetz also helped advance the development of BCIs by teaching monkeys to control a meter needle through changes in the firing rate of a single cortical neuron. Moving forward to the 1970s, Jacques Vidal developed a system that used the scalp-recorded visual evoked potential over the visual cortex to determine eye-gaze direction, and thus the direction in which the user wanted to move a computer cursor. The term brain-computer interface can be traced to Vidal &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. In 1980, Elbert and colleagues demonstrated that people could learn to control slow cortical potentials (SCPs) in scalp-recorded EEG activity; this control was used to adjust the vertical position of a rocket image moving across a TV screen. Still in the 1980s, more specifically in 1988, Farwell and Donchin showed that people could use P300 event-related potentials to spell words on a computer screen. Another major development came when Wolpaw and colleagues trained people to control the amplitude of mu and beta rhythms – sensorimotor rhythms – in the EEG recorded over the sensorimotor cortex, demonstrating that users could use these rhythms to move a computer cursor in one or two dimensions &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
BCI research grew rapidly from the mid-1990s onward and continues to grow today. Over the past 20 years, it has covered a broad range of areas relevant to the development of BCI technology, such as basic and applied neuroscience, biomedical engineering, materials engineering, electrical engineering, signal processing, computer science, assistive technology, and clinical rehabilitation &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brain-Computer Interface components==&lt;br /&gt;
&lt;br /&gt;
To achieve a desired output that reflects the user’s intent, a BCI has to detect and measure features of brain signals. It has an input (for example, electrophysiological activity from the user), components that translate input into output, a device command (the output), and a protocol that determines the onset and offset of operation, how the timing of the operation is controlled, how the feature translation process is parameterized, the nature of the commands the BCI produces, and how translation errors are handled &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The BCI system can be divided into four basic components: signal acquisition, feature extraction, feature translation, and device output commands &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
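&lt;br /&gt;
Purely as an illustrative sketch of these four stages (Python/NumPy; every function name and signal shape below is an assumption made for this example, not a real BCI driver API), the components can be pictured as a simple closed loop, with the device output serving as the feedback the user observes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def acquire_signal():&lt;br /&gt;
    # stand-in for amplified, digitized recordings: 8 channels, 1 s at 256 Hz&lt;br /&gt;
    return np.random.randn(8, 256)&lt;br /&gt;
&lt;br /&gt;
def extract_features(signal):&lt;br /&gt;
    # crude stand-in for band-power features: mean squared amplitude per channel&lt;br /&gt;
    return (signal ** 2).mean(axis=1)&lt;br /&gt;
&lt;br /&gt;
def translate_features(features, weights):&lt;br /&gt;
    # a linear translation algorithm: features in, one control value out&lt;br /&gt;
    return float(features @ weights)&lt;br /&gt;
&lt;br /&gt;
def send_command(value):&lt;br /&gt;
    # device output stage, e.g. the vertical velocity of a cursor&lt;br /&gt;
    print(&#039;cursor velocity:&#039;, value)&lt;br /&gt;
&lt;br /&gt;
weights = np.ones(8) / 8&lt;br /&gt;
for _ in range(3):   # the closed loop: the user observes each result as feedback&lt;br /&gt;
    send_command(translate_features(extract_features(acquire_signal()), weights))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;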
&lt;br /&gt;
The first component, signal acquisition, is responsible for measuring the brain’s signals, and adequate acquisition of the signal is important for the functioning of any BCI. The objective of this part of the system is to detect the voluntary neural activity created by the user, whether by invasive or noninvasive means. To achieve this, some kind of sensor is used, such as scalp electrodes for electrophysiological activity or functional magnetic resonance imaging (fMRI) for hemodynamic activity. The component amplifies the acquired signals for subsequent processing, and it may also filter them to remove noise such as power-line interference at 50 or 60 Hz. The amplified signals are then digitized and sent to a computer &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
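&lt;br /&gt;
A minimal sketch of the noise-removal step mentioned here, using Python with SciPy (the sampling rate and signal are simulated for illustration; a 60 Hz notch would be used where the mains frequency is 60 Hz):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.signal import iirnotch, filtfilt&lt;br /&gt;
&lt;br /&gt;
fs = 256                                  # assumed sampling rate, in Hz&lt;br /&gt;
t = np.arange(2 * fs) / fs&lt;br /&gt;
# Simulated scalp recording: EEG-like noise plus 50 Hz power-line interference.&lt;br /&gt;
raw = np.random.randn(t.size) + 5 * np.sin(2 * np.pi * 50 * t)&lt;br /&gt;
&lt;br /&gt;
# Notch filter centred on the interference frequency, applied forwards&lt;br /&gt;
# and backwards so the filtered signal is not phase-shifted.&lt;br /&gt;
b, a = iirnotch(w0=50, Q=30, fs=fs)&lt;br /&gt;
cleaned = filtfilt(b, a, raw)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;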
&lt;br /&gt;
The next component, feature extraction, analyzes the digitized signals with the objective of isolating the signal features. These are specific characteristics of the signal, such as power in specific EEG frequency bands or the firing rates of individual cortical neurons. Several feature extraction procedures can be applied to the digitized signal, such as spatial filtering, voltage amplitude measurements, spectral analyses, or single-neuron separation &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The extracted features are expressed in a compact form suited for translation into output commands. To be effective, these features need to correlate strongly with the user’s intent, and artifacts such as electromyographic activity from cranial muscles must be avoided or eliminated to ensure accurate measurement of the desired signal features &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
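&lt;br /&gt;
For instance, a band-power feature of the kind translated by sensorimotor-rhythm BCIs might be computed as follows (an illustrative Python/SciPy sketch; the sampling rate and the 8-12 Hz band edges are assumptions for the example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy.signal import welch&lt;br /&gt;
&lt;br /&gt;
fs = 256&lt;br /&gt;
channel = np.random.randn(4 * fs)   # placeholder for one EEG channel, 4 s&lt;br /&gt;
&lt;br /&gt;
# Welch power spectral density with 1 Hz bins (nperseg equal to fs), then&lt;br /&gt;
# the average power in the 8-12 Hz mu band as a single feature.&lt;br /&gt;
freqs, psd = welch(channel, fs=fs, nperseg=fs)&lt;br /&gt;
mu_power = psd[8:13].mean()         # bins at 8, 9, 10, 11 and 12 Hz&lt;br /&gt;
print(&#039;mu band power:&#039;, mu_power)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;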
&lt;br /&gt;
Once extracted, the features are provided to the feature translation algorithm, which converts them into commands for the output device that accomplish the user’s intent. The translation algorithm should adapt to spontaneous or learned changes in the user’s signal features. This is important “in order to ensure that the user’s possible range of feature values covers the full range of device control and also to make control as effective and efficient as possible &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.” Translation algorithms include linear equations, nonlinear methods such as neural networks, and other classification techniques. Whatever their nature, these algorithms convert independent variables (the signal features) into dependent variables: the device control commands &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref&amp;gt; Wolpaw, J. R., Birbaumer, N., Heetderks, W. J., McFarland, D. J., Peckham, P.H., Schalk, G., Donchin, E., Quatrano, L. A., Robinson, C. J. and Vaughan, T. M. (2000). Brain-Computer Interface Technology: A Review of the First International Meeting. IEEE Transactions on Rehabilitation Engineering, 8(2): 164-173&amp;lt;/ref&amp;gt;.&lt;br /&gt;
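&lt;br /&gt;
A linear translation step of the kind described can be sketched in a few lines (illustrative Python only; the weights and intercept are arbitrary values for the example, whereas in practice they would be fitted to the user and adapted over time):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Arbitrary illustrative weights and intercept for three input features.&lt;br /&gt;
weights = np.array([0.8, -0.5, 0.1])&lt;br /&gt;
intercept = -0.2&lt;br /&gt;
&lt;br /&gt;
def translate(features):&lt;br /&gt;
    # independent variables (signal features) in, dependent variable&lt;br /&gt;
    # (a device command, e.g. a one-dimensional cursor step) out&lt;br /&gt;
    return float(features @ weights + intercept)&lt;br /&gt;
&lt;br /&gt;
print(translate(np.array([1.2, 0.9, 1.5])))   # three band-power features&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;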
&lt;br /&gt;
Finally, the commands produced by the feature translation algorithm are the output of the BCI. They are sent to the application to produce a result such as selecting a letter, controlling a cursor, operating a robotic arm, or moving a wheelchair. The operation of the device provides feedback to the user, closing the control loop &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==BCI signals==&lt;br /&gt;
&lt;br /&gt;
As mentioned above, brain signals acquired by different methods can be used as BCI inputs. But not all signals are alike: they can differ substantially in topographical resolution, frequency content, area of origin, and technical requirements. Their resolution ranges from EEG, with centimeter-scale resolution, through the electrocorticogram (ECoG), with millimeter resolution, to neuronal action potentials, with tens-of-microns resolution. The main issue when considering signals for BCI use is which signals can best indicate the user’s intent &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Sensorimotor rhythms were first used for cursor control by Wolpaw et al. (1991). These are EEG rhythms that vary with movement or the imagination of movement; they are spontaneous and do not require specific stimuli to occur &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. The P300 is an endogenous event-related potential component of the EEG &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;: a positive potential occurring around 300 msec after an event that has significance for the user. BCIs based on the P300 do not depend on muscle control such as eye movement, since the P300 reflects attention rather than simply gaze direction. Work with both sensorimotor rhythms and the P300 has demonstrated that noninvasively acquired brain signals can be used for communication and the control of devices &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
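A common way to expose the P300 in ongoing EEG is to average stimulus-locked epochs, since the background activity cancels while the event-related potential remains. The sketch below is a minimal illustration of this idea; the sampling rate, epoch length, and 250-450 msec measurement window are assumptions for the example.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Average stimulus-locked epochs of one EEG channel and return the mean&lt;br /&gt;
# amplitude 250-450 msec after stimulus onset, where the P300 is expected.&lt;br /&gt;
# eeg: 1-D array of samples; events: sample indices of stimulus onsets&lt;br /&gt;
# (assumed to leave room for a full 600 msec epoch); fs: sampling rate in Hz.&lt;br /&gt;
def p300_amplitude(eeg, events, fs):&lt;br /&gt;
    n = int(0.6 * fs)                                 # 600 msec epochs&lt;br /&gt;
    epochs = np.stack([eeg[e:e + n] for e in events])&lt;br /&gt;
    erp = epochs.mean(axis=0)                         # background EEG averages out&lt;br /&gt;
    lo, hi = int(0.25 * fs), int(0.45 * fs)&lt;br /&gt;
    return erp[lo:hi].mean()                          # larger for attended target events&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;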
Other possibilities for BCI signals have been explored, such as single-neuron activity acquired by microelectrodes implanted in the cortex. This approach has been tested in humans, but mainly in non-human primates. Another set of studies demonstrated that recording electrocorticographic (ECoG) activity from the surface of the brain is also a viable method for producing signals for a BCI system. Both lines of research demonstrate the viability of invasive methods for gathering brain signals useful for BCIs. However, there are also concerns regarding their suitability and reliability for long-term use in humans &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Besides electrophysiological measures, other types of signals can be useful: magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), and functional near-infrared (fNIR) systems. At present, the technology for recording MEG and fMRI is still expensive and bulky, making it unlikely that they will be used for practical BCI applications in the near future. fNIR can be cheaper and more compact, but since it is based on changes in cerebral blood flow (like fMRI), which is a slow response, its responsiveness is limited when applied to a BCI system. In conclusion, electrophysiological features are currently the most practical signals for BCI technology &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Invasive and noninvasive techniques for acquiring signals&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Brain signals acquired by invasive methods are mainly obtained by electrophysiological recording from electrodes implanted neurosurgically inside the brain or over its surface. The preferred site for implanting electrodes has been the motor cortex, due to its accessibility and its large pyramidal cells, which produce measurable signals generated by actual or imagined motor movements &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The advantage of invasive techniques is their high spatial and temporal resolution, since individual neurons can be recorded at very high sampling rates. Signals recorded intracranially carry more information and allow quicker responses. This, in turn, may reduce the training and attention required of the user compared to noninvasive methods. However, some issues with invasive methods need to be taken into account. First is the long-term stability and reliability of the signal over the days and years during which a person is expected to use the implanted device; the user must be able to generate the control signal consistently and reliably without frequent retuning &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Second is the quality of the signal over long time periods: the brain tissue around the implanted device reacts to electrode insertion (figure 2), a reaction that includes not only damage to the local tissue but also irritation at the electrode-tissue interface &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. The third issue arises when the device includes a neuroprosthesis that requires a stimulus to activate the disabled limb. This additional stimulus can also have a significant effect on neural circuits and may interfere with the signal of interest; BCI systems must accurately detect and remove such artifacts &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:2.jpg|thumb|Figure 2. Acute (a) and chronic tissue (b) responses after device insertion. (Image taken from He et al., 2013)]]&lt;br /&gt;
&lt;br /&gt;
Success with invasive techniques applied to humans has been limited, although there has not been much experimentation with human subjects. Improving the suitability of invasive methods requires further advances in microelectrodes in order to obtain stable recordings over the long term. Widespread use of invasive techniques in humans would also require research to decrease the number of cells that must be recorded simultaneously to obtain a useful signal, and to provide feedback to the nervous system via electrical stimulation through the electrodes &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In contrast to invasive techniques, noninvasive methods reduce the risk for users since neither surgery nor permanent attachment to the device is required &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Several techniques in this category have been used to measure brain activity noninvasively, such as computerized tomography (CT), positron emission tomography (PET), single-photon emission computed tomography (SPECT), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), and electroencephalography (EEG) &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is the most prevalent method of signal acquisition for BCIs, with a high temporal resolution capable of measuring changes in brain activity that occur within a few msec. Although the spatial resolution of EEG does not match that of implanted methods, signals from up to 256 electrode sites can be measured at the same time &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. EEG is practical in a laboratory setup (figure 3) or in a real-world setting; it is portable, inexpensive, and has a vast literature of past performance &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:3.jpg|thumb|Figure 3. Example of a simple BCI setup. (Image taken from McFarland and Wolpaw, 2011)]]&lt;br /&gt;
&lt;br /&gt;
==Applications==&lt;br /&gt;
&lt;br /&gt;
There are a number of disorders that disrupt the neuromuscular pathways through which the brain communicates with and controls its external environment. Disorders like amyotrophic lateral sclerosis (ALS), brainstem stroke, brain or spinal cord injury, cerebral palsy, muscular dystrophies, multiple sclerosis, and others undermine the capacity of the neural pathways that control muscles or impair the muscles themselves &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One option for restoring function to people with motor impairments is to provide the brain with a non-muscular communication and control channel. A BCI can therefore convey messages and commands to the external world, and the potential of these systems for helping disabled people is obvious &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;. He et al. (2013) note that “a BCI output could replace natural output that has been lost to injury or disease. Thus, someone who cannot speak could use a BCI to spell words that are then spoken by a speech synthesizer. Or one who has lost limb control could use a BCI to operate a powered wheelchair &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.”&lt;br /&gt;
&lt;br /&gt;
A BCI output could also enhance natural CNS output, for example as a way to prevent the loss of attention when someone is engaged in a task that requires constant focus. A BCI could detect the brain activity that precedes a lapse in attention and create an output (a sound, for example) that alerts the person. It could also supplement natural CNS output, as when a person uses a BCI to control a third, robotic arm, or to select items while controlling the position of a cursor. In these cases, the BCI supplements the natural neuromuscular output with an additional, artificial output. Finally, BCI output could improve natural CNS output. For example, a person whose arm movements are compromised by sensorimotor cortex damage from a stroke could use a BCI system that measures signals from the damaged areas and then excites muscles or controls an orthosis to improve arm movement &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==See Also==&lt;br /&gt;
&#039;&#039;&#039;[[OpenBCI]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Neurable]] - Building BCI for [[VR]] and [[AR]]&lt;br /&gt;
&lt;br /&gt;
[[Neuralink]] - [[Elon Musk]]&#039;s company to develop [[implantable]] [[brain–computer interface]]s&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=VR_advertising&amp;diff=24866</id>
		<title>VR advertising</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=VR_advertising&amp;diff=24866"/>
		<updated>2017-12-12T17:06:54Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
[[Virtual reality]] advertising is a form of marketing communication applied to virtual reality technologies. As of 2017, this form of advertising is still in its early stages, with different companies experimenting with new strategies for bringing marketing content to virtual reality (VR) because of the potential the technology holds for advertising.&lt;br /&gt;
&lt;br /&gt;
While there is still not much information regarding the efficacy of VR advertising, there is general acknowledgment of the immersive potential of VR ads and their impact on consumers. Results presented in a study by Ericsson ConsumerLab show that e-commerce - mainly being able to see items in real size and form when shopping - is one of the reasons consumers are interested in VR. &amp;lt;ref name=”1”&amp;gt;Johnson, T. (2017). What can we expect from virtual reality advertising. Retrieved from http://www.cpcstrategy.com/blog/2017/07/virtual-reality-advertising/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some companies have already been using virtual reality technologies for VR advertising and offering VR experiences to users. Cadillac, for example, offers virtual dealerships and Mercedes provides a VR experience for its SL model. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ability of virtual reality to provide an [[immersive]] experience - creating an emotional connection with users, entertaining them, or sharing a message or vision - is a powerful marketing tool, enticing companies to invest in this new medium. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, skeptics of VR advertising say that the biggest challenge will be to increase the consumers’ uptake of VR technology. Nevertheless, those who predict that VR will become ubiquitous are already positioning themselves to drive advertising strategies for the digital landscape of the future. &amp;lt;ref name=”2”&amp;gt;D’Angelo, M. (2017). How virtual reality is impacting the ad industry. Retrieved from https://www.business.com/articles/virtual-reality-advertising-augmented/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Virtual reality advertising==&lt;br /&gt;
Modern digital marketing techniques are considered intrusive, manipulative, and misleading, with banner ads, pre-roll videos, and scroll-throughs increasing user frustration. Indeed, the use of ad blockers grew 30% in 2016, which has negatively affected the advertising industry. Some analysts suggest that current digital marketing strategies will not survive the next decade, with the advent of virtual reality and [[augmented reality]] (AR) also contributing to this outcome. &amp;lt;ref name=”3”&amp;gt;Damiani, J. (2017). VR and AR will be the death of pop-up ads and pre-roll videos. Retrieved from https://qz.com/1089554/virtual-reality-and-augmented-reality-are-the-future-of-digital-advertising/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another factor that adversely affects the current digital ad model is that it repels Generations Y and Z, its target audience. These two generations exhibit different traits, but they also have things in common, such as valuing community, conversation, and authenticity, and disliking undeserved impositions on their time and attention. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The decline of current digital marketing strategies means that new approaches like VR advertising campaigns will become more relevant as companies invest in more immersive advertising designed for these new digital infrastructures, which are expected to become more popular with the general audience. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Surge of interest in virtual reality===&lt;br /&gt;
While some analysts still question whether VR will achieve widespread adoption, it is nevertheless a technology that has left the realm of science fiction and entered reality. A contributing factor to the rising awareness of VR is the ubiquity and quality of mobile devices that allow smartphones to be turned into VR [[head-mounted display|head-mounted displays]] (HMDs). This allows people to get their first experiences of virtual reality without needing specific equipment. Indeed, global search interest for VR on [[Google]] has increased. &amp;lt;ref name=”4”&amp;gt;Luber, A. (2016). What virtual reality will mean for advertising. Retrieved from https://www.thinkwithgoogle.com/marketing-resources/virtual-reality-advertising/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since any medium can become an advertising medium, the marketing industry is taking note of this surge of interest in VR and investigating the potential of VR advertising. &amp;lt;ref name=”5”&amp;gt;Pathak, S. (2017). Virtual reality ads are still more hype than reality. Retrieved from https://digiday.com/marketing/virtual-reality-ads-still-hype-reality/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===The potential of VR advertising===&lt;br /&gt;
An enticing characteristic of VR for marketing purposes is that it permits companies to connect with customers on an experiential level. Brands have experimented with 360-degree virtual reality videos, immersion-style test drives, and brand-related product experiences. For example, BMW used VR video technology to create an ad featuring a 360-degree car race, and AT&amp;amp;T simulated a car crash to warn against the dangers of driving while using a phone. &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
VR and 360-degree video are both compelling tools for creating empathy and a greater sense of immersion that can increase the impact of the messages conveyed. However, a deeper level of interaction can be achieved with a true VR experience, something that even a 360-degree video cannot provide, since there the user is merely an observer. Unity, a VR development company, has experimented with creating specific VR experiences as a form of advertising. It has launched ‘Virtual Room,’ a sort of ad network that allows brands to place ads across [[VR apps]], and partnered with Lionsgate to create a VR experience for the studio’s movie ‘Jigsaw.’ In the experience, players interact with objects and try to figure out how to avoid being killed. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Agatha Bochenek, Unity’s head of VR/AR and mobile ad sales, referring to their VR ad for ‘Jigsaw’ said that what they are trying to play with “is doubling down on the things that can be entertainment. So, Jigsaw being a great example—it’s [not just] an ad: It’s a piece of entertainment in-and-of itself. The ad shouldn’t be boring. It shouldn’t just throw, ‘Buy Tickets’ in your face the whole time; it should make you feel what the movie feels like.” Another member of Unity, Julie Shumaker, Vice-president of advertiser solutions, explained that the company likes “to talk about the medium of VR advertising as a responsive storytelling ad. Instead of sitting and passively seeing a display or watching a video for a few seconds, this is a completely immersive and interactive experience, and we&#039;re able to value [things like] how does the user actually touch the ad unit itself and, ultimately, how much time did they spend with the brand.&amp;quot; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Virtual rooms as VR advertising campaigns differ from other virtual reality advertising, such as 360-degree immersive video, in that the user can interact with the content while inside the VR experience. This means that companies can create unique VR sandbox applications, telling interactive stories that can engage and build a relationship with the customer. An evaluation of users’ responses to Jigsaw’s interactive VR content found that they experienced elevated heart rates, sweating, and muscle activation associated with smiling when compared to those who only watched the trailer in VR. This means that the interactive aspect of the VR ad contributed to an increase in the emotional and physical response of the users. &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Selling ad space in VR apps===&lt;br /&gt;
While virtual rooms are an impressive development in virtual reality advertising, they are limited to big companies that can afford to create a full VR experience - at least for the moment.&lt;br /&gt;
&lt;br /&gt;
Another option for VR advertising could be placing VR ads within games or other types of VR content. Some have suggested that advertising space could be sold and charged per impression using gaze-tracking data. This model could provide publishers with the opportunity to create VR content with ads that don’t disturb the user’s experience, blending VR ads within the environment. &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Google VR advertising===&lt;br /&gt;
Google is also working on VR ads. In June 2017, the company announced that it had begun experimenting with advertising formats suitable for VR experiences. The program is run by a team at Area 120, Google’s internal workshop for experimental ideas, as a response to developers who are looking to generate revenue to fund their VR applications. &amp;lt;ref name=”6”&amp;gt;Google Developers (2017). Experimenting with VR ad formats at Area 120. Retrieved from https://developers.googleblog.com/2017/06/experimenting-with-vr-ad-formats-at.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;Brennan, D. (2017). Google begins experimenting with VR ads. Retrieved from https://www.roadtovr.com/google-begins-experimenting-vr-ads&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The first idea shared by the company for a potential VR advertising format is to present a small floating cube to users, who can then choose whether to engage with it. Tapping the cube or gazing at it for a few seconds opens a video player where the user can watch the ad. According to Google, the company wants to create useful and non-intrusive solutions that avoid disrupting the user and the application. The company also intends to focus on other key principles, such as VR ad formats being easy for developers to implement, native to VR, and flexible enough to customize. &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
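The gaze-to-activate behaviour described above can be approximated with a simple dwell timer, sketched below. This is a generic illustration rather than Google’s actual implementation; the class name, two-second threshold, and frame-by-frame gaze test are all assumptions.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import time&lt;br /&gt;
&lt;br /&gt;
# Fires a callback once gaze has rested on the ad cube for a dwell&lt;br /&gt;
# threshold (two seconds here, a hypothetical value).&lt;br /&gt;
class DwellActivator:&lt;br /&gt;
    def __init__(self, on_activate, threshold=2.0):&lt;br /&gt;
        self.on_activate = on_activate&lt;br /&gt;
        self.threshold = threshold&lt;br /&gt;
        self.gaze_start = None&lt;br /&gt;
&lt;br /&gt;
    # Call once per frame with the boolean result of a gaze-ray hit test.&lt;br /&gt;
    def update(self, gazing_at_cube):&lt;br /&gt;
        if not gazing_at_cube:&lt;br /&gt;
            self.gaze_start = None           # gaze left the cube: reset&lt;br /&gt;
            return&lt;br /&gt;
        now = time.monotonic()&lt;br /&gt;
        if self.gaze_start is None:&lt;br /&gt;
            self.gaze_start = now            # gaze just landed on the cube&lt;br /&gt;
        elif now - self.gaze_start &amp;gt;= self.threshold:&lt;br /&gt;
            self.on_activate()               # e.g. open the video player for the ad&lt;br /&gt;
            self.gaze_start = None&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A VR app would call update() once per frame with the result of a gaze-ray intersection test against the cube; a tap would simply invoke the callback directly.&lt;br /&gt;
&lt;br /&gt;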
This is Google’s first venture into virtual reality advertising, and it leverages existing ad formats like flat video, so no additional ad budget is needed to create a new format. The company plans to test its VR advertising format on the [[Google Cardboard]], [[Daydream]], and [[Gear VR]] platforms. &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Besides Google, other companies, such as Team One, an ad agency in California, have started researching virtual reality advertising techniques. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Considerations before investing in VR technology==&lt;br /&gt;
Aaron Luber, writing for Think with Google, proposed four questions that brands should consider before investing in VR technology. The following is reproduced from his article, ‘What virtual reality will mean for advertising.’ &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- &#039;&#039;Will VR give viewers an experience that they otherwise couldn&#039;t have? The subject matter should truly take advantage of the medium—transport people to a place, immerse them in a world, and compel them to explore.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- &#039;&#039;Could virtual reality ads give shoppers a better feel for your product? According to a study from Ericsson ConsumerLab, shopping was the top reason worldwide smartphone users were interested in VR, with &amp;quot;seeing items in real size and form when shopping online&amp;quot; cited by 64% of respondents. This doesn&#039;t just apply to retail brands. Cadillac is already using VR to create virtual dealerships.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- &#039;&#039;Will your recording environment be rich with things to see? If you&#039;re shooting in a simple white room with nothing on the walls, probably not. If you&#039;re at a sports event or a music festival, there&#039;s likely plenty to see.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
- &#039;&#039;Will viewers want to continue watching beyond the initial &amp;quot;That&#039;s cool&amp;quot; moment? It can be a challenge to get viewers to stick around after a minute or so. Make sure you have a compelling hook that will keep them engaged.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Problems with virtual reality advertising==&lt;br /&gt;
Although there is great potential in VR advertising, there are still problems that need to be addressed. Firstly, VR ads require specialized knowledge or specialized outside vendors; secondly, the VR ad campaigns need further research regarding their success and impact on consumers; and finally, many advertisers do not know how best to use the technology. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The cost of producing a VR ad is another difficulty. For a high-quality VR experience, a brand might need to spend $500,000 while a 360-degree video might only cost between $10,000 and $100,000. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The final problem with investing in virtual reality advertising relates to the adoption of VR technology. While interest in VR has increased, a large percentage of adults in the U.S. have not heard of VR headsets, an indication that widespread adoption is still some years off. However, some projections point to 154 million mobile VR users by 2020. With improvements in VR technology and mass adoption by the general public, brands will inevitably invest in the VR advertising market. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Ray_tracing&amp;diff=24857</id>
		<title>Ray tracing</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Ray_tracing&amp;diff=24857"/>
		<updated>2017-12-04T11:41:20Z</updated>

		<summary type="html">&lt;p&gt;Paulo Pacheco: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
Ray tracing is a technique for rendering three-dimensional images with complex light interactions by tracing a path of light through pixels on an image plane. This technique can create graphics of mirrors, transparent surfaces, and shadows with very good results. &amp;lt;ref name=”1”&amp;gt;Rademacher, P. Ray tracing: Graphics for the masses. Retrieved from https://www.cs.unc.edu/~rademach/xroads-RT/RTarticle.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;WhatIs. Ray tracing (raytracing, ray-tracing or ray casting). Retrieved from http://whatis.techtarget.com/definition/ray-tracing-raytracing-ray-tracing-or-ray-casting&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To achieve a sense of realism, it is necessary to use a renderer that can simulate the light interactions occurring in the scene. These interactions can be reflection, refraction, absorption, etc., and simulating them requires full knowledge of the scene when processing each individual pixel. A common rendering technique - the real-time rasterised renderer - does not really support such computations. &amp;lt;ref name=”3”&amp;gt;Einig, M. (2017). How ray tracing is bringing disruption to the graphics market – and impacting VR. Retrieved from https://www.virtualreality-news.net/news/2017/mar/17/how-ray-tracing-bringing-disruption-graphics-market-and-impacting-vr/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using ray tracing, rays are sent out into the scene to explore the surroundings when rendering a pixel. If a ray toward the light is interrupted by some piece of geometry, the point is in shadow; if a ray is used to find the color of another object seen from a surface, a reflection appears. This allows for graphical effects that are not possible with traditional renderers. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
One of the downsides of ray tracing is that it requires a high amount of processing capability since firing rays into a scene to find their intersection with the scene geometry is complex and computationally intensive. &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==The ray tracing technique==&lt;br /&gt;
[[File:Ray tracing lighting no shadow.gif|thumb|Figure 1. Primary ray and shadow ray. (Image: scratchapixel.com)]]&lt;br /&gt;
[[File:Ray tracing lighting shadow.gif|thumb|Figure 2. Shadow ray intersects another object. (Image: scratchapixel.com)]]&lt;br /&gt;
&lt;br /&gt;
Some things have to be taken into account when trying to simulate a light-object interaction in a computer-generated image: without light, a person cannot see anything; without objects in the environment, light cannot be seen; and of the total number of rays reflected by an object, only a few will reach the surface of the eye. &amp;lt;ref name=”4”&amp;gt;Scratchapixel. Introduction to ray tracing: a simple method for creating 3D images. Retrieved from https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/raytracing-algorithm-in-a-nutshell&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the case of computer graphics, the eyes are replaced with an image plane composed of pixels. Photons emitted by the light source hit pixels on the image plane, increasing their brightness values. Repeating this process until all pixels are adjusted leads to the creation of a computer-generated image. This technique is called forward ray tracing because the path of the photon from the light source to the observer is followed. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This technique has a problem: not all of the reflected photons intersect the surface of the eye. In fact, they are reflected in every possible direction, so each has only a small probability of actually hitting the eye. This means that it would be necessary to simulate a vast number of photons coming from the light source and interacting with the objects in a scene, which is not a practical solution. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The main difficulty is not in creating a large number of photons from the light source, but in finding all of their intersections within the scene, which would be computationally costly. While it is technically possible to simulate the way light travels in nature, it is not the most efficient or practical technique. According to Turner Whitted, who wrote an influential paper called ‘An Improved Illumination Model for Shaded Display,’ “In an obvious approach to ray tracing, light rays emanating from a source are traced through their paths until they strike the viewer. Since only a few will reach the viewer, this approach is wasteful.” &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
An alternative to forward ray tracing is backward tracing. In this case, instead of tracing rays from the light source to the receptor, the rays are traced backwards from the receptor to the objects. This is a convenient solution to the problem presented by the forward ray tracing technique. Since simulations cannot be as fast and perfect as nature, a compromise is made and a ray is traced from the receptor into the scene (called the primary ray, visibility ray, or camera ray). If this ray hits an object, another ray can then be sent from the hit point to the light source in order to find out how much light it receives. This second ray is called a light or shadow ray. When this ray is obstructed by another object, it means that the original hit point is in shadow. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It should be noted that some authors use the terms ‘forward tracing’ and ‘backward tracing’ with inverted meanings: in this case forward tracing would mean to trace the rays from the receptor to the objects and backward tracing to trace them from the light source to the receptor. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, a ray tracing algorithm takes an image made of pixels and, for each pixel, shoots a primary ray into the scene. After the primary ray’s direction is set, the objects of the scene are checked to see whether the ray intersects any of them. The primary ray may intersect more than one object; when this happens, the object whose intersection point is closest to the eye is selected. After this, a shadow ray is shot from the intersection to the light source (Figure 1). If this ray does not intersect another object, the hit point is illuminated. If it does, that object casts a shadow on it (Figure 2). &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Repeating this operation for all pixels, a two-dimensional representation of a three-dimensional scene is obtained. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
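The loop just described can be sketched in a few lines. The following is a minimal illustration, assuming a scene made only of spheres and a single point light; hit_sphere and trace_pixel are illustrative names, and the sketch only reports whether the nearest hit point of a primary ray is lit or shadowed, leaving shading and color aside.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
BACKGROUND, SHADOW, LIT = 0, 1, 2&lt;br /&gt;
&lt;br /&gt;
# Return the nearest positive ray parameter t, or None if the sphere is missed.&lt;br /&gt;
# The direction is assumed to be unit length, so the quadratic has a = 1.&lt;br /&gt;
def hit_sphere(origin, direction, center, radius):&lt;br /&gt;
    oc = origin - center&lt;br /&gt;
    b = 2.0 * np.dot(oc, direction)&lt;br /&gt;
    c = np.dot(oc, oc) - radius * radius&lt;br /&gt;
    disc = b * b - 4.0 * c&lt;br /&gt;
    if disc &amp;lt; 0:&lt;br /&gt;
        return None&lt;br /&gt;
    t = (-b - np.sqrt(disc)) / 2.0&lt;br /&gt;
    return t if t &amp;gt; 1e-6 else None      # small epsilon avoids self-intersection&lt;br /&gt;
&lt;br /&gt;
# Shoot one primary ray; find the closest hit, then cast a shadow ray.&lt;br /&gt;
def trace_pixel(origin, direction, spheres, light):&lt;br /&gt;
    closest = None&lt;br /&gt;
    for center, radius in spheres:&lt;br /&gt;
        t = hit_sphere(origin, direction, center, radius)&lt;br /&gt;
        if t is not None and (closest is None or t &amp;lt; closest[0]):&lt;br /&gt;
            closest = (t, center)            # keep the hit nearest the eye&lt;br /&gt;
    if closest is None:&lt;br /&gt;
        return BACKGROUND&lt;br /&gt;
    point = origin + closest[0] * direction&lt;br /&gt;
    to_light = light - point&lt;br /&gt;
    dist = np.linalg.norm(to_light)&lt;br /&gt;
    shadow_dir = to_light / dist             # shadow ray toward the light source&lt;br /&gt;
    for center, radius in spheres:&lt;br /&gt;
        t = hit_sphere(point, shadow_dir, center, radius)&lt;br /&gt;
        if t is not None and t &amp;lt; dist:&lt;br /&gt;
            return SHADOW                    # blocked: hit point in shadow (Figure 2)&lt;br /&gt;
    return LIT                               # unobstructed: hit point illuminated (Figure 1)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;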
==Some characteristics of ray tracing==&lt;br /&gt;
One of the advantages of ray tracing is that it takes only a few lines of code and little effort to implement, unlike other algorithms such as a scanline renderer. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Ray tracing was first described by Arthur Appel in a 1968 paper entitled ‘Some Techniques for Shading Machine Renderings of Solids’. Although it is a valuable algorithm, the main reason it has not replaced all other rendering algorithms is that it is very time-consuming, taking a long time to find the intersections between rays and geometry. Historically, this has been the major drawback of ray tracing, but it has become less of a problem as computers get faster. However, compared to other techniques (e.g. the z-buffer algorithm), ray tracing is still slower. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Extending the idea of ray propagation, it is easy to simulate effects like reflection and refraction. These are essential when simulating glass materials or mirror surfaces. Turner Whitted described how to extend Appel’s ray tracing algorithm for more advanced rendering in his 1979 paper, ‘An Improved Illumination Model for Shaded Display’. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
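For reference, the reflection and refraction directions used in Whitted-style tracing can be computed as below; this is a sketch assuming a unit incident direction d and a unit surface normal n facing the incoming ray.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Mirror-reflect unit direction d about unit surface normal n.&lt;br /&gt;
def reflect(d, n):&lt;br /&gt;
    return d - 2.0 * np.dot(d, n) * n&lt;br /&gt;
&lt;br /&gt;
# Refract unit direction d at a surface with unit normal n facing the ray,&lt;br /&gt;
# following the Snell relation; eta is the ratio of refractive indices n1 / n2.&lt;br /&gt;
# Returns None on total internal reflection.&lt;br /&gt;
def refract(d, n, eta):&lt;br /&gt;
    cos_i = -np.dot(d, n)&lt;br /&gt;
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)&lt;br /&gt;
    if sin2_t &amp;gt; 1.0:&lt;br /&gt;
        return None                          # total internal reflection&lt;br /&gt;
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;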
==Ray tracing and virtual reality==&lt;br /&gt;
According to Einig (2017), in virtual reality, ray tracing makes it possible to “counter the lens distortion at the very first stage of the rendering process, instead of moving and stretching some pixels at the end of the render like in rasterisers. Even better, the amount of rays sent per pixel can vary depending on the pixel position in the frame, which means that it is trivial to implement foveated rendering, which tracks the eye and only draws the highest detail images where you are looking, and add precision where it matters.” &amp;lt;ref name=”5”&amp;gt;Estes, G. (2016). New VR and ray tracing tools for developers. Retrieved from 	https://blogs.nvidia.com/blog/2016/07/25/nvidia-sdk-updates/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On the technological side, in 2016, NVIDIA announced new SDKs and updates for NVIDIA DesignWorks and NVIDIA VRWorks that improve the capabilities for interactive ray tracing. With the update, it is easier to create VR scenes and panoramas in its physically based ray tracing software: it is a matter of selecting a 360-degree camera from the list provided, and a scene can be viewed as a fully ray traced VR experience in a single step. &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NVIDIA has also updated their OptiX ray tracing engine “to include support for NVIDIA NVLink and Pascal GPUs including the powerful new DGX-1 appliance with 8 high-performance NVIDIA GPUs per node. This allows the visualization of scenes as large as 64GB in size – never before possible using GPU rendering. OptiX is used in commercial applications such as Adobe After Effects, as well as in-house tools at studios like PIXAR.“ &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;/div&gt;</summary>
		<author><name>Paulo Pacheco</name></author>
	</entry>
</feed>