<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://vrarwiki.com/index.php?action=history&amp;feed=atom&amp;title=Depth_cue</id>
	<title>Depth cue - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://vrarwiki.com/index.php?action=history&amp;feed=atom&amp;title=Depth_cue"/>
	<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;action=history"/>
	<updated>2026-04-13T20:17:40Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34814&amp;oldid=prev</id>
		<title>Xinreality: Undo revision 34813 by Xinreality (talk)</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34814&amp;oldid=prev"/>
		<updated>2025-05-01T16:02:17Z</updated>

		<summary type="html">&lt;p&gt;Undo revision &lt;a href=&quot;/wiki/Special:Diff/34813&quot; title=&quot;Special:Diff/34813&quot;&gt;34813&lt;/a&gt; by &lt;a href=&quot;/wiki/Special:Contributions/Xinreality&quot; title=&quot;Special:Contributions/Xinreality&quot;&gt;Xinreality&lt;/a&gt; (&lt;a href=&quot;/wiki/User_talk:Xinreality&quot; title=&quot;User talk:Xinreality&quot;&gt;talk&lt;/a&gt;)&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 16:02, 1 May 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Depth cue]] is any of a variety of perceptual signals that allow the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;knowledge-base&lt;/del&gt;/virtual-worlds/EVE/III.A.1.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;b&lt;/del&gt;.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;VisualDepthCues&lt;/del&gt;.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Depth cue]] is any of a variety of perceptual signals that allow the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;knowledge_base&lt;/ins&gt;/virtual-worlds/EVE/III.A.1.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;c&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;DepthCues&lt;/ins&gt;.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l82&quot;&gt;Line 82:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 82:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2024&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;01&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;29&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;understanding&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;in&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ar&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vr&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;headsets&lt;/del&gt;/&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;blog&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;exploring&lt;/del&gt;-the-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;focal&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;distance&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;in-vr-headsets&lt;/del&gt;&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2022&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;06&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;22&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;5&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ways&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;to&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;address&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ars&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict&lt;/ins&gt;/&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;research&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;virtual-reality&lt;/ins&gt;-the-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict/&lt;/ins&gt;&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l101&quot;&gt;Line 101:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 101:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (for example using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (for example using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800 &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://www.computationalimaging.org/publications/accommodation-invariant-near-eye-displays-siggraph-2017/&lt;/ins&gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34813&amp;oldid=prev</id>
		<title>Xinreality at 16:00, 1 May 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34813&amp;oldid=prev"/>
		<updated>2025-05-01T16:00:49Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 16:00, 1 May 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Depth cue]] is any of a variety of perceptual signals that allow the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;knowledge_base&lt;/del&gt;/virtual-worlds/EVE/III.A.1.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;c&lt;/del&gt;.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;DepthCues&lt;/del&gt;.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Depth cue]] is any of a variety of perceptual signals that allow the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;knowledge-base&lt;/ins&gt;/virtual-worlds/EVE/III.A.1.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;b&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;VisualDepthCues&lt;/ins&gt;.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l82&quot;&gt;Line 82:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 82:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2022&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;06&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;22&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;5&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ways&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;to&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;address&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ars&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict/&lt;/del&gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;research&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;virtual&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;reality&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;the&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict/&lt;/del&gt;&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2024&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;01&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;29&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;understanding&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;in&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ar&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vr&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;headsets&lt;/ins&gt;/&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;blog&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;exploring-the&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;focal&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;distance&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;in&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vr&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;headsets&lt;/ins&gt;&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l101&quot;&gt;Line 101:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 101:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (for example using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800 &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://www.computationalimaging.org/publications/accommodation-invariant-near-eye-displays-siggraph-2017/&lt;/del&gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (for example using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34812&amp;oldid=prev</id>
		<title>Xinreality at 16:00, 1 May 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34812&amp;oldid=prev"/>
		<updated>2025-05-01T16:00:23Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 16:00, 1 May 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Depth cue]] is any of a variety of perceptual signals that allow the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;knowledge-base&lt;/del&gt;/virtual-worlds/EVE/III.A.1.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;b&lt;/del&gt;.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;VisualDepthCues&lt;/del&gt;.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Depth cue]] is any of a variety of perceptual signals that allow the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;knowledge_base&lt;/ins&gt;/virtual-worlds/EVE/III.A.1.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;c&lt;/ins&gt;.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;DepthCues&lt;/ins&gt;.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l82&quot;&gt;Line 82:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 82:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2024&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;01&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;29&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;understanding&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;in&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ar&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vr&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;headsets&lt;/del&gt;/&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;blog&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;exploring&lt;/del&gt;-the-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;focal&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;distance&lt;/del&gt;-&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;in-vr-headsets&lt;/del&gt;&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;2022&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;06&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;22&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;5&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ways&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;to&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;address&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;ars&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict&lt;/ins&gt;/&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;research&lt;/ins&gt;/&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;virtual-reality&lt;/ins&gt;-the-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;vergence&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;accommodation&lt;/ins&gt;-&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;conflict/&lt;/ins&gt;&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (for example 0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
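The magnitude of this conflict is easy to quantify: vergence is fixed by the triangle the two eyes form with the fixation point, while accommodation demand is the reciprocal of the focal distance in meters (diopters). A minimal sketch, assuming a typical 63 mm interpupillary distance (the helper names are ours, not from the cited sources):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Total convergence angle of the two eyes when fixating a point
    straight ahead at distance_m (ipd_m = interpupillary distance)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def vac_diopters(simulated_m, focal_plane_m):
    """Conflict magnitude: dioptric gap between where vergence points
    and where the display forces accommodation."""
    return abs(1.0 / simulated_m - 1.0 / focal_plane_m)

# Virtual object rendered at 0.5 m on a headset with a 2 m focal plane:
print(round(vergence_angle_deg(0.5), 1))  # 7.2 degrees of convergence
print(round(vergence_angle_deg(2.0), 1))  # 1.8 degrees matches the optics
print(vac_diopters(0.5, 2.0))             # 1.5 D of conflict
```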
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l101&quot;&gt;Line 101:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 101:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (for example using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (for example using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800 &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;https://www.computationalimaging.org/publications/accommodation-invariant-near-eye-displays-siggraph-2017/&lt;/ins&gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
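In practice such a system amounts to a small control loop around the eye tracker. The sketch below is hypothetical: `tracker` and `lens` stand in for device interfaces and are assumptions, not calls from any real SDK; the computational depth-of-field blur mentioned above would be rendered separately from the same gaze estimate.

```python
import math

class VarifocalController:
    """Hypothetical varifocal loop: eye-tracked vergence -> estimated
    gaze depth -> tunable-lens power, low-pass filtered so the optic
    does not chatter on eye-tracking noise."""

    def __init__(self, tracker, lens, ipd_m=0.063, smoothing=0.2):
        self.tracker, self.lens = tracker, lens
        self.ipd_m, self.smoothing = ipd_m, smoothing
        self.power_d = 0.5  # start at a 2 m focal plane (0.5 diopters)

    def gaze_depth_m(self):
        # Each eye rotates inward by the toe-in angle when fixating a
        # point on the midline; invert that triangle to get depth.
        toe_in = max(self.tracker.toe_in_rad(), 1e-4)  # assumed tracker call
        return (self.ipd_m / 2) / math.tan(toe_in)

    def step(self):
        target_d = min(1.0 / self.gaze_depth_m(), 10.0)  # clamp at 10 cm
        self.power_d += self.smoothing * (target_d - self.power_d)
        self.lens.set_power(self.power_d)                # assumed lens call
        return 1.0 / self.power_d  # current focal distance in meters
```

The smoothing constant is the main judgment call: too little filtering makes the lens chase saccades, too much makes focus visibly lag the gaze.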
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
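The plane-assignment logic behind such displays reduces to picking the dioptrically nearest plane, and spacing the planes evenly in diopters rather than meters keeps the worst-case residual error constant. A minimal sketch with illustrative distances (not Magic Leap's actual plane values):

```python
def focal_planes_m(near_m=0.5, far_m=4.0, n=2):
    """Focal planes evenly spaced in diopters between near and far."""
    near_d, far_d = 1 / near_m, 1 / far_m
    step = (near_d - far_d) / (n - 1) if n > 1 else 0
    return [1 / (far_d + i * step) for i in range(n)]

def nearest_plane(depth_m, planes_m):
    """Assign content to the plane with the smallest dioptric error;
    this residual error is the quantization the text mentions."""
    return min(planes_m, key=lambda p: abs(1 / p - 1 / depth_m))

planes = focal_planes_m(n=2)       # [4.0, 0.5] meters
print(nearest_plane(1.0, planes))  # 4.0
```

With these two illustrative planes, a 1 m object is assigned to the 4 m plane with a 0.75 D residual, which is exactly the in-between-planes quantization effect described above.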

&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34659&amp;oldid=prev</id>
		<title>Xinreality at 06:46, 29 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34659&amp;oldid=prev"/>
		<updated>2025-04-29T06:46:01Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;https://vrarwiki.com/index.php?title=Depth_cue&amp;amp;diff=34659&amp;amp;oldid=34638&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34638&amp;oldid=prev</id>
		<title>Xinreality: Text replacement - &quot;e.g.,&quot; to &quot;for example&quot;</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34638&amp;oldid=prev"/>
		<updated>2025-04-29T04:22:47Z</updated>

		<summary type="html">&lt;p&gt;Text replacement - &amp;quot;e.g.,&amp;quot; to &amp;quot;for example&amp;quot;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 04:22, 29 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l40&quot;&gt;Line 40:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 40:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====[[Relative Height]] (Elevation in the Visual Field)====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====[[Relative Height]] (Elevation in the Visual Field)====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For objects resting on the same ground plane, those that are higher in the visual field (closer to the horizon line) are typically perceived as being farther away. For objects above the horizon line (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;clouds), those lower in the visual field are perceived as farther. &amp;lt;ref name=&quot;CuttingVishton1995&quot;/&amp;gt; &amp;lt;ref name=&quot;OoiHeight2001&quot;&amp;gt;Ooi, Teng Leng, Bing Wu, and Zijiang J. He. (2001). Distance determined by the angular declination below the horizon. *Nature, 414*(6860), 197-200.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For objects resting on the same ground plane, those that are higher in the visual field (closer to the horizon line) are typically perceived as being farther away. For objects above the horizon line (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;clouds), those lower in the visual field are perceived as farther. &amp;lt;ref name=&quot;CuttingVishton1995&quot;/&amp;gt; &amp;lt;ref name=&quot;OoiHeight2001&quot;&amp;gt;Ooi, Teng Leng, Bing Wu, and Zijiang J. He. (2001). Distance determined by the angular declination below the horizon. *Nature, 414*(6860), 197-200.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
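The Ooi, Wu, and He finding corresponds to simple ground-plane geometry: for an observer with eye height h, an object seen at angular declination α below the horizon lies at distance d = h / tan(α), so greater declination (lower in the visual field) reads as nearer. A quick numerical check:

```python
import math

def distance_from_declination(eye_height_m, declination_deg):
    """Ground-plane distance implied by the angle below the horizon."""
    return eye_height_m / math.tan(math.radians(declination_deg))

# With a 1.6 m eye height, an object 10 degrees below the horizon lies
# about 9.1 m away; at 20 degrees it is only about 4.4 m away.
print(round(distance_from_declination(1.6, 10), 1))  # 9.1
print(round(distance_from_declination(1.6, 20), 1))  # 4.4
```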
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====[[Linear Perspective]]====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====[[Linear Perspective]]====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l82&quot;&gt;Line 82:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 82:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====The [[Vergence-Accommodation Conflict]] (VAC)====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;A major limitation in most current VR/AR displays is the mismatch between vergence and accommodation cues. Most headsets use [[fixed-focus display]]s, meaning the optics present the virtual image at a fixed focal distance (often 1.5-2 meters or optical infinity), regardless of the simulated distance of the virtual object. &amp;lt;ref name=&quot;ARInsiderVAC&quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;WikiVAC&quot;&amp;gt;Vergence-accommodation conflict - Wikipedia. Retrieved April 25, 2025, from https://en.wikipedia.org/wiki/Vergence-accommodation_conflict&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DeliverContactsFocus&quot;&amp;gt;(2024-07-18) Exploring the Focal Distance in VR Headsets - Deliver Contacts. Retrieved April 25, 2025, from https://delivercontacts.com/blog/exploring-the-focal-distance-in-vr-headsets&amp;lt;/ref&amp;gt; While the user&#039;s eyes converge appropriately for the virtual object&#039;s simulated distance (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;0.5 meters), their eyes must maintain focus (accommodate) at the fixed optical distance of the display itself to keep the image sharp. This mismatch between the distance signaled by vergence and the distance signaled by accommodation is known as the &#039;&#039;&#039;[[vergence-accommodation conflict]]&#039;&#039;&#039; (VAC). &amp;lt;ref name=&quot;HoffmanVAC2008&quot;&amp;gt;Hoffman, D. M., Girshick, A. R., Akeley, K., &amp;amp; Banks, M. S. (2008). Vergence–accommodation conflicts hinder visual performance and cause visual fatigue. *Journal of Vision, 8*(3), 33. doi:10.1167/8.3.33&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;FacebookVAC2019&quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;KramidaVAC2016&quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC forces the brain to deal with conflicting depth information, potentially leading to several issues:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l91&quot;&gt;Line 91:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 91:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Reduced realism and immersion&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*Reduced realism and immersion&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC is particularly problematic for interactions requiring sustained focus or high visual fidelity at close distances (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;virtual surgery simulation, detailed object inspection, reading text on near virtual objects). &amp;lt;ref name=&quot;HowardRogers2012&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The VAC is particularly problematic for interactions requiring sustained focus or high visual fidelity at close distances (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;virtual surgery simulation, detailed object inspection, reading text on near virtual objects). &amp;lt;ref name=&quot;HowardRogers2012&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Other Limitations====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;====Other Limitations====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l101&quot;&gt;Line 101:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 101:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To mitigate or eliminate the VAC and provide more accurate depth cues, researchers and companies are actively developing advanced display technologies:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Varifocal Displays]]&#039;&#039;&#039;: These displays dynamically adjust the focal distance of the display optics (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;using physically moving lenses/screens, [[liquid lens]] technology, or [[deformable mirror]] devices) to match the simulated distance of the object the user is currently looking at. &amp;lt;ref name=&quot;KonradVAC2016&quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800&amp;lt;/ref&amp;gt; &amp;lt;ref name=&quot;DunnVarifocal2017&quot;&amp;gt;Dunn, David, et al. (2017). Wide field of view varifocal near-eye display using see-through deformable membrane mirrors. *IEEE transactions on visualization and computer graphics, 23*(4), 1322-1331.&amp;lt;/ref&amp;gt; This typically requires fast and accurate [[eye tracking]] to determine the user&#039;s point of gaze and intended focus depth. Varifocal systems often simulate [[Depth of Field]] effects computationally, blurring parts of the scene not at the current focal distance. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt; Prototypes like Meta Reality Labs&#039; &quot;Half Dome&quot; series have demonstrated this approach. &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Multifocal Displays]] (Multi-Plane Displays)&amp;#039;&amp;#039;&amp;#039;: Instead of a single, continuously adjusting focus, these displays present content on multiple discrete focal planes simultaneously or in rapid succession. &amp;lt;ref name=&amp;quot;AkeleyMultifocal2004&amp;quot;&amp;gt;Akeley, Kurt, Watt, S. J., Girshick, A. R., &amp;amp; Banks, M. S. (2004). A stereo display prototype with multiple focal distances. *ACM transactions on graphics (TOG), 23*(3), 804-813.&amp;lt;/ref&amp;gt; The visual system can then accommodate to the plane closest to the target object&amp;#039;s depth. Examples include stacked display panels or systems using switchable lenses. Magic Leap 1 used a two-plane system. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; While reducing VAC, they can still exhibit quantization effects if an object lies between planes, and complexity increases with the number of planes.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l107&quot;&gt;Line 107:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 107:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Light Field Displays]]&amp;#039;&amp;#039;&amp;#039;: These displays aim to reconstruct the [[light field]] of a scene – the distribution of light rays in space – more completely. By emitting rays with the correct origin and direction, they allow the viewer&amp;#039;s eye to naturally focus at different depths within the virtual scene, as if viewing a real 3D environment. &amp;lt;ref name=&amp;quot;WetzsteinLightField2011&amp;quot;&amp;gt;Wetzstein, Gordon, et al. (2011). Computational plenoptic imaging. *Computer Graphics Forum, 30*(8), 2397-2426.&amp;lt;/ref&amp;gt; &amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. *ACM Transactions on Graphics (TOG), 32*(6), 1-10. doi:10.1145/2508363.2508366&amp;lt;/ref&amp;gt; This can potentially solve the VAC without requiring eye tracking. However, generating the necessary dense light fields poses significant computational and hardware challenges, often involving trade-offs between resolution, field of view, and form factor. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; Companies like CREAL are developing light field modules for AR/VR. &amp;lt;ref name=&amp;quot;WikiVAC&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Light Field Displays]]&amp;#039;&amp;#039;&amp;#039;: These displays aim to reconstruct the [[light field]] of a scene – the distribution of light rays in space – more completely. By emitting rays with the correct origin and direction, they allow the viewer&amp;#039;s eye to naturally focus at different depths within the virtual scene, as if viewing a real 3D environment. &amp;lt;ref name=&amp;quot;WetzsteinLightField2011&amp;quot;&amp;gt;Wetzstein, Gordon, et al. (2011). Computational plenoptic imaging. *Computer Graphics Forum, 30*(8), 2397-2426.&amp;lt;/ref&amp;gt; &amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. *ACM Transactions on Graphics (TOG), 32*(6), 1-10. doi:10.1145/2508363.2508366&amp;lt;/ref&amp;gt; This can potentially solve the VAC without requiring eye tracking. However, generating the necessary dense light fields poses significant computational and hardware challenges, often involving trade-offs between resolution, field of view, and form factor. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; Companies like CREAL are developing light field modules for AR/VR. &amp;lt;ref name=&amp;quot;WikiVAC&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
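One common way to make the 4D light field concrete is the two-plane parameterization L(u, v, s, t), where a ray is identified by its intersections with two parallel planes; refocusing to a new depth is then a shear of that 4D function followed by averaging over the aperture. A toy "shear-and-add" sketch on a synthetic light field, assuming numpy (this illustrates the representation, not any display vendor's pipeline):

```python
import numpy as np

def refocus(lf, alpha):
    """Shear-and-add refocus of a light field lf[u, v, s, t]:
    shift each angular sample (u, v) in proportion to (1 - 1/alpha),
    then average over the aperture. alpha rescales the focal depth."""
    U, V, S, T = lf.shape
    out = np.zeros((S, T))
    shear = 1.0 - 1.0 / alpha
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * shear))
            dv = int(round((v - V // 2) * shear))
            out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Usage: lf = np.random.rand(5, 5, 64, 64); img = refocus(lf, alpha=1.2)
```

The resolution trade-off noted above falls directly out of this representation: every extra angular sample (u, v) consumes display pixels that could otherwise carry spatial detail.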
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*   &#039;&#039;&#039;[[Holographic Displays]]&#039;&#039;&#039;: True [[holography|holographic]] displays aim to reconstruct the wavefront of light from the virtual scene using diffraction, which would inherently provide all depth cues, including accommodation, correctly and continuously. &amp;lt;ref name=&quot;MaimoneHolo2017&quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. *ACM Transactions on Graphics (TOG), 36*(4), 1-16. doi:10.1145/3072959.3073610&amp;lt;/ref&amp;gt; This is often considered an ultimate goal for visual displays. However, current implementations suitable for near-eye displays face major challenges in computational load, achievable [[field of view]], image quality (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;[[speckle noise]]), and component size. &amp;lt;ref name=&quot;MaimoneHolo2017&quot;/&amp;gt; &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*   &#039;&#039;&#039;[[Holographic Displays]]&#039;&#039;&#039;: True [[holography|holographic]] displays aim to reconstruct the wavefront of light from the virtual scene using diffraction, which would inherently provide all depth cues, including accommodation, correctly and continuously. &amp;lt;ref name=&quot;MaimoneHolo2017&quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. *ACM Transactions on Graphics (TOG), 36*(4), 1-16. doi:10.1145/3072959.3073610&amp;lt;/ref&amp;gt; This is often considered an ultimate goal for visual displays. However, current implementations suitable for near-eye displays face major challenges in computational load, achievable [[field of view]], image quality (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;[[speckle noise]]), and component size. &amp;lt;ref name=&quot;MaimoneHolo2017&quot;/&amp;gt; &amp;lt;ref name=&quot;ARInsiderVAC&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
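Wavefront reconstruction can be illustrated with the simplest possible case: the phase of a spherical wave from a single scene point, sampled across the modulator plane and wrapped for a phase-only SLM; a full scene would complex-sum many such waves before taking the phase. A minimal sketch with illustrative parameters (8 µm pixel pitch, green laser), assuming numpy:

```python
import numpy as np

def point_hologram_phase(nx=512, ny=512, pitch=8e-6, z=0.2,
                         wavelength=532e-9):
    """Phase pattern that reconstructs a point source at depth z:
    the spherical wave's phase 2*pi*r/lambda at each hologram pixel,
    wrapped to [0, 2*pi) for a phase-only modulator."""
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    xx, yy = np.meshgrid(x, y)
    r = np.sqrt(xx**2 + yy**2 + z**2)  # source-to-pixel distance
    return (2 * np.pi / wavelength * r) % (2 * np.pi)

phase = point_hologram_phase()  # array to upload to the SLM
```

Even this toy case hints at the computational load the paragraph mentions: the phase must be recomputed per point, per color, per frame.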
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Retinal Projection]] (Retinal Scan Displays)&amp;#039;&amp;#039;&amp;#039;: These systems bypass intermediate screens and project images directly onto the viewer&amp;#039;s retina, often using low-power lasers or micro-LED arrays. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; Because the image is formed on the retina, it can appear in focus regardless of the eye&amp;#039;s accommodation state, potentially eliminating VAC. This approach could enable very compact form factors. Challenges include achieving a sufficiently large [[eye-box]] (the area where the eye can see the image), potential sensitivity to eye floaters or optical path debris, and safety considerations. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; Examples include the discontinued North Focals smart glasses.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;[[Retinal Projection]] (Retinal Scan Displays)&amp;#039;&amp;#039;&amp;#039;: These systems bypass intermediate screens and project images directly onto the viewer&amp;#039;s retina, often using low-power lasers or micro-LED arrays. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; Because the image is formed on the retina, it can appear in focus regardless of the eye&amp;#039;s accommodation state, potentially eliminating VAC. This approach could enable very compact form factors. Challenges include achieving a sufficiently large [[eye-box]] (the area where the eye can see the image), potential sensitivity to eye floaters or optical path debris, and safety considerations. &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt; Examples include the discontinued North Focals smart glasses.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l127&quot;&gt;Line 127:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 127:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;Visual Fatigue and Discomfort:&amp;#039;&amp;#039;&amp;#039; The [[vergence-accommodation conflict]] is a primary contributor to eye strain, headaches, blurred vision, and general visual discomfort, especially during prolonged use. &amp;lt;ref name=&amp;quot;HoffmanVAC2008&amp;quot;/&amp;gt; &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;Visual Fatigue and Discomfort:&amp;#039;&amp;#039;&amp;#039; The [[vergence-accommodation conflict]] is a primary contributor to eye strain, headaches, blurred vision, and general visual discomfort, especially during prolonged use. &amp;lt;ref name=&amp;quot;HoffmanVAC2008&amp;quot;/&amp;gt; &amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;Spatial Perception Errors:&amp;#039;&amp;#039;&amp;#039; Inaccurate or conflicting depth cues can lead to misjudgments of distance, size, and the spatial relationships between objects, potentially affecting user performance in tasks requiring precise spatial awareness or interaction. &amp;lt;ref name=&amp;quot;JonesVAC2008&amp;quot;/&amp;gt; &amp;lt;ref name=&amp;quot;WillemsenHMD2009&amp;quot;&amp;gt;Willemsen, Peter, Colton, M. B., Creem-Regehr, S. H., &amp;amp; Thompson, W. B. (2009). The effects of head-mounted display mechanical properties and field of view on distance judgments in virtual environments. &amp;#039;&amp;#039;ACM Transactions on Applied Perception (TAP), 6&amp;#039;&amp;#039;(2), 1-14.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;Spatial Perception Errors:&amp;#039;&amp;#039;&amp;#039; Inaccurate or conflicting depth cues can lead to misjudgments of distance, size, and the spatial relationships between objects, potentially affecting user performance in tasks requiring precise spatial awareness or interaction. &amp;lt;ref name=&amp;quot;JonesVAC2008&amp;quot;/&amp;gt; &amp;lt;ref name=&amp;quot;WillemsenHMD2009&amp;quot;&amp;gt;Willemsen, Peter, Colton, M. B., Creem-Regehr, S. H., &amp;amp; Thompson, W. B. (2009). The effects of head-mounted display mechanical properties and field of view on distance judgments in virtual environments. &amp;#039;&amp;#039;ACM Transactions on Applied Perception (TAP), 6&amp;#039;&amp;#039;(2), 1-14.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Simulator Sickness]]:&#039;&#039;&#039; Inconsistencies between visual depth cues and other sensory information (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;vestibular signals from the inner ear) can contribute to symptoms like nausea, disorientation, and dizziness. &amp;lt;ref name=&quot;VosVAC2005&quot;/&amp;gt; &amp;lt;ref name=&quot;WannAdaptation1995&quot;&amp;gt;Wann, John P., Simon Rushton, and Mark Mon-Williams. (1995). Natural problems for stereoscopic depth perception in virtual environments. *Vision research, 35*(19), 2731-2736.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Simulator Sickness]]:&#039;&#039;&#039; Inconsistencies between visual depth cues and other sensory information (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;vestibular signals from the inner ear) can contribute to symptoms like nausea, disorientation, and dizziness. &amp;lt;ref name=&quot;VosVAC2005&quot;/&amp;gt; &amp;lt;ref name=&quot;WannAdaptation1995&quot;&amp;gt;Wann, John P., Simon Rushton, and Mark Mon-Williams. (1995). Natural problems for stereoscopic depth perception in virtual environments. *Vision research, 35*(19), 2731-2736.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Design Considerations for VR/AR Developers==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==Design Considerations for VR/AR Developers==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l144&quot;&gt;Line 144:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 144:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;Perceptual Adaptation:&amp;#039;&amp;#039;&amp;#039; Studying how users adapt to inconsistent or unnatural depth cues over time, potentially leading to training paradigms or design strategies that improve comfort on current hardware. &amp;lt;ref name=&amp;quot;WannAdaptation1995&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&amp;#039;&amp;#039;&amp;#039;Perceptual Adaptation:&amp;#039;&amp;#039;&amp;#039; Studying how users adapt to inconsistent or unnatural depth cues over time, potentially leading to training paradigms or design strategies that improve comfort on current hardware. &amp;lt;ref name=&amp;quot;WannAdaptation1995&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;Personalized Depth Rendering:&#039;&#039;&#039; Calibrating depth cue presentation based on individual user characteristics (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;IPD, visual acuity, refractive error, sensitivity to VAC) for optimized comfort and performance. &amp;lt;ref name=&quot;WillemsenHMD2009&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;Personalized Depth Rendering:&#039;&#039;&#039; Calibrating depth cue presentation based on individual user characteristics (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;IPD, visual acuity, refractive error, sensitivity to VAC) for optimized comfort and performance. &amp;lt;ref name=&quot;WillemsenHMD2009&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Cross-modal interaction|Cross-Modal Integration]]:&#039;&#039;&#039; Investigating how integrating depth information from other senses (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;[[spatial audio]], [[haptic feedback]]) can enhance or reinforce visual depth perception. &amp;lt;ref name=&quot;ErnstCrossModal2002&quot;&amp;gt;Ernst, Marc O., and Martin S. Banks. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. &#039;&#039;Nature, 415&#039;&#039;(6870), 429-433.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Cross-modal interaction|Cross-Modal Integration]]:&#039;&#039;&#039; Investigating how integrating depth information from other senses (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;[[spatial audio]], [[haptic feedback]]) can enhance or reinforce visual depth perception. &amp;lt;ref name=&quot;ErnstCrossModal2002&quot;&amp;gt;Ernst, Marc O., and Martin S. Banks. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. &#039;&#039;Nature, 415&#039;&#039;(6870), 429-433.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Neural rendering|Neural Rendering]] and AI:&#039;&#039;&#039; Utilizing machine learning techniques (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;[[Neural Radiance Fields]] (NeRF)) to potentially render complex scenes with perceptually accurate depth cues more efficiently by learning implicit scene representations. &amp;lt;ref name=&quot;MildenhallNeRF2020&quot;&amp;gt;Mildenhall, Ben, et al. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. *European conference on computer vision*. Springer, Cham.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;*&#039;&#039;&#039;[[Neural rendering|Neural Rendering]] and AI:&#039;&#039;&#039; Utilizing machine learning techniques (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;[[Neural Radiance Fields]] (NeRF)) to potentially render complex scenes with perceptually accurate depth cues more efficiently by learning implicit scene representations. &amp;lt;ref name=&quot;MildenhallNeRF2020&quot;&amp;gt;Mildenhall, Ben, et al. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. *European conference on computer vision*. Springer, Cham.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==References==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==References==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34504&amp;oldid=prev</id>
		<title>Xinreality: /* References */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34504&amp;oldid=prev"/>
		<updated>2025-04-25T12:00:48Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;References&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 12:00, 25 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l176&quot;&gt;Line 176:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 176:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;FacebookVAC2019&amp;quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;FacebookVAC2019&amp;quot;&amp;gt;Facebook Research. (2019, March 28). *Vergence-Accommodation Conflict: Facebook Research Explains Why Varifocal Matters For Future VR*. YouTube. [https://www.youtube.com/watch?v=YWA4gVibKJE]&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;KramidaVAC2016&amp;quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;KramidaVAC2016&amp;quot;&amp;gt;Kramida, Gregory. (2016). Resolving the vergence-accommodation conflict in head-mounted displays. *IEEE transactions on visualization and computer graphics, 22*(7), 1912-1931.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;VosVAC2005&quot;&amp;gt;Vos, G. A., Barfield, W., &amp;amp; Yamamoto, T. (2005). The Virtual Vertical: Depth Perception and Discomfort in Stereoscopic Displays. *Presence: Teleoperators &amp;amp; Virtual Environments, 14*(6), 649-664.&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;JonesVAC2008&amp;quot;&amp;gt;Jones, J. A., Swan II, J. E., Singh, G., &amp;amp; Ellis, S. R. (2008). The effects of virtual reality, augmented reality, and motion parallax on egocentric depth perception. *Proceedings of the 5th symposium on Applied perception in graphics and visualization*, 9-16.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;JonesVAC2008&amp;quot;&amp;gt;Jones, J. A., Swan II, J. E., Singh, G., &amp;amp; Ellis, S. R. (2008). The effects of virtual reality, augmented reality, and motion parallax on egocentric depth perception. *Proceedings of the 5th symposium on Applied perception in graphics and visualization*, 9-16.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;KonradVAC2016&amp;quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;KonradVAC2016&amp;quot;&amp;gt;Konrad, R., Cooper, E. A., &amp;amp; Banks, M. S. (2016). Towards the next generation of virtual and augmented reality displays. *Optics Express, 24*(15), 16800-16809. doi:10.1364/OE.24.016800&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l184&quot;&gt;Line 184:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 183:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. *ACM Transactions on Graphics (TOG), 32*(6), 1-10. doi:10.1145/2508363.2508366&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;Lanman2013&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-eye light field displays. *ACM Transactions on Graphics (TOG), 32*(6), 1-10. doi:10.1145/2508363.2508366&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;MaimoneHolo2017&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. *ACM Transactions on Graphics (TOG), 36*(4), 1-16. doi:10.1145/3072959.3073610&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;MaimoneHolo2017&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. *ACM Transactions on Graphics (TOG), 36*(4), 1-16. doi:10.1145/3072959.3073610&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;HowardRogers2012Vol3&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 3: Other Mechanisms of Depth Perception*. Oxford University Press.&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;PubMedOcclusionAR&quot;&amp;gt;Kiyokawa, K., Billinghurst, M., Hayes, S. E., &amp;amp; Gupta, A. (2003). An occlusion-capable optical see-through head mount display for supporting co-located collaboration. *Proceedings. ISMAR 2003. Second IEEE and ACM International Symposium on Mixed and Augmented Reality*, 133-141. doi:10.1109/ISMAR.2003.1240688&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;ChangEyeTrack2020&quot;&amp;gt;Chang, Jen-Hao Rick, et al. (2020). Toward a unified framework for hand-eye coordination in virtual reality. *2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)*. IEEE.&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;WillemsenHMD2009&quot;&amp;gt;Willemsen, Peter, Colton, M. B., Creem-Regehr, S. H., &amp;amp; Thompson, W. B. (2009). The effects of head-mounted display mechanical properties and field of view on distance judgments in virtual environments. *ACM Transactions on Applied Perception (TAP), 6*(2), 1-14.&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;WannAdaptation1995&amp;quot;&amp;gt;Wann, John P., Simon Rushton, and Mark Mon-Williams. (1995). Natural problems for stereoscopic depth perception in virtual environments. *Vision research, 35*(19), 2731-2736.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;WannAdaptation1995&amp;quot;&amp;gt;Wann, John P., Simon Rushton, and Mark Mon-Williams. (1995). Natural problems for stereoscopic depth perception in virtual environments. *Vision research, 35*(19), 2731-2736.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;ShibataComfortZone2011&amp;quot;&amp;gt;Shibata, Takashi, Kim, J., Hoffman, D. M., &amp;amp; Banks, M. S. (2011). The zone of comfort: Predicting visual discomfort with stereo displays. *Journal of vision, 11*(8), 11-11.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;ShibataComfortZone2011&amp;quot;&amp;gt;Shibata, Takashi, Kim, J., Hoffman, D. M., &amp;amp; Banks, M. S. (2011). The zone of comfort: Predicting visual discomfort with stereo displays. *Journal of vision, 11*(8), 11-11.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;DuchowskiDoF2014&amp;quot;&amp;gt;Duchowski, Andrew T., et al. (2014). Reducing visual discomfort with HMDs using dynamic depth of field. *IEEE computer graphics and applications, 34*(5), 34-41.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;DuchowskiDoF2014&amp;quot;&amp;gt;Duchowski, Andrew T., et al. (2014). Reducing visual discomfort with HMDs using dynamic depth of field. *IEEE computer graphics and applications, 34*(5), 34-41.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;ErnstCrossModal2002&quot;&amp;gt;Ernst, Marc O., and Martin S. Banks. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. *Nature, 415*(6870), 429-433.&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;MildenhallNeRF2020&amp;quot;&amp;gt;Mildenhall, Ben, et al. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. *European conference on computer vision*. Springer, Cham.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;MildenhallNeRF2020&amp;quot;&amp;gt;Mildenhall, Ben, et al. (2020). NeRF: Representing scenes as neural radiance fields for view synthesis. *European conference on computer vision*. Springer, Cham.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/references&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;/references&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34503&amp;oldid=prev</id>
		<title>Xinreality at 11:59, 25 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34503&amp;oldid=prev"/>
		<updated>2025-04-25T11:59:16Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;https://vrarwiki.com/index.php?title=Depth_cue&amp;amp;diff=34503&amp;amp;oldid=34497&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34497&amp;oldid=prev</id>
		<title>Xinreality at 10:46, 25 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34497&amp;oldid=prev"/>
		<updated>2025-04-25T10:46:04Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 10:46, 25 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l1&quot;&gt;Line 1:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 1:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&#039;&#039;&#039;&lt;/del&gt;Depth cue&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&#039;&#039;&#039; &lt;/del&gt;is any of a variety of perceptual signals that &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;allows &lt;/del&gt;the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.b.VisualDepthCues.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;{{see also|Terms|Technical Terms}}&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Depth cue&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;is any of a variety of perceptual signals that &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;allow &lt;/ins&gt;the [[human visual system]] to infer the distance or depth of objects in a scene, enabling the brain to transform two-dimensional retinal images into a perception of three-dimensional space. &amp;lt;ref name=&quot;HowardRogers2012&quot;&amp;gt;Howard, I. P., &amp;amp; Rogers, B. J. (2012). *Perceiving in Depth, Volume 1: Basic Mechanisms*. Oxford University Press.&amp;lt;/ref&amp;gt; These cues are crucial for navigating the three-dimensional world and are fundamental to creating convincing, immersive, and comfortable experiences in [[Virtual Reality]] (VR) and [[Augmented Reality]] (AR), where reproducing accurate depth perception presents significant technical challenges. &amp;lt;ref name=&quot;HowardRogers1995&quot;&amp;gt;Howard, Ian P., and Brian J. Rogers. (1995). *Binocular vision and stereopsis*. Oxford University Press.&amp;lt;/ref&amp;gt; The brain automatically fuses multiple available depth cues to build a robust model of the spatial layout of the environment. &amp;lt;ref name=&quot;HITLCues1&quot;&amp;gt;(2014-06-20) Visual Depth Cues - Human Interface Technology Laboratory. Retrieved April 25, 2025, from https://www.hitl.washington.edu/projects/knowledge-base/virtual-worlds/EVE/III.A.1.b.VisualDepthCues.html&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Classification of Depth Cues ==&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34496&amp;oldid=prev</id>
		<title>Xinreality: /* References */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34496&amp;oldid=prev"/>
		<updated>2025-04-25T10:45:34Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;References&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 10:45, 25 April 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l173&quot;&gt;Line 173:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 173:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;ScienceLearnParallax&amp;quot;&amp;gt;Depth perception. Science Learning Hub – Pokapū Akoranga Pūtaiao. Retrieved April 25, 2025, from https://www.sciencelearn.org.nz/resources/107-depth-perception&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;ScienceLearnParallax&amp;quot;&amp;gt;Depth perception. Science Learning Hub – Pokapū Akoranga Pūtaiao. Retrieved April 25, 2025, from https://www.sciencelearn.org.nz/resources/107-depth-perception&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;WallachOConnell1953&amp;quot;&amp;gt;Wallach, H., &amp;amp; O&amp;#039;Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), 205–217. doi:10.1037/h0058000&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;WallachOConnell1953&amp;quot;&amp;gt;Wallach, H., &amp;amp; O&amp;#039;Connell, D. N. (1953). The kinetic depth effect. *Journal of Experimental Psychology, 45*(4), 205–217. doi:10.1037/h0058000&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;ref name=&quot;NawrotMotion2003&quot;&amp;gt;Nawrot, Mark. (2003). Eye movements provide the extra-retinal signal required for the perception of depth from motion parallax. *Vision research, 43*(14), 1553-1562.&amp;lt;/ref&amp;gt;&lt;/del&gt;&lt;/div&gt;&lt;/td&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-added&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;KudoOcularParallax1988&amp;quot;&amp;gt;Kudo, Hiromi, and Hirohiko Ono. (1988). Depth perception, ocular parallax, and stereopsis. *Perception, 17*(4), 473-480.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;KudoOcularParallax1988&amp;quot;&amp;gt;Kudo, Hiromi, and Hirohiko Ono. (1988). Depth perception, ocular parallax, and stereopsis. *Perception, 17*(4), 473-480.&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;ARInsiderVAC&amp;quot;&amp;gt;(2024-01-29) Understanding Vergence-Accommodation Conflict in AR/VR Headsets - AR Insider. Retrieved April 25, 2025, from https://arinsider.co/2024/01/29/understanding-vergence-accommodation-conflict-in-ar-vr-headsets/&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34495&amp;oldid=prev</id>
		<title>Xinreality at 10:45, 25 April 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Depth_cue&amp;diff=34495&amp;oldid=prev"/>
		<updated>2025-04-25T10:45:16Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;https://vrarwiki.com/index.php?title=Depth_cue&amp;amp;diff=34495&amp;amp;oldid=34494&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
</feed>