<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://vrarwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=RealEditor</id>
	<title>VR &amp; AR Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://vrarwiki.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=RealEditor"/>
	<link rel="alternate" type="text/html" href="https://vrarwiki.com/wiki/Special:Contributions/RealEditor"/>
	<updated>2026-04-15T07:15:58Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Unknown_Page&amp;diff=36458</id>
		<title>Unknown Page</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Unknown_Page&amp;diff=36458"/>
		<updated>2025-09-09T16:58:08Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The page you requested cannot be found. [[Main Page|Go Home]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Unknown_Page&amp;diff=36456</id>
		<title>Unknown Page</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Unknown_Page&amp;diff=36456"/>
		<updated>2025-09-09T16:57:31Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Created page with &amp;quot;The page you requested cannot be found.  Go Home&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The page you requested cannot be found.&lt;br /&gt;
&lt;br /&gt;
[[Main Page|Go Home]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Samsung_Gear_VR_Controller&amp;diff=36429</id>
		<title>Samsung Gear VR Controller</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Samsung_Gear_VR_Controller&amp;diff=36429"/>
		<updated>2025-09-05T14:59:28Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Device Infobox&lt;br /&gt;
|image={{#ev:youtube|o4MXRwB04KI|350}}&lt;br /&gt;
|VR/AR=&lt;br /&gt;
|Type=[[Input Device]], [[Motion Tracker]]&lt;br /&gt;
|Subtype=[[Hands/Fingers Tracking]]&lt;br /&gt;
|Platform=[[Samsung Gear VR]]&lt;br /&gt;
|Creator=&lt;br /&gt;
|Developer=[[Samsung]]&lt;br /&gt;
|Manufacturer=&lt;br /&gt;
|Operating System=[[Android]]&lt;br /&gt;
|Versions=&lt;br /&gt;
|Requires=&lt;br /&gt;
|Predecessor=&lt;br /&gt;
|Successor=&lt;br /&gt;
|CPU=&lt;br /&gt;
|GPU=&lt;br /&gt;
|HPU=&lt;br /&gt;
|Memory=&lt;br /&gt;
|Storage=&lt;br /&gt;
|Display=&lt;br /&gt;
|Resolution=&lt;br /&gt;
|Pixel Density=&lt;br /&gt;
|Refresh Rate=&lt;br /&gt;
|Persistence=&lt;br /&gt;
|Precision=&lt;br /&gt;
|Field of View=&lt;br /&gt;
|Optics=&lt;br /&gt;
|Tracking=3DOF&lt;br /&gt;
|Rotational Tracking=IMUs&lt;br /&gt;
|Positional Tracking=&lt;br /&gt;
|Update Rate=&lt;br /&gt;
|Tracking Volume=&lt;br /&gt;
|Latency=&lt;br /&gt;
|Audio=&lt;br /&gt;
|Camera=&lt;br /&gt;
|Sensors=&lt;br /&gt;
|Input=trackpad, trigger, 4 buttons&lt;br /&gt;
|Connectivity=&lt;br /&gt;
|Power=&lt;br /&gt;
|Weight=&lt;br /&gt;
|Size=&lt;br /&gt;
|Cable Length=&lt;br /&gt;
|Release Date=April 21, 2017&lt;br /&gt;
|Price=$39&lt;br /&gt;
|Website=&lt;br /&gt;
}}&lt;br /&gt;
[[Samsung]] and [[Oculus]] have jointly developed a [[Input Devices|motion controller]] for the [[Gear VR]] [[headset]]. The Gear VR Controller has [[IMU]]s for [[rotational tracking]] only.&lt;br /&gt;
&lt;br /&gt;
Before the launch of the [[Gear VR Controller]], users had two ways to control the virtual world: a Bluetooth gamepad, or the trackpad and controls on the headset itself. Both methods offered limited features and functionality. The new Samsung Gear VR Controller offers users a more interactive, immersive, and intuitive virtual reality experience.&lt;br /&gt;
&lt;br /&gt;
Samsung launched the controller along with the company’s latest Gear headset. It’s believed that the controller is compatible with all previous versions of the Gear VR headsets, except the earliest release. The Samsung Gear VR Controller can be bought along with the virtual reality headset or separately.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The Samsung Gear VR Controller is no bigger than the [[Daydream View Controller]], but it’s designed to be more comfortable, practical, and efficient. The more ergonomic design allows users to have long gaming sessions without straining their hands and fingers. &lt;br /&gt;
&lt;br /&gt;
Similarities between the Gear VR Controller and the [[Oculus’s Touch]] gamepad are unmissable, even though the Oculus gamepad has a better and smoother finish. The Samsung Gear VR Controller has a smooth texture that’s easy to touch and hold. There is no denying that Samsung has put a lot of thought and hard work into making the perfect controller for its phone-supported virtual reality headset.&lt;br /&gt;
&lt;br /&gt;
==The Controls==&lt;br /&gt;
When users hold the controller, the thumb naturally rests on the trackpad, which also acts as a button. Below the trackpad, the Home button sits on the right and the Back button on the left. Underneath the two buttons are the +/- volume controls. On the rear side of the controller, where the index finger naturally rests, there is a trigger button. The controller fits the hand well; the user need not awkwardly move the fingers to reach the buttons.&lt;br /&gt;
&lt;br /&gt;
The index-finger trigger is the most interesting feature of the Samsung Gear VR Controller. The trigger serves more than one function: although its basic use is shooting, it can also be used to make the player hold an object in the virtual world. Normally, shooting and holding functions are seen only in high-end virtual reality systems such as the HTC Vive and Oculus Rift.&lt;br /&gt;
&lt;br /&gt;
The controller does improve the previously [[gaze]]-only input of the Gear VR. If the controller drifts, you can hold down the Home button at any time to quickly recenter it.&lt;br /&gt;
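&lt;br /&gt;
Recentering a 3DOF controller amounts to redefining which heading counts as forward. A minimal sketch of the idea, with a hypothetical YawRecenter helper (our illustration, not Oculus or Samsung code):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal sketch: recentering a drifting 3DOF (rotation-only) controller.&lt;br /&gt;
# When the user holds Home, the current yaw is stored as the reference;&lt;br /&gt;
# later readings are reported relative to it.&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
class YawRecenter:&lt;br /&gt;
    def __init__(self):&lt;br /&gt;
        self.reference_yaw = 0.0  # radians&lt;br /&gt;
&lt;br /&gt;
    def recenter(self, current_yaw):&lt;br /&gt;
        # Home button held: the current heading becomes the new forward.&lt;br /&gt;
        self.reference_yaw = current_yaw&lt;br /&gt;
&lt;br /&gt;
    def corrected_yaw(self, current_yaw):&lt;br /&gt;
        # Wrap the difference into [-pi, pi).&lt;br /&gt;
        delta = current_yaw - self.reference_yaw&lt;br /&gt;
        return (delta + math.pi) % (2.0 * math.pi) - math.pi&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;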
&lt;br /&gt;
==Setting Up the Gear VR Controller==&lt;br /&gt;
Samsung Gear VR Controller setup involves several steps, making the process slightly more complicated than for other controllers. Begin by inserting 2 AAA batteries into the controller and pairing it over Bluetooth.&lt;br /&gt;
&lt;br /&gt;
The controller cannot be used right away: it first needs to be calibrated. The Oculus Home app provides a set of tests and instructions to help users calibrate the controller. The user will be asked to place the controller on a flat surface, wave it in the air in a specific manner, and perform a few other simple tasks. Once the controller is calibrated, the user is free to use it with any compatible app.&lt;br /&gt;
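&lt;br /&gt;
The flat-surface step is typical of [[IMU]] calibration: holding the controller still lets the software estimate the gyroscope’s resting bias. A minimal sketch of that single step, with hypothetical helpers (our illustration, not the actual Oculus procedure):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal sketch: estimating gyroscope bias while the controller&lt;br /&gt;
# rests on a flat surface. The average angular rate at rest&lt;br /&gt;
# approximates the bias, which is subtracted from later samples&lt;br /&gt;
# to reduce drift.&lt;br /&gt;
def estimate_gyro_bias(samples):&lt;br /&gt;
    # samples: list of (gx, gy, gz) angular rates captured at rest&lt;br /&gt;
    n = float(len(samples))&lt;br /&gt;
    bx = sum(s[0] for s in samples) / n&lt;br /&gt;
    by = sum(s[1] for s in samples) / n&lt;br /&gt;
    bz = sum(s[2] for s in samples) / n&lt;br /&gt;
    return (bx, by, bz)&lt;br /&gt;
&lt;br /&gt;
def debias(sample, bias):&lt;br /&gt;
    return tuple(v - b for v, b in zip(sample, bias))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;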
&lt;br /&gt;
==Supporting Apps==&lt;br /&gt;
As of today, around 20 Gear VR Controller-compatible apps can be found in the Oculus store. Oculus has promised to increase the number of compatible apps to 50 in the coming months. At present, around 700 apps are compatible with the Samsung Gear VR headset, so it might take some time for the controller to support all of them. [[Night Sky]], [[Star Chart]], and [[Drop Dead]] are just some of the compatible apps made available for the Gear Controller.&lt;br /&gt;
&lt;br /&gt;
Some experts compare the Gear Controller to Daydream’s remote. Neither gadget is a high-end controller, but the motion tracking, response, and accuracy of both are excellent.&lt;br /&gt;
&lt;br /&gt;
==Oculus Home==&lt;br /&gt;
The Samsung Gear VR Controller’s use extends beyond games and apps. Inside the Oculus Home interface, the controller can be used as a navigation tool. The Gear Controller makes menu selection and internet browsing an enjoyable experience. The higher-resolution picture quality offered by Oculus makes web page browsing and reading fun.&lt;br /&gt;
&lt;br /&gt;
==PC and Windows Support==&lt;br /&gt;
PC and Windows support mainly depends on third-party applications like [https://github.com/ShimuraWorkshop/Gear-VR-Controller-Motion-Pointer-for-Windows/ Gear VR Controller Motion Pointer for Windows]. Without the need for the Gear VR headset or a mobile phone, such applications, with their button and motion remapping capability, can convert the controller into an air mouse, motion pointer, wireless presenter, or even a gyro and motion gamepad. Light gun emulation is possible if supported by games or emulators.&lt;br /&gt;
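&lt;br /&gt;
As a rough illustration of the air-mouse idea, controller yaw and pitch can be mapped to cursor coordinates. This is a hypothetical sketch, not code from the application above; the names and the gain constant are ours:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Minimal sketch: mapping 3DOF controller orientation to a screen&lt;br /&gt;
# cursor (air mouse). Yaw and pitch in radians are scaled into&lt;br /&gt;
# pixel coordinates around the screen center.&lt;br /&gt;
SCREEN_W, SCREEN_H = 1920, 1080&lt;br /&gt;
GAIN = 900.0  # pixels per radian; tune to taste&lt;br /&gt;
&lt;br /&gt;
def orientation_to_cursor(yaw, pitch):&lt;br /&gt;
    # Positive yaw (turn right) moves the cursor right;&lt;br /&gt;
    # positive pitch (tilt up) moves it up.&lt;br /&gt;
    x = SCREEN_W / 2 + GAIN * yaw&lt;br /&gt;
    y = SCREEN_H / 2 - GAIN * pitch&lt;br /&gt;
    # Clamp to the screen bounds.&lt;br /&gt;
    x = min(max(x, 0), SCREEN_W - 1)&lt;br /&gt;
    y = min(max(y, 0), SCREEN_H - 1)&lt;br /&gt;
    return int(x), int(y)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;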
&lt;br /&gt;
These applications open the door to adapting the lightweight controller to different flat-screen, non-VR settings. For example, the controller can support gyro aiming or light gun emulation in titles such as the THE HOUSE OF THE DEAD Remake series; classic rail shooters such as Virtua Cop and Time Crisis running on the MAME emulator; Virtua Cop 3 on the Cxbx-Reloaded emulator; PS3 games like Time Crisis: Razing Storm (Time Crisis 4 Arcade Ver., Razing Storm, Deadstorm Pirates) on the RPCS3 emulator; and Time Crisis 5, Operation GHOST, and others on the TeknoParrot emulator.&lt;br /&gt;
&lt;br /&gt;
Some of these applications also make Bluetooth pairing simple, with no manual pairing needed.&lt;br /&gt;
&lt;br /&gt;
==Cost==&lt;br /&gt;
The [[Samsung Gear VR]] headset and the controller together cost around $129. The VR controller alone can be bought for $39. The company is offering the virtual reality controller for free to those who pre-order Samsung’s Galaxy S8 phone. &lt;br /&gt;
&lt;br /&gt;
The price may seem a bit high if you compare Samsung’s VR headset and controller with Google’s Daydream View. But the Gear VR Controller is a much more sophisticated gadget with better design and features. The Samsung Gear VR Controller is a high-end gadget available at a low-end price.&lt;br /&gt;
&lt;br /&gt;
It should be noted that, without third-party applications, the Gear VR headset and controller are compatible only with Samsung phones. The Samsung Galaxy S6 and later models can be used with the Gear headset. Samsung’s collaboration with Oculus is sure to speed up the evolution of virtual reality headsets and accessories.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*https://www.engadget.com/2017/04/18/samsung-gear-vr-controller-review/&lt;br /&gt;
*http://www.theverge.com/2017/4/18/15331602/samsung-oculus-gear-vr-motion-controller-review&lt;br /&gt;
*http://www.roadtovr.com/samsung-gear-vr-with-controller-review/ &lt;br /&gt;
*http://www.samsung.com/global/galaxy/gear-vr/&lt;br /&gt;
*https://github.com/ShimuraWorkshop/Gear-VR-Controller-Motion-Pointer-for-Windows/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Input Devices]] [[Category:Samsung Gear VR]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Samsung_Gear_VR_Controller&amp;diff=36428</id>
		<title>Samsung Gear VR Controller</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Samsung_Gear_VR_Controller&amp;diff=36428"/>
		<updated>2025-09-05T14:58:30Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Device Infobox&lt;br /&gt;
|image={{#ev:youtube|o4MXRwB04KI|350}}&lt;br /&gt;
|VR/AR=&lt;br /&gt;
|Type=[[Input Device]], [[Motion Tracker]]&lt;br /&gt;
|Subtype=[[Hands/Fingers Tracking]]&lt;br /&gt;
|Platform=[[Samsung Gear VR]]&lt;br /&gt;
|Creator=&lt;br /&gt;
|Developer=[[Samsung]]&lt;br /&gt;
|Manufacturer=&lt;br /&gt;
|Operating System=[[Android]]&lt;br /&gt;
|Versions=&lt;br /&gt;
|Requires=&lt;br /&gt;
|Predecessor=&lt;br /&gt;
|Successor=&lt;br /&gt;
|CPU=&lt;br /&gt;
|GPU=&lt;br /&gt;
|HPU=&lt;br /&gt;
|Memory=&lt;br /&gt;
|Storage=&lt;br /&gt;
|Display=&lt;br /&gt;
|Resolution=&lt;br /&gt;
|Pixel Density=&lt;br /&gt;
|Refresh Rate=&lt;br /&gt;
|Persistence=&lt;br /&gt;
|Precision=&lt;br /&gt;
|Field of View=&lt;br /&gt;
|Optics=&lt;br /&gt;
|Tracking=3DOF&lt;br /&gt;
|Rotational Tracking=IMUs&lt;br /&gt;
|Positional Tracking=&lt;br /&gt;
|Update Rate=&lt;br /&gt;
|Tracking Volume=&lt;br /&gt;
|Latency=&lt;br /&gt;
|Audio=&lt;br /&gt;
|Camera=&lt;br /&gt;
|Sensors=&lt;br /&gt;
|Input=trackpad, trigger, 4 buttons&lt;br /&gt;
|Connectivity=&lt;br /&gt;
|Power=&lt;br /&gt;
|Weight=&lt;br /&gt;
|Size=&lt;br /&gt;
|Cable Length=&lt;br /&gt;
|Release Date=April 21, 2017&lt;br /&gt;
|Price=$39&lt;br /&gt;
|Website=&lt;br /&gt;
}}&lt;br /&gt;
[[Samsung]] and [[Oculus]] have jointly developed a [[Input Devices|motion controller]] for the [[Gear VR]] [[headset]]. The Gear VR Controller has [[IMU]]s for [[rotational tracking]] only. Unlike the [[Oculus Touch]], it does not have [[positional tracking]].&lt;br /&gt;
&lt;br /&gt;
Before the launch of the [[Gear VR Controller]], users had two ways to control the virtual world: a Bluetooth gamepad, or the trackpad and controls on the headset itself. Both methods offered limited features and functionality. The new Samsung Gear VR Controller offers users a more interactive, immersive, and intuitive virtual reality experience.&lt;br /&gt;
&lt;br /&gt;
Samsung launched the controller along with the company’s latest Gear headset. It’s believed that the controller is compatible with all previous versions of the Gear VR headsets, except the earliest release. The Samsung Gear VR Controller can be bought along with the virtual reality headset or separately.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The Samsung Gear VR Controller is no bigger than the [[Daydream View Controller]], but it’s designed to be more comfortable, practical, and efficient. The more ergonomic design allows users to have long gaming sessions without straining their hands and fingers. &lt;br /&gt;
&lt;br /&gt;
Similarities between the Gear VR Controller and the [[Oculus’s Touch]] gamepad are unmissable, even though the Oculus gamepad has a better and smoother finish. The Samsung Gear VR Controller has a smooth texture that’s easy to touch and hold. There is no denying that Samsung has put a lot of thought and hard work into making the perfect controller for its phone-supported virtual reality headset.&lt;br /&gt;
&lt;br /&gt;
==The Controls==&lt;br /&gt;
When users hold the controller, the thumb naturally rests on the trackpad, which also acts as a button. Below the trackpad, the Home button sits on the right and the Back button on the left. Underneath the two buttons are the +/- volume controls. On the rear side of the controller, where the index finger naturally rests, there is a trigger button. The controller fits the hand well; the user need not awkwardly move the fingers to reach the buttons.&lt;br /&gt;
&lt;br /&gt;
The index-finger trigger is the most interesting feature of the Samsung Gear VR Controller. The trigger serves more than one function: although its basic use is shooting, it can also be used to make the player hold an object in the virtual world. Normally, shooting and holding functions are seen only in high-end virtual reality systems such as the HTC Vive and Oculus Rift.&lt;br /&gt;
&lt;br /&gt;
The controller does improve the previously [[gaze]]-only input of the Gear VR. If the controller drifts, you can hold down the Home button at any time to quickly recenter it.&lt;br /&gt;
&lt;br /&gt;
==Setting Up the Gear VR Controller==&lt;br /&gt;
Samsung Gear VR Controller setup involves several steps, making the process slightly more complicated than for other controllers. Begin by inserting 2 AAA batteries into the controller and pairing it over Bluetooth.&lt;br /&gt;
&lt;br /&gt;
The controller cannot be used right away: it first needs to be calibrated. The Oculus Home app provides a set of tests and instructions to help users calibrate the controller. The user will be asked to place the controller on a flat surface, wave it in the air in a specific manner, and perform a few other simple tasks. Once the controller is calibrated, the user is free to use it with any compatible app.&lt;br /&gt;
&lt;br /&gt;
==Supporting Apps==&lt;br /&gt;
As of today, around 20 Gear VR Controller-compatible apps can be found in the Oculus store. Oculus has promised to increase the number of compatible apps to 50 in the coming months. At present, around 700 apps are compatible with the Samsung Gear VR headset, so it might take some time for the controller to support all of them. [[Night Sky]], [[Star Chart]], and [[Drop Dead]] are just some of the compatible apps made available for the Gear Controller.&lt;br /&gt;
&lt;br /&gt;
Some experts compare the Gear Controller to Daydream’s remote. Neither gadget is a high-end controller, but the motion tracking, response, and accuracy of both are excellent.&lt;br /&gt;
&lt;br /&gt;
==Oculus Home==&lt;br /&gt;
The Samsung Gear VR Controller’s use extends beyond games and apps. Inside the Oculus Home interface, the controller can be used as a navigation tool. The Gear Controller makes menu selection and internet browsing an enjoyable experience. The higher-resolution picture quality offered by Oculus makes web page browsing and reading fun.&lt;br /&gt;
&lt;br /&gt;
==PC and Windows Support==&lt;br /&gt;
PC and Windows support mainly depends on third-party applications like [https://github.com/ShimuraWorkshop/Gear-VR-Controller-Motion-Pointer-for-Windows/ Gear VR Controller Motion Pointer for Windows]. Without the need for the Gear VR headset or a mobile phone, such applications, with their button and motion remapping capability, can convert the controller into an air mouse, motion pointer, wireless presenter, or even a gyro and motion gamepad. Light gun emulation is possible if supported by games or emulators.&lt;br /&gt;
&lt;br /&gt;
These applications open the door to adapting the lightweight controller to different flat-screen, non-VR settings. For example, the controller can support gyro aiming or light gun emulation in titles such as the THE HOUSE OF THE DEAD Remake series; classic rail shooters such as Virtua Cop and Time Crisis running on the MAME emulator; Virtua Cop 3 on the Cxbx-Reloaded emulator; PS3 games like Time Crisis: Razing Storm (Time Crisis 4 Arcade Ver., Razing Storm, Deadstorm Pirates) on the RPCS3 emulator; and Time Crisis 5, Operation GHOST, and others on the TeknoParrot emulator.&lt;br /&gt;
&lt;br /&gt;
Some of these applications also make Bluetooth pairing simple, with no manual pairing needed.&lt;br /&gt;
&lt;br /&gt;
==Cost==&lt;br /&gt;
The [[Samsung Gear VR]] headset and the controller together cost around $129. The VR controller alone can be bought for $39. The company is offering the virtual reality controller for free to those who pre-order Samsung’s Galaxy S8 phone. &lt;br /&gt;
&lt;br /&gt;
The price may seem a bit high if you compare Samsung’s VR headset and controller with Google’s Daydream View. But the Gear VR Controller is a much more sophisticated gadget with better design and features. The Samsung Gear VR Controller is a high-end gadget available at a low-end price.&lt;br /&gt;
&lt;br /&gt;
It should be noted that, without third-party applications, the Gear VR headset and controller are compatible only with Samsung phones. The Samsung Galaxy S6 and later models can be used with the Gear headset. Samsung’s collaboration with Oculus is sure to speed up the evolution of virtual reality headsets and accessories.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*https://www.engadget.com/2017/04/18/samsung-gear-vr-controller-review/&lt;br /&gt;
*http://www.theverge.com/2017/4/18/15331602/samsung-oculus-gear-vr-motion-controller-review&lt;br /&gt;
*http://www.roadtovr.com/samsung-gear-vr-with-controller-review/ &lt;br /&gt;
*http://www.samsung.com/global/galaxy/gear-vr/&lt;br /&gt;
*https://github.com/ShimuraWorkshop/Gear-VR-Controller-Motion-Pointer-for-Windows/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Input Devices]] [[Category:Samsung Gear VR]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Samsung_Gear_VR_Controller&amp;diff=36427</id>
		<title>Samsung Gear VR Controller</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Samsung_Gear_VR_Controller&amp;diff=36427"/>
		<updated>2025-09-05T14:57:29Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Device Infobox&lt;br /&gt;
|image={{#ev:youtube|o4MXRwB04KI|350}}&lt;br /&gt;
|VR/AR=&lt;br /&gt;
|Type=[[Input Device]], [[Motion Tracker]]&lt;br /&gt;
|Subtype=[[Hands/Fingers Tracking]]&lt;br /&gt;
|Platform=[[Samsung Gear VR]]&lt;br /&gt;
|Creator=&lt;br /&gt;
|Developer=[[Samsung]]&lt;br /&gt;
|Manufacturer=&lt;br /&gt;
|Operating System=[[Android]]&lt;br /&gt;
|Versions=&lt;br /&gt;
|Requires=&lt;br /&gt;
|Predecessor=&lt;br /&gt;
|Successor=&lt;br /&gt;
|CPU=&lt;br /&gt;
|GPU=&lt;br /&gt;
|HPU=&lt;br /&gt;
|Memory=&lt;br /&gt;
|Storage=&lt;br /&gt;
|Display=&lt;br /&gt;
|Resolution=&lt;br /&gt;
|Pixel Density=&lt;br /&gt;
|Refresh Rate=&lt;br /&gt;
|Persistence=&lt;br /&gt;
|Precision=&lt;br /&gt;
|Field of View=&lt;br /&gt;
|Optics=&lt;br /&gt;
|Tracking=3DOF&lt;br /&gt;
|Rotational Tracking=IMUs&lt;br /&gt;
|Positional Tracking=&lt;br /&gt;
|Update Rate=&lt;br /&gt;
|Tracking Volume=&lt;br /&gt;
|Latency=&lt;br /&gt;
|Audio=&lt;br /&gt;
|Camera=&lt;br /&gt;
|Sensors=&lt;br /&gt;
|Input=trackpad, trigger, 4 buttons&lt;br /&gt;
|Connectivity=&lt;br /&gt;
|Power=&lt;br /&gt;
|Weight=&lt;br /&gt;
|Size=&lt;br /&gt;
|Cable Length=&lt;br /&gt;
|Release Date=April 21, 2017&lt;br /&gt;
|Price=$39&lt;br /&gt;
|Website=&lt;br /&gt;
}}&lt;br /&gt;
[[Samsung]] and [[Oculus]] have jointly developed a [[Input Devices|motion controller]] for the [[Gear VR]] [[headset]]. Before the launch of the [[Gear VR Controller]], users had two ways to control the virtual world: a Bluetooth gamepad, or the trackpad and controls on the headset itself. Both methods offered limited features and functionality. The new Samsung Gear VR Controller offers users a more interactive, immersive, and intuitive virtual reality experience.&lt;br /&gt;
&lt;br /&gt;
Samsung launched the controller along with the company’s latest Gear headset. It’s believed that the controller is compatible with all previous versions of the Gear VR headsets, except the earliest release. The Samsung Gear VR Controller can be bought along with the virtual reality headset or separately.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
The Samsung Gear VR Controller is no bigger than the [[Daydream View Controller]], but it’s designed to be more comfortable, practical, and efficient. The more ergonomic design allows users to have long gaming sessions without straining their hands and fingers. &lt;br /&gt;
&lt;br /&gt;
Similarities between the Gear VR Controller and the [[Oculus’s Touch]] gamepad are unmissable, even though the Oculus gamepad has a better and smoother finish. The Samsung Gear VR Controller has a smooth texture that’s easy to touch and hold. There is no denying that Samsung has put a lot of thought and hard work into making the perfect controller for its phone-supported virtual reality headset.&lt;br /&gt;
&lt;br /&gt;
==The Controls==&lt;br /&gt;
When users hold the controller, the thumb naturally rests on the trackpad, which also acts as a button. Below the trackpad, the Home button sits on the right and the Back button on the left. Underneath the two buttons are the +/- volume controls. On the rear side of the controller, where the index finger naturally rests, there is a trigger button. The controller fits the hand well; the user need not awkwardly move the fingers to reach the buttons.&lt;br /&gt;
&lt;br /&gt;
The index-finger trigger is the most interesting feature of the Samsung Gear VR Controller. The trigger serves more than one function: although its basic use is shooting, it can also be used to make the player hold an object in the virtual world. Normally, shooting and holding functions are seen only in high-end virtual reality systems such as the HTC Vive and Oculus Rift.&lt;br /&gt;
&lt;br /&gt;
==Tracking==&lt;br /&gt;
The Gear VR Controller has [[IMU]]s for [[rotational tracking]] only. Unlike the [[Oculus Touch]], it does not have [[positional tracking]]. The controller does improve the previously [[gaze]]-only input of the Gear VR. If the controller drifts, you can hold down the Home button at any time to quickly recenter it.&lt;br /&gt;
&lt;br /&gt;
==Setting Up the Gear VR Controller==&lt;br /&gt;
Samsung Gear VR Controller setup involves several steps, making the process slightly more complicated than for other controllers. Begin by inserting 2 AAA batteries into the controller and pairing it over Bluetooth.&lt;br /&gt;
&lt;br /&gt;
The controller cannot be used right away: it first needs to be calibrated. The Oculus Home app provides a set of tests and instructions to help users calibrate the controller. The user will be asked to place the controller on a flat surface, wave it in the air in a specific manner, and perform a few other simple tasks. Once the controller is calibrated, the user is free to use it with any compatible app.&lt;br /&gt;
&lt;br /&gt;
==Supporting Apps==&lt;br /&gt;
As of today, around 20 Gear VR Controller-compatible apps can be found in the Oculus store. Oculus has promised to increase the number of compatible apps to 50 in the coming months. At present, around 700 apps are compatible with the Samsung Gear VR headset, so it might take some time for the controller to support all of them. [[Night Sky]], [[Star Chart]], and [[Drop Dead]] are just some of the compatible apps made available for the Gear Controller.&lt;br /&gt;
&lt;br /&gt;
Some experts compare the Gear Controller to Daydream’s remote. Neither gadget is a high-end controller, but the motion tracking, response, and accuracy of both are excellent.&lt;br /&gt;
&lt;br /&gt;
==Oculus Home==&lt;br /&gt;
The Samsung Gear VR Controller’s use extends beyond games and apps. Inside the Oculus Home interface, the controller can be used as a navigation tool. The Gear Controller makes menu selection and internet browsing an enjoyable experience. The higher-resolution picture quality offered by Oculus makes web page browsing and reading fun.&lt;br /&gt;
&lt;br /&gt;
==PC and Windows Support==&lt;br /&gt;
PC and Windows support mainly depends on third-party applications like [https://github.com/ShimuraWorkshop/Gear-VR-Controller-Motion-Pointer-for-Windows/ Gear VR Controller Motion Pointer for Windows]. Without the need for the Gear VR headset or a mobile phone, such applications, with their button and motion remapping capability, can convert the controller into an air mouse, motion pointer, wireless presenter, or even a gyro and motion gamepad. Light gun emulation is possible if supported by games or emulators.&lt;br /&gt;
&lt;br /&gt;
These applications open the door to adapting the lightweight controller to different flat-screen, non-VR settings. For example, the controller can support gyro aiming or light gun emulation in titles such as the THE HOUSE OF THE DEAD Remake series; classic rail shooters such as Virtua Cop and Time Crisis running on the MAME emulator; Virtua Cop 3 on the Cxbx-Reloaded emulator; PS3 games like Time Crisis: Razing Storm (Time Crisis 4 Arcade Ver., Razing Storm, Deadstorm Pirates) on the RPCS3 emulator; and Time Crisis 5, Operation GHOST, and others on the TeknoParrot emulator.&lt;br /&gt;
&lt;br /&gt;
Some of these applications also make Bluetooth pairing simple, with no manual pairing needed.&lt;br /&gt;
&lt;br /&gt;
==Cost==&lt;br /&gt;
The [[Samsung Gear VR]] headset and the controller together cost around $129. The VR controller alone can be bought for $39. The company is offering the virtual reality controller for free to those who pre-order Samsung’s Galaxy S8 phone. &lt;br /&gt;
&lt;br /&gt;
The price may seem a bit high if you compare Samsung’s VR headset and controller with Google’s Daydream View. But the Gear VR Controller is a much more sophisticated gadget with better design and features. The Samsung Gear VR Controller is a high-end gadget available at a low-end price.&lt;br /&gt;
&lt;br /&gt;
It should be noted that, without third-party applications, the Gear VR headset and controller are compatible only with Samsung phones. The Samsung Galaxy S6 and later models can be used with the Gear headset. Samsung’s collaboration with Oculus is sure to speed up the evolution of virtual reality headsets and accessories.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
*https://www.engadget.com/2017/04/18/samsung-gear-vr-controller-review/&lt;br /&gt;
*http://www.theverge.com/2017/4/18/15331602/samsung-oculus-gear-vr-motion-controller-review&lt;br /&gt;
*http://www.roadtovr.com/samsung-gear-vr-with-controller-review/ &lt;br /&gt;
*http://www.samsung.com/global/galaxy/gear-vr/&lt;br /&gt;
*https://github.com/ShimuraWorkshop/Gear-VR-Controller-Motion-Pointer-for-Windows/&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Input Devices]] [[Category:Samsung Gear VR]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Seated_VR&amp;diff=36393</id>
		<title>Seated VR</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Seated_VR&amp;diff=36393"/>
		<updated>2025-08-19T21:31:00Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Seated VR and main HMD’s */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Seated VR (figure 1) is a form of experiencing [[virtual reality]] in which the user is sitting down. It contrasts with standing VR or [[room-scale VR]], which require the user to be standing or moving around in a specified area. Seated VR experiences commonly use a chair, and seated VR is seen as a more relaxed way of experiencing VR.&lt;br /&gt;
&lt;br /&gt;
All of the main headsets on the market, the [[Oculus Rift]], [[HTC Vive]] and [[PlayStation VR]], allow for seated VR, generally using mouse and keyboard or a gamepad instead of motion-based controllers &amp;lt;ref name=”1”&amp;gt; Holly, R. (2016). Can you enjoy the HTC Vive sitting down? Retrieved from https://www.vrheads.com/can-you-enjoy-htc-vive-sitting-down&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
VR game developers can take into account the fact that users will experience their games sitting down by, for example, placing the player inside a cockpit. This creates a deeper sense of immersion, since the position of the player is matched with that of their virtual reality avatar &amp;lt;ref name=”2”&amp;gt; Allen, D. (2016). How to create comfortable seated locomotion in VR. Retrieved from http://www.blockinterval.com/project-updates/2016/4/4/how-we-achieved-comfortable-locomotion-in-life-of-lon&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:Seated VR.jpg|thumb|1. Seated VR (Image: vrperception.com)]]&lt;br /&gt;
&lt;br /&gt;
==Seated VR and main HMDs==&lt;br /&gt;
The [[HTC Vive]] allows the use of VR apps that are designed for seated experiences &amp;lt;ref name=”3”&amp;gt; Vive. Will VR apps for seated/standing-only experiences work with room-scale setup? Retrieved from https://www.vive.com/us/support/category_howto/839445.html&amp;lt;/ref&amp;gt;, although it is most well-known for its room-scale VR. Indeed, HTC and [[Valve]] are investing in room-scale being the standard for VR, and so their system comes out-of-the-box with motion controls, a tracking system, and a boundary system &amp;lt;ref name=”4”&amp;gt; Lang, B. (2016). HTC show Vive pre working great for seated VR at CES. Retrieved from http://www.roadtovr.com/htc-shows-vive-pre-working-great-for-seated-vr-at-ces/&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Oscillada, J. M. (2017). Oculus introduces Guardian, a boundary system for Touch. Retrieved from http://virtualrealitytimes.com/2017/02/18/oculus-guardian-boundary-system/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[Steam VR]] has a function that shows the users if a gamepad can be used with a game, or if the [[Vive]] controllers are necessary. For Vive, in most cases, the gamepad is not the preferred setup for play &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
The first official demonstration of seated VR was on the original Vive development kit during EGX 2015. In 2016, HTC gave another demonstration with the Vive Pre at CES 2016, with [[Elite Dangerous]] as a seated experience. The showcase had two [[Lighthouse]] base stations with three seated VR rigs equipped with gaming chairs and HOTAS controls. The seated experience has been reported to be on the same level as the [[Oculus Rift]], without any noticeable differences in head-tracking performance &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt; VRperception. Seated HTC Vive experiences with one Lighthouse station is possible. Retrieved from https://vrperception.com/2016/03/08/seated-htc-vive-experiences-with-one-lighthouse-station-is-possible/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Oculus’s strategy has been different, betting on the seated experience as the base level for VR. According to Palmer Luckey, founder of [[Oculus VR]], this choice was made for “reasons of practicality, not functionality.” &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; Indeed, this mode avoids the specific space requirement of a room-scale setup. Although Oculus introduced the Oculus Rift without hand-tracking input, the Touch controllers were released in December 2016. This, along with improvements to the tracking system ([[Constellation]]) and a boundary system ([[Oculus Guardian System]]), has since allowed for [[room-scale VR]] on this device &amp;lt;ref name=”7”&amp;gt; Borrow the Light Studios. Room scale vs. seated VR. Retrieved from http://www.borrowedlightvr.com/2016/02/29/room-scale-vs-seated-vr/&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt; James, P. (2017). Oculus Rift &amp;amp; Touch 1.11 update brings improved Touch roomscale &amp;amp; multi-sensor support. Retrieved from http://www.roadtovr.com/oculus-rift-touch-1-11-update-brings-improved-touch-roomscale-multi-sensor-support/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With the [[PlayStation VR]] (PS VR), the focus is on seated play for its games. It is the cheapest VR headset, and so it could shape how many users experience VR for the first time. While some tech demos have allowed users to play while standing up, the planned PS VR titles recommend that users remain seated. This seems to be due to the inability of the PlayStation camera to track a large enough area &amp;lt;ref&amp;gt; Wan, S. (2016). Sony Playstation VR will focus on seated play. Retrieved from http://www.eteknix.com/sony-playstation-vr-will-focus-on-seated-play/&amp;lt;/ref&amp;gt;. Since its release, the PS VR has been praised for its comfort, affordability, and solid lineup of games. The same cannot be said of the tracking system and Move controllers, which hold the device back. Some journalists have referred to it as a great seated VR experience, but that “it starts to show blemishes if you attempt to get up and active during a game.” &amp;lt;ref&amp;gt; Jagneaux, D. (2016). How ‘The Brookhaven Experiment’ developers achieved 360 ‘Roomscale’ gameplay on PS VR. Retrieved from https://uploadvr.com/brookhaven-psvr-roomscale/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A way to improve the seated VR experience is to have VR-specific seating that allows for full rotation without the risk of getting tangled in the headset wires. This setup, along with a proper tracking area, would allow for a 360-degree seated experience &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;. An example of such a chair is the Roto VR chair (figure 2), which is designed to work with all available head-mounted displays &amp;lt;ref&amp;gt; Roto. Interactive virtual reality seat. Retrieved from http://www.rotovr.com/about-roto-vr&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[[File:Roto VR.jpg|thumb|2. Roto VR chair (image: rotovr.com)]]&lt;br /&gt;
&lt;br /&gt;
==Selected seated VR games==&lt;br /&gt;
&lt;br /&gt;
* American Truck Simulator&lt;br /&gt;
&lt;br /&gt;
* [[Elite Dangerous]]&lt;br /&gt;
&lt;br /&gt;
* Jack Assault&lt;br /&gt;
&lt;br /&gt;
* Lucky’s Tale&lt;br /&gt;
&lt;br /&gt;
* [[Project CARS]]&lt;br /&gt;
&lt;br /&gt;
* Revive&lt;br /&gt;
&lt;br /&gt;
* [[Vector 36]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Answers_About_Dog_Grooming&amp;diff=36387</id>
		<title>Answers About Dog Grooming</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Answers_About_Dog_Grooming&amp;diff=36387"/>
		<updated>2025-08-09T21:05:33Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User:DinahClick54754&amp;diff=36385</id>
		<title>User:DinahClick54754</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User:DinahClick54754&amp;diff=36385"/>
		<updated>2025-08-04T06:30:52Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User:HughHeinig38&amp;diff=36384</id>
		<title>User:HughHeinig38</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User:HughHeinig38&amp;diff=36384"/>
		<updated>2025-08-04T06:30:46Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=File:Vuzix_horizontal_logo.gif&amp;diff=36383</id>
		<title>File:Vuzix horizontal logo.gif</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=File:Vuzix_horizontal_logo.gif&amp;diff=36383"/>
		<updated>2025-08-04T06:29:23Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Source: https://www.xvrwiki.org/wiki/File:Vuzix_horizontal_logo.gif&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Source: https://www.xvrwiki.org/wiki/File:Vuzix_horizontal_logo.gif&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Vuzix&amp;diff=36382</id>
		<title>Vuzix</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Vuzix&amp;diff=36382"/>
		<updated>2025-08-04T06:28:53Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Copy from https://www.xvrwiki.org/wiki/Vuzix&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Vuzix horizontal logo.gif|thumb|Vuzix logo]]&lt;br /&gt;
&#039;&#039;&#039;Vuzix&#039;&#039;&#039; is a company in New York that sells [[waveguide]]-based head-mounted displays. It has developed waveguide-based optical see through glasses touted as having [[augmented reality]] capability.&lt;br /&gt;
&lt;br /&gt;
Vuzix was previously named Icuiti.&lt;br /&gt;
&lt;br /&gt;
Vuzix is a publicly traded company. [[Intel]] bought a roughly 30 percent stake in Vuzix for about 25 million dollars.&amp;lt;ref name=&amp;quot;i814&amp;quot;&amp;gt;{{cite web | last=Ltd | first=SPIE Europe | title=Intel invests in Google Glass rival | website=optics.org - The Business of Photonics | url=https://optics.org/news/6/1/2 | access-date=2024-05-29}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Products==&lt;br /&gt;
[[File:Wrap 1200DXAR.png|thumb|Vuzix 1200DXAR]]&lt;br /&gt;
* [[iWear VR920]]&lt;br /&gt;
* iWear AV920&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* iWear AV310 Widescreen&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* iWear AV230 XL+&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* iWear AV230&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* iWear IP230&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* iWear DV920&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* iWear M920&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* Wrap 230&amp;lt;ref name=&amp;quot;p044&amp;quot;/&amp;gt;&lt;br /&gt;
* Wrap 310&amp;lt;ref name=&amp;quot;p044&amp;quot;&amp;gt;{{cite web | title=Discontinued Products | website=vuzix.com | date=2012-04-12 | url=http://www.vuzix.com/consumer/discontinued_products.html | archive-url=https://web.archive.org/web/20120522150114/http://www.vuzix.com/consumer/discontinued_products.html | archive-date=2012-05-22 | url-status=dead | access-date=2025-02-04}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* Wrap 310XL&amp;lt;ref name=&amp;quot;c029&amp;quot;&amp;gt;{{cite web | title=3D Videos: Side-by-Side Format | website=vuzix.com | date=2012-01-07 | url=http://www.vuzix.com/consumer/3d_games_videos.html | archive-url=https://web.archive.org/web/20120514183146/http://www.vuzix.com/consumer/3d_games_videos.html | archive-date=2012-05-14 | url-status=dead | access-date=2025-02-04}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Vuzix Wrap 920]]&amp;lt;ref name=&amp;quot;c029&amp;quot;/&amp;gt;&lt;br /&gt;
* Wrap 920AR&amp;lt;ref name=&amp;quot;c029&amp;quot;/&amp;gt;&lt;br /&gt;
* Wrap 1200&amp;lt;ref name=&amp;quot;c029&amp;quot;/&amp;gt;&lt;br /&gt;
* Vuzix Wrap 1200VR&amp;lt;ref name=&amp;quot;a437&amp;quot;&amp;gt;{{cite web | title=Wayback Machine | website=vuzix.com | date=2012-02-16 | url=http://www.vuzix.com/site/_news/2011_News/vuzix-wrap-1200VR-availability-release.pdf | archive-url=https://web.archive.org/web/20120317125016/http://www.vuzix.com/site/_news/2011_News/vuzix-wrap-1200VR-availability-release.pdf | archive-date=2012-03-17 | url-status=dead | access-date=2025-02-04}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Vuzix M100]]&lt;br /&gt;
* [[Vuzix M300]]&amp;lt;ref name=pcmag&amp;gt;https://www.pcmag.com/news/hands-on-vuzixs-no-nonsense-ar-smart-glasses&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Vuzix M2000AR]], a single-eye near-eye display&lt;br /&gt;
* [[Vuzix M3000]]&amp;lt;ref name=pcmag/&amp;gt;&lt;br /&gt;
* [[Vuzix M400]]&lt;br /&gt;
* [[Vuzix M400C]]&lt;br /&gt;
* [[Vuzix Blade]]&lt;br /&gt;
* [[Vuzix Shield]]&lt;br /&gt;
* [[Vuzix Blade 2]]&lt;br /&gt;
* [[Vuzix Z100]]&lt;br /&gt;
* Star 1200&amp;lt;ref name=&amp;quot;y350&amp;quot;&amp;gt;{{cite web | title=Vuzix STAR 1200 Augmented Reality System | website=vuzix.com | date=2012-01-06 | url=http://www.vuzix.com/ar/products_star1200.html | archive-url=https://web.archive.org/web/20120401043742/http://www.vuzix.com/ar/products_star1200.html | archive-date=2012-04-01 | url-status=dead | access-date=2025-02-04}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
* [[Vuzix Star 1200XLD]]&lt;br /&gt;
* Wrap 1200DXAR, a video passthrough head-mounted display.&amp;lt;ref name=&amp;quot;p767&amp;quot;&amp;gt;{{cite web | title=Head Mounted Displays | website=Inition | date=2015-02-26 | url=https://www.inition.co.uk/product/vuzix-wrap-1200dxar/ | access-date=2024-05-29}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Technology==&lt;br /&gt;
Vuzix makes their waveguides using a surface relief method, based on a process developed by Nokia.&amp;lt;ref name=&amp;quot;w791&amp;quot;&amp;gt;{{cite web | last=Chinnock | first=Chris | title=A Tour of the Vuzix Waveguide Factory | website=Insight Media | date=2019-03-20 | url=https://www.insightmedia.info/a-tour-of-the-vuzix-waveguide-factory/ | access-date=2025-02-09}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The combiner in the Vuzix Blade is made of two waveguides pressed together: one for the blue-green and one for the green-red portions of the visible spectrum.&amp;lt;ref name=&amp;quot;w791&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{Reflist}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Headset companies]]&lt;br /&gt;
[[Category:Optical combiner companies]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Optical_see-through_head-mounted_display&amp;diff=36381</id>
		<title>Optical see-through head-mounted display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Optical_see-through_head-mounted_display&amp;diff=36381"/>
		<updated>2025-08-04T06:28:23Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Optical see-through head-mounted display (OST-HMD)&#039;&#039;&#039;, also called &#039;&#039;&#039;Optical head-mounted display&#039;&#039;&#039; or &#039;&#039;&#039;OHMD&#039;&#039;&#039;, is a type of [[head-mounted display]] that projects images while allowing the user to see through its display. OHMDs are used in [[augmented reality]] (AR). Unlike [[head-mounted display#Virtual Reality HMDs|virtual reality HMDs]], which obscure the user’s view of the real world, OHMDs let the wearer see their surroundings while streaming data and image overlays in front of their eyes.&lt;br /&gt;
&lt;br /&gt;
The focus of OST-HMDs such as the HoloLens and Magic Leap 1 is usually set to about 1 or 2 meters in front of the face.&lt;br /&gt;
&lt;br /&gt;
[[Vuzix]] is a provider of OST-HMDs.&lt;br /&gt;
&lt;br /&gt;
A number of companies have marketed [[waveguide]]s for OST-HMDs, including [[Dispelix]], [[Digilens]], and [[Lumus]].&lt;br /&gt;
&lt;br /&gt;
An optical [[head-mounted display]] can cover one eye, as with [[Google Glass]], or both eyes. Wearers can interact with the projected digital content through input methods such as voice commands, gestures, and controllers.&lt;br /&gt;
&lt;br /&gt;
==Features==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Optical_see-through_head-mounted_display&amp;diff=36380</id>
		<title>Optical see-through head-mounted display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Optical_see-through_head-mounted_display&amp;diff=36380"/>
		<updated>2025-08-04T06:27:24Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Optical see-through head-mounted display (OST-HMD)&#039;&#039;&#039;, also called &#039;&#039;&#039;Optical head-mounted display&#039;&#039;&#039; or &#039;&#039;&#039;OHMD&#039;&#039;&#039;, is a type of [[head-mounted display]] that projects images while allowing the user to see through its display. OHMDs are used in [[augmented reality]] (AR). Unlike [[head-mounted display#Virtual Reality HMDs|virtual reality HMDs]], which obscure the user’s view of the real world, OHMDs let the wearer see their surroundings while streaming data and image overlays in front of their eyes.&lt;br /&gt;
&lt;br /&gt;
The focus of OST-HMDs such as the HoloLens and Magic Leap 1 is usually set to about 1 or 2 meters in front of the face.&lt;br /&gt;
&lt;br /&gt;
[[Vuzix]] is a provider of OST-HMDs.&lt;br /&gt;
&lt;br /&gt;
A number of companies have marketed [[waveguide]]s for OST-HMDs, including [[Dispelix]], [[Digilens]], and [[Lumus]]. However, these are largely not found in products that individuals can purchase without acting on behalf of a larger organization.&lt;br /&gt;
&lt;br /&gt;
An optical [[head-mounted display]] can cover one eye, as with [[Google Glass]], or both eyes. Wearers can interact with the projected digital content through input methods such as voice commands, gestures, and controllers.&lt;br /&gt;
&lt;br /&gt;
==Features==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Microsoft_HoloLens&amp;diff=36379</id>
		<title>Microsoft HoloLens</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Microsoft_HoloLens&amp;diff=36379"/>
		<updated>2025-08-04T06:27:03Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Commands */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Device Infobox&lt;br /&gt;
|image=[[File:microsoft hololens2.jpg|350px]]&lt;br /&gt;
|VR/AR=[[Augmented reality]]&lt;br /&gt;
|Type=[[Optical see-through head-mounted display]]&lt;br /&gt;
|Subtype=[[Standalone AR]]&lt;br /&gt;
|Platform=[[Windows Mixed Reality]]&lt;br /&gt;
|Creator=[[Alex Kipman]]&lt;br /&gt;
|Developer=[[Microsoft]]&lt;br /&gt;
|Manufacturer=Microsoft&lt;br /&gt;
|Operating System=[[Windows 10]]&lt;br /&gt;
|Versions=&lt;br /&gt;
|Requires=Nothing&lt;br /&gt;
|Predecessor=None&lt;br /&gt;
|Successor=[[Microsoft HoloLens 2]]&lt;br /&gt;
|CPU=Intel 32 bit architecture&lt;br /&gt;
|GPU=&lt;br /&gt;
|HPU=[[Holographic processing unit]]&lt;br /&gt;
|Memory=2 GB&lt;br /&gt;
|Storage=64 GB Flash&lt;br /&gt;
|Display=2 HD 16:9 light engines&lt;br /&gt;
|Resolution=Holographic resolution: 2.3M total light points&lt;br /&gt;
|Pixel Density=Holographic density: over 2.5k radiants (light points per radian)&lt;br /&gt;
|Refresh Rate=240Hz (60Hz content rate, each frame consists of four sequential colors: R-G-B-G)&lt;br /&gt;
|Persistence=2.5ms&lt;br /&gt;
|Precision=&lt;br /&gt;
|Field of View=30°H and 17.5°V&lt;br /&gt;
|Optics=See-through holographic lenses (waveguides)&lt;br /&gt;
|Tracking=6DOF&lt;br /&gt;
|Rotational Tracking=[[Gyroscope]], [[Magnetometer]], [[Accelerometer]]&lt;br /&gt;
|Positional Tracking=Depth Camera with 120°×120° FOV, 4 greyscale cameras&lt;br /&gt;
|Update Rate=&lt;br /&gt;
|Latency=Motion to Photon: less than 2ms&lt;br /&gt;
|Audio=Built-in speakers, Audio 3.5mm jack&lt;br /&gt;
|Camera=2MP photo / HD video camera, depth camera, 4 greyscale cameras&lt;br /&gt;
|Sensors=ambient light sensor, array of 4 microphones&lt;br /&gt;
|Input=Gaze, Gesture, Voice, HoloLens Clicker, Keyboard, Mouse&lt;br /&gt;
|Connectivity=WiFi, Bluetooth&lt;br /&gt;
|Power=Battery (2.5 to 5.5 hours per charge)&lt;br /&gt;
|Weight=579g&lt;br /&gt;
|Size=&lt;br /&gt;
|Cable Length=Wireless&lt;br /&gt;
|Release Date=March 30, 2016&lt;br /&gt;
|Price=$3,000 / £2,000&lt;br /&gt;
|Website=[http://www.microsoft.com/microsoft-hololens/en-us Microsoft HoloLens]&lt;br /&gt;
}}&lt;br /&gt;
[[Microsoft HoloLens]] is an [[augmented reality headset]] developed by [[Microsoft]]. It is part of the [[Windows Mixed Reality]] [[AR Platform]] incorporated into the [[Windows 10]] OS. HoloLens is an optical see-through head-mounted display, similar to other [[OHMD]]s (optical head-mounted displays). Unlike the [[Oculus Rift]] and other [[Virtual Reality#Devices|VR Devices]], the eye-piece component of HoloLens is transparent, and the headset requires neither a PC nor a smartphone. It is able to project high-definition (HD) virtual content over real-world objects. &amp;lt;ref name=”one”&amp;gt;Microsoft. Microsoft HoloLens. Retrieved from https://www.microsoft.com/en-us/hololens&amp;lt;/ref&amp;gt; &amp;lt;ref name=”two”&amp;gt;Microsoft. Why HoloLens. Retrieved from https://www.microsoft.com/en-us/hololens/why-hololens&amp;lt;/ref&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
==General Information==&lt;br /&gt;
Microsoft HoloLens is a self-contained Windows 10 computer. It features an HD 3D optical head-mounted display, spatial sound projection, and advanced sensors that allow users to interact with AR applications through head movements, [[#Gesture|gestures]], and [[#Voice|voice]].&lt;br /&gt;
&lt;br /&gt;
HoloLens has various sensors and a high-end CPU and GPU, which Microsoft says gives the headset more processing power than an average laptop. &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The potential uses of the HoloLens are vast, from social apps to games to navigation. Indeed, Microsoft collaborated with NASA in the making of HoloLens, and there is the potential to control the Mars rover Curiosity via the headset, allowing NASA staff to work as if they were on the planet themselves. Microsoft also partnered with Volvo to showcase another possible use: letting customers in car showrooms view different color configurations for their chosen car and see features in action. &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the end of March 2016, holoportation was showcased. The video demonstration showed how it could be possible - through the use of multiple cameras - to use the HoloLens to view a 3D version of a person. &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the HoloLens price is high, it is an impressive piece of hardware and indicates that Microsoft is taking the augmented reality and virtual reality markets seriously. &amp;lt;ref name=”three”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Announcement and Release==&lt;br /&gt;
Microsoft HoloLens was announced during a Windows 10 Event on January 21st, 2015. The Development Edition was released on March 30, 2016, for $3,000 or £2,000. It allowed developers to start making apps and games for the headset. Months later, it became available to anyone with a Microsoft account. During the last quarter of 2016, the program expanded beyond the United States into countries like the United Kingdom, Ireland, France, Germany, Australia and New Zealand. Currently, there’s still no information regarding a consumer edition release date. &amp;lt;ref name=”three”&amp;gt;Sophie, C. (2017). Microsoft HoloLens: Everything you need to know about the $3,000 AR headset. Retrieved from https://www.wareable.com/microsoft/microsoft-hololens-everything-you-need-to-know-about-the-futuristic-ar-headset-735&amp;lt;/ref&amp;gt; &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”five”&amp;gt;Spence, E. (2017). Microsoft HoloLens Review: Winning the reality wars. Retrieved from https://www.forbes.com/sites/ewanspence/2017/01/14/microsoft-hololens-review-experience-review/2/#4053cf3d43f9&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Features==&lt;br /&gt;
Realistic 3D objects can be anchored onto real-life locations. These virtual objects are projected at about 60 cm (near plane) to a few meters.&lt;br /&gt;
&lt;br /&gt;
[[Spatial Mapping]] - scans the environment in real time to create a mesh in an X/Y/Z coordinate system. Objects can be accurately projected onto the mesh.&lt;br /&gt;
&lt;br /&gt;
[[Spatial Audio]] - in-app audio comes from different directions depending on where you are in relation to the virtual object making the sound.&lt;br /&gt;
&lt;br /&gt;
[[#Voice|Voice Recognition]] - recognizes various voice commands.&lt;br /&gt;
&lt;br /&gt;
[[#Gesture|Gesture Recognition]] - recognizes various gesture commands such as the [[Air Tap]].&lt;br /&gt;
&lt;br /&gt;
[[#Gaze|Gaze Recognition]] - HoloLens tracks your gaze.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
===Review===&lt;br /&gt;
&#039;&#039;&#039;Headset and Display&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HoloLens requires neither cords nor phones. It features an optical [[HMD]] on top of a plastic ring that wraps around the head. The plastic ring has a soft foam cushion on the inside. Like other HMDs, HoloLens is front-heavy in weight and feels a bit bulky. HoloLens can be used with most prescription glasses.&lt;br /&gt;
&lt;br /&gt;
The transparent dual displays are made of three layers of glass (red, blue and green). A light engine is mounted above the displays and projects light on the lenses. The tiny corrugated grooves in each layer of glass diffract these light particles, making them bounce around and helping to trick your eyes into perceiving virtual objects at virtual distances.&lt;br /&gt;
&lt;br /&gt;
The [[field of view]] where the virtual objects appear is quite small - 30° horizontal and 17.5° vertical. It is equivalent to a 16:9 monitor with a 15-inch diagonal viewed 2 feet away from your face.&lt;br /&gt;
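&lt;br /&gt;
As a quick sanity check on that equivalence (our arithmetic, not a Microsoft figure), a 16:9 image 24 inches away subtending 30°×17.5° works out to:&lt;br /&gt;
&amp;lt;math&amp;gt;w = 2(24\,\text{in})\tan 15^\circ \approx 12.9\,\text{in}, \quad h = 2(24\,\text{in})\tan 8.75^\circ \approx 7.4\,\text{in}, \quad \sqrt{w^2+h^2} \approx 14.9\,\text{in}&amp;lt;/math&amp;gt;&lt;br /&gt;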
&lt;br /&gt;
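As a sanity check on that comparison (a back-of-the-envelope calculation, not an official Microsoft figure), the width covered by a 30° horizontal FOV at distance d is 2·d·tan(15°); the sketch below works out the equivalent 16:9 monitor size.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
# Equivalent 16:9 monitor for a 30 degree horizontal FOV viewed&lt;br /&gt;
# from 24 inches (2 feet). Illustrative numbers only.&lt;br /&gt;
distance_in = 24.0&lt;br /&gt;
width_in = 2 * distance_in * math.tan(math.radians(30 / 2))&lt;br /&gt;
diagonal_in = width_in * math.hypot(16, 9) / 16  # 16:9 diagonal&lt;br /&gt;
print(round(width_in, 1), round(diagonal_in, 1))  # 12.9 14.8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;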
&#039;&#039;&#039;Sensors&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sensors include head tracking [[IMU]]s (inertial measurement units); a sound capture system consisting of an array of 4 microphones; an energy-efficient depth camera with a 120°×120° [[FOV]]; an RGB 2-megapixel photo / HD video camera; and an ambient light sensor. Additionally, it has 4 greyscale environment-sensing cameras that work with the depth camera to track the head, hands and the surrounding environment.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Processors&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
In addition to a [[CPU]] and [[GPU]], HoloLens has an [[HPU]] ([[holographic processing unit]]). The HPU is a coprocessor dedicated to integrating real-world and virtually generated content. It consolidates and processes all the data from the various sensors - depth camera, microphones and so on - and passes a condensed stream of useful information to the other processors. The HPU thus removes the burden of handling heavy sensor data from the CPU and GPU, allowing them to focus on running content.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Audio&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The [[3D audio|spatial sound system]] consists of 2 small speakers located on the sides of the OHMD, sitting above the ears. Unlike headphones, these speakers do not prevent the user from hearing external sounds. In-app audio comes from different directions depending on where you are in relation to the virtual object making the sound.&lt;br /&gt;
&lt;br /&gt;
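As a toy illustration of that direction dependence (real spatial sound uses HRTFs and continuous head tracking; this sketch only scales left/right speaker gains from the source&#039;s angle relative to the listener):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
def stereo_gains(listener_pos, listener_yaw, source_pos):&lt;br /&gt;
    # Positions are (x, z) ground-plane coordinates in meters.&lt;br /&gt;
    dx = source_pos[0] - listener_pos[0]&lt;br /&gt;
    dz = source_pos[1] - listener_pos[1]&lt;br /&gt;
    azimuth = math.atan2(dx, dz) - listener_yaw  # angle off the nose&lt;br /&gt;
    pan = math.sin(azimuth)  # -1 hard left, +1 hard right&lt;br /&gt;
    # Equal-power panning keeps loudness constant as the source moves.&lt;br /&gt;
    left = math.cos((pan + 1) * math.pi / 4)&lt;br /&gt;
    right = math.sin((pan + 1) * math.pi / 4)&lt;br /&gt;
    return left, right&lt;br /&gt;
&lt;br /&gt;
print(stereo_gains((0, 0), 0.0, (1, 1)))  # source front-right: right gain higher&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;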
&#039;&#039;&#039;Input and Interface&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A pair of buttons responsible for brightness sits above the left ear, while another pair responsible for volume sits above the right ear. In each pair, one button is concave and the other convex, so they can be told apart by touch. There is also a power button. These are the only physical inputs - HoloLens is largely controlled by [[#Voice|voice]], [[#Gesture|gesture]] and [[#Gaze|gaze]], along with [[HoloLens Clicker|a Bluetooth clicker]].&lt;br /&gt;
&lt;br /&gt;
Five LEDs are present on the left side of the OHMD. These LEDs display various system statuses such as power and battery conditions. A microUSB port is present for charging and connection. It is possible to use Microsoft HoloLens while it’s charging over microUSB. &amp;lt;ref name=&amp;quot;four&amp;quot;&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Power and Connectivity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The battery in HoloLens lasts around 2.5 hours during processor-intensive use and around 5.5 hours during regular use.&lt;br /&gt;
&lt;br /&gt;
HoloLens can connect to Wi-Fi networks and pair with Bluetooth-equipped devices.&lt;br /&gt;
&lt;br /&gt;
HoloLens can run any universal Windows 10 app.&lt;br /&gt;
&lt;br /&gt;
===In the Box===&lt;br /&gt;
*HoloLens Development Edition&lt;br /&gt;
*[[HoloLens Clicker]]&lt;br /&gt;
*Carrying case&lt;br /&gt;
*Charger and cable&lt;br /&gt;
*Microfiber cloth&lt;br /&gt;
*Nose pads&lt;br /&gt;
*Overhead strap&lt;br /&gt;
&lt;br /&gt;
==Specifications==&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Part&lt;br /&gt;
!Spec&lt;br /&gt;
|-&lt;br /&gt;
| CPU || Intel 32 bit architecture&lt;br /&gt;
|-&lt;br /&gt;
| GPU || ??&lt;br /&gt;
|-&lt;br /&gt;
|[[HPU]] || Custom-built Microsoft Holographic Processing Unit (HPU 1.0)&lt;br /&gt;
|-&lt;br /&gt;
|RAM || 2 GB&lt;br /&gt;
|-&lt;br /&gt;
|Storage || 64 GB Flash&lt;br /&gt;
|-&lt;br /&gt;
|Display || 2 HD 16:9 light engines&lt;br /&gt;
|-&lt;br /&gt;
|Optics || See-through holographic lenses (waveguides)&lt;br /&gt;
|-&lt;br /&gt;
|[[IPD]] || Automatic pupillary distance calibration&lt;br /&gt;
|-&lt;br /&gt;
|Holographic Resolution || 2.3M total light points&lt;br /&gt;
|-&lt;br /&gt;
|Holographic Density|| &amp;gt;2.5k radiants (light points per radian)&lt;br /&gt;
|-&lt;br /&gt;
|Field of View || 30°H and 17.5°V&lt;br /&gt;
|-&lt;br /&gt;
|Cameras || 2-megapixel photo / HD video camera, depth camera, 4 greyscale environment understanding cameras&lt;br /&gt;
|-&lt;br /&gt;
|Sensors || ambient light sensor, 4 microphones&lt;br /&gt;
|-&lt;br /&gt;
|[[Tracking]] || 6 degrees of freedom&lt;br /&gt;
|-&lt;br /&gt;
|[[Rotational tracking]] || [[Gyroscope]], [[Magnetometer]], [[Accelerometer]]&lt;br /&gt;
|-&lt;br /&gt;
|[[Positional tracking]] || depth camera, 4 greyscale environment understanding cameras&lt;br /&gt;
|-&lt;br /&gt;
|Update Rate || &lt;br /&gt;
|-&lt;br /&gt;
|[[#Tracking volume|Tracking Volume]] || &lt;br /&gt;
|-&lt;br /&gt;
|Latency || Motion to Photon: less than 2ms&lt;br /&gt;
|-&lt;br /&gt;
|Audio || Built-in speakers, Audio 3.5mm jack&lt;br /&gt;
|-&lt;br /&gt;
|Connectivity || Wi-Fi 802.11ac, Micro USB 2.0, Bluetooth 4.1 LE&lt;br /&gt;
|-&lt;br /&gt;
|Power || Battery: 2-3 hours of active use, up to 2 weeks of standby time&lt;br /&gt;
|-&lt;br /&gt;
|Weight || 579g&lt;br /&gt;
|-&lt;br /&gt;
|User Input || [[Gaze]], [[voice]], [[gesture]]&lt;br /&gt;
|-&lt;br /&gt;
|Buttons || Brightness, volume, power&lt;br /&gt;
|-&lt;br /&gt;
|OS || Windows 10&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Setup Tutorial==&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
===Gaze===&lt;br /&gt;
HoloLens tracks your gaze. When you perform a gesture such as the air tap, look at the part of the virtual object where you want to place your tap.&lt;br /&gt;
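&lt;br /&gt;
Conceptually, gaze targeting is a ray cast from the head&#039;s position along its forward direction until it hits a hologram or the spatial-mapping mesh. A minimal sketch of the idea (plane intersection only; this is not the actual HoloLens API):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def gaze_target(head_pos, head_forward, plane_point, plane_normal):&lt;br /&gt;
    # Intersect the gaze ray with the plane of a hologram.&lt;br /&gt;
    denom = np.dot(plane_normal, head_forward)&lt;br /&gt;
    if abs(denom) &amp;lt; 1e-6:&lt;br /&gt;
        return None  # gaze is parallel to the plane&lt;br /&gt;
    t = np.dot(plane_normal, plane_point - head_pos) / denom&lt;br /&gt;
    return head_pos + t * head_forward if t &amp;gt; 0 else None&lt;br /&gt;
&lt;br /&gt;
# User standing 2 m from a wall-anchored window, looking straight ahead.&lt;br /&gt;
hit = gaze_target(np.array([0.0, 1.7, 0.0]), np.array([0.0, 0.0, 1.0]),&lt;br /&gt;
                  np.array([0.0, 1.5, 2.0]), np.array([0.0, 0.0, -1.0]))&lt;br /&gt;
print(hit)  # hits the wall at [0, 1.7, 2]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;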
===Gesture===&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Action&lt;br /&gt;
!Description&lt;br /&gt;
!Effect&lt;br /&gt;
|-&lt;br /&gt;
|[[Air Tap]] || With your index finger pointed upward, bend it forward || Simulates a mouse click in a desktop environment; activates the targeted interactive component.&lt;br /&gt;
|-&lt;br /&gt;
|Home/Start || Opening your hand with palm facing up || Simulates the Windows key on a keyboard or Home button on a Windows Tablet. Opens up the holographic start menu. &lt;br /&gt;
|}&lt;br /&gt;
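&lt;br /&gt;
The air tap can be pictured as a two-state recognizer: index finger raised (ready), then bent forward (tap). A toy sketch of that state machine (the thresholds and pitch input are hypothetical; the real recognizer runs on the headset&#039;s hand-tracking data):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
READY, PRESSED = 0, 1&lt;br /&gt;
&lt;br /&gt;
def update(state, finger_pitch_deg, on_tap):&lt;br /&gt;
    # finger_pitch_deg: 0 = finger straight up, 90 = bent fully forward&lt;br /&gt;
    # (hypothetical input from hand tracking).&lt;br /&gt;
    if state == READY and finger_pitch_deg &amp;gt; 60:&lt;br /&gt;
        on_tap()  # fire once on the down-bend, like a mouse click&lt;br /&gt;
        return PRESSED&lt;br /&gt;
    if state == PRESSED and finger_pitch_deg &amp;lt; 20:&lt;br /&gt;
        return READY  # finger raised again, re-arm&lt;br /&gt;
    return state&lt;br /&gt;
&lt;br /&gt;
state = READY&lt;br /&gt;
for pitch in [5, 30, 70, 70, 10, 65]:  # two taps in this trace&lt;br /&gt;
    state = update(state, pitch, lambda: print(&#039;tap&#039;))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;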
&lt;br /&gt;
===Voice===&lt;br /&gt;
Microsoft&#039;s virtual assistant [[Cortana]] is incorporated into the HoloLens. Users can interact with it using natural language commands.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Action&lt;br /&gt;
!Effect&lt;br /&gt;
|-&lt;br /&gt;
|&amp;quot;Follow me&amp;quot; || The active window follows the user along the wall.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
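&lt;br /&gt;
Under the hood, simple commands like this are typically registered as phrase-to-callback mappings. A minimal sketch of the pattern (illustrative only; real HoloLens apps register keywords with the Windows speech APIs):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
commands = {&lt;br /&gt;
    &#039;follow me&#039;: lambda: print(&#039;window now follows the user&#039;),&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
def on_speech(phrase):&lt;br /&gt;
    # Dispatch a recognized phrase to its handler, if one is registered.&lt;br /&gt;
    handler = commands.get(phrase.strip().lower())&lt;br /&gt;
    if handler:&lt;br /&gt;
        handler()&lt;br /&gt;
&lt;br /&gt;
on_speech(&#039;Follow me&#039;)  # window now follows the user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;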
&lt;br /&gt;
==Input Devices==&lt;br /&gt;
&#039;&#039;&#039;[[HoloLens Clicker]]&#039;&#039;&#039; - a small clicker with a loop that wraps around your middle or index finger. It is held with the microUSB port toward your body and your thumb resting on top of the clicker, in the indentation. The clicker features a single button and [[rotational tracking]]. It allows a user to click and scroll with minimal hand motion, as a replacement for the air-tap gesture.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Bluetooth Mouse and Keyboard&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Apps== &lt;br /&gt;
HoloLens can project various Windows 10 apps, programs, and browsers onto walls and other objects. One example Microsoft showed was Windows-like interfaces projected onto walls and furniture. Users can interact with these projections with gaze, gestures and voice commands.&lt;br /&gt;
&lt;br /&gt;
[[SketchUp]] &lt;br /&gt;
&lt;br /&gt;
[[Holo Studio]] - Allows the user to create 3D models used for [[3D Printing]]. In addition to gesture commands, it also accepts voice commands.&lt;br /&gt;
&lt;br /&gt;
[[Minecraft]] - An augmented reality version of Minecraft.&lt;br /&gt;
&lt;br /&gt;
[[Project Xray]] - A [[mixed reality]] shooter game.&lt;br /&gt;
&lt;br /&gt;
[[Actiongram]] - Place 3D models into real world environments and record videos with them, mixing reality with digital overlays.&lt;br /&gt;
&lt;br /&gt;
[[HoloGuide]] - Guides a user through low visibility areas.&lt;br /&gt;
&lt;br /&gt;
[[HoloHear]] - Instantly translates speech into sign language for deaf people.&lt;br /&gt;
&lt;br /&gt;
[[Teomirn]] - Overlays prompts and instructions on a real piano to help people learn how to play. &amp;lt;ref name=&amp;quot;three&amp;quot;&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Developer==&lt;br /&gt;
[[Windows Mixed Reality]] is Microsoft&#039;s AR platform incorporated into the Windows 10 OS. The Windows Mixed Reality API is implemented in all devices running Windows 10, including smartphones and tablets.&lt;br /&gt;
&lt;br /&gt;
To develop for HoloLens, you need a Windows 10 PC capable of running [[Visual Studio 2015]] and [[Unity]].&lt;br /&gt;
&lt;br /&gt;
===Tools===&lt;br /&gt;
[[Unity]]&lt;br /&gt;
&lt;br /&gt;
[[Visual Studio 2015]]&lt;br /&gt;
&lt;br /&gt;
[[Windows SDK]]&lt;br /&gt;
&lt;br /&gt;
[[Windows Device Portal]]&lt;br /&gt;
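&lt;br /&gt;
The [[Windows Device Portal]] is reached over the headset&#039;s Wi-Fi or USB connection and also exposes a REST API. A quick connectivity check might look like the following (illustrative; the device address, credentials and available endpoints depend on your setup):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import requests  # third-party HTTP client&lt;br /&gt;
&lt;br /&gt;
PORTAL = &#039;https://192.168.1.42&#039;  # hypothetical HoloLens address&lt;br /&gt;
&lt;br /&gt;
# Query basic OS information from the Device Portal REST API.&lt;br /&gt;
resp = requests.get(PORTAL + &#039;/api/os/info&#039;,&lt;br /&gt;
                    auth=(&#039;username&#039;, &#039;password&#039;),  # portal credentials&lt;br /&gt;
                    verify=False)  # portal uses a self-signed certificate&lt;br /&gt;
resp.raise_for_status()&lt;br /&gt;
print(resp.json())&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;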
====HoloLens Emulator====&lt;br /&gt;
The [[HoloLens Emulator]] allows the user to test holographic apps on their PC without the need for a physical HoloLens. The human and environmental inputs that would usually be read by the sensors on the HoloLens are instead simulated using your keyboard, mouse, or Xbox controller. Apps don&#039;t need to be modified to run on the emulator and don&#039;t know that they aren&#039;t running on a real HoloLens. &amp;lt;ref&amp;gt;Microsoft. Using the HoloLens emulator. Retrieved from https://developer.microsoft.com/en-us/windows/holographic/using_the_hololens_emulator&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
&#039;&#039;&#039;January 21, 2015&#039;&#039;&#039; - Microsoft HoloLens was officially announced.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;April 28, 2015&#039;&#039;&#039; - First live stage presentation of the HoloLens.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;March 30, 2016&#039;&#039;&#039; - The Development Edition of the HoloLens was officially released.&lt;br /&gt;
&lt;br /&gt;
==Images==&lt;br /&gt;
[[File:microsoft hololens3.jpg|300px]] [[File:microsoft hololens4.jpg|300px]] [[File:microsoft hololens5.jpg|300px]] [[File:microsoft hololens6.jpg|300px]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Augmented Reality Devices]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Microsoft_HoloLens&amp;diff=36376</id>
		<title>Microsoft HoloLens</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Microsoft_HoloLens&amp;diff=36376"/>
		<updated>2025-08-04T06:26:21Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Remove bullshit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Device Infobox&lt;br /&gt;
|image=[[File:microsoft hololens2.jpg|350px]]&lt;br /&gt;
|VR/AR=[[Augmented reality]]&lt;br /&gt;
|Type=[[Optical see-through head-mounted display]]&lt;br /&gt;
|Subtype=[[Standalone AR]]&lt;br /&gt;
|Platform=[[Windows Mixed Reality]]&lt;br /&gt;
|Creator=[[Alex Kipman]]&lt;br /&gt;
|Developer=[[Microsoft]]&lt;br /&gt;
|Manufacturer=Microsoft&lt;br /&gt;
|Operating System=[[Windows 10]]&lt;br /&gt;
|Versions=&lt;br /&gt;
|Requires=Nothing&lt;br /&gt;
|Predecessor=None&lt;br /&gt;
|Successor=[[Microsoft HoloLens 2]]&lt;br /&gt;
|CPU=Intel 32 bit architecture&lt;br /&gt;
|GPU=&lt;br /&gt;
|HPU=[[Holographic processing unit]]&lt;br /&gt;
|Memory=2 GB&lt;br /&gt;
|Storage=64 GB Flash&lt;br /&gt;
|Display=2 HD 16:9 light engines&lt;br /&gt;
|Resolution=Holographic resolution: 2.3M total light points&lt;br /&gt;
|Pixel Density=Holographic density: over 2.5k radiants (light points per radian)&lt;br /&gt;
|Refresh Rate=240Hz (60Hz content rate, each frame consists of four sequential colors: R-G-B-G)&lt;br /&gt;
|Persistence=2.5ms&lt;br /&gt;
|Precision=&lt;br /&gt;
|Field of View=30°H and 17.5°V&lt;br /&gt;
|Optics=See-through holographic lenses (waveguides)&lt;br /&gt;
|Tracking=6DOF&lt;br /&gt;
|Rotational Tracking=[[Gyroscope]], [[Magnetometer]], [[Accelerometer]]&lt;br /&gt;
|Positional Tracking=Depth Camera with 120°×120° FOV, 4 greyscale cameras&lt;br /&gt;
|Update Rate=&lt;br /&gt;
|Latency=Motion to Photon: less than 2ms&lt;br /&gt;
|Audio=Built-in speakers, Audio 3.5mm jack&lt;br /&gt;
|Camera=2MP photo / HD video camera, depth camera, 4 greyscale cameras&lt;br /&gt;
|Sensors=ambient light sensor, array of 4 microphones&lt;br /&gt;
|Input=Gaze, Gesture, Voice, HoloLens Clicker, Keyboard, Mouse&lt;br /&gt;
|Connectivity=WiFi, Bluetooth&lt;br /&gt;
|Power=Battery (2.5 to 5.5 hours per charge)&lt;br /&gt;
|Weight=579g&lt;br /&gt;
|Size=&lt;br /&gt;
|Cable Length=Wireless&lt;br /&gt;
|Release Date=March 30, 2016&lt;br /&gt;
|Price=$3,000 / £2,000&lt;br /&gt;
|Website=[http://www.microsoft.com/microsoft-hololens/en-us Microsoft HoloLens]&lt;br /&gt;
}}&lt;br /&gt;
[[Microsoft HoloLens]] is an [[augmented reality headset]] developed by [[Microsoft]]. It is part of the [[Windows Mixed Reality]] [[AR Platform]] incorporated with [[Windows 10]] OS. HoloLens is an optical see-through head-mounted display. It may be similar to other [[OHMD]]s (optical head-mounted displays). Unlike the [[Oculus Rift]] and other [[Virtual Reality#Devices|VR Devices]], the eye-piece component of HoloLens is transparent and the headset requires neither PC nor smartphone. It is able to project high-definition (HD) virtual content over real world objects. &amp;lt;ref name=”one”&amp;gt;Microsoft. Microsoft HoloLens. Retrieved from https://www.microsoft.com/en-us/hololens&amp;lt;/ref&amp;gt; &amp;lt;ref name=”two”&amp;gt;Microsoft. Why HoloLens. Retrieved from https://www.microsoft.com/en-us/hololens/why-hololens&amp;lt;/ref&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
==General Information==&lt;br /&gt;
Microsoft HoloLens runs a self-contained Windows 10 computer.  It features an HD 3D optical head-mounted display, spatial sound projection and advanced sensors to allow its users to interact with AR applications through head movements, [[#Gesture|gestures]] and [[#Voice|voices]].&lt;br /&gt;
&lt;br /&gt;
HoloLens has various sensors and a high-end CPU and GPU, which Microsoft says gives the headset more processing power than an average laptop. &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The potential uses of the HoloLens are vast. From social apps to games, to navigation, there’s an incredible potential that this [[mixed reality]] device can tap into. Indeed, Microsoft collaborated with NASA in the making of HoloLens, and there is the potential to control the Mars rover Curiosity via the headset, allowing Nasa staff to work as if they were on the planet themselves. Microsoft also partnered with Volvo to showcase another possible use - using it in car showrooms for customers to view different color configurations for their chosen car and see features in action. &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At the end of March 2016, holoportation was showcased. The video demonstration showed how it could be possible - through the use of multiple cameras - to use the HoloLens to view a 3D version of a person. &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the HoloLens price is high, it is an impressive piece of hardware and indicates that Microsoft is taking the augmented reality and virtual reality markets seriously. &amp;lt;ref name=”three”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Announcement and Release==&lt;br /&gt;
Microsoft HoloLens was announced during a Windows 10 Event on January 21st, 2015. The Development Edition was released on March 30, 2016, for $3,000 or £2,000. It allowed developers to start making apps and games for the headset. Months later, it became available to anyone with a Microsoft account. During the last quarter of 2016, the program expanded beyond the United States into countries like the United Kingdom, Ireland, France, Germany, Australia and New Zealand. Currently, there’s still no information regarding a consumer edition release date. &amp;lt;ref name=”three”&amp;gt;Sophie, C. (2017). Microsoft HoloLens: Everything you need to know about the $3,000 AR headset. Retrieved from https://www.wareable.com/microsoft/microsoft-hololens-everything-you-need-to-know-about-the-futuristic-ar-headset-735&amp;lt;/ref&amp;gt; &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”five”&amp;gt;Spence, E. (2017). Microsoft HoloLens Review: Winning the reality wars. Retrieved from https://www.forbes.com/sites/ewanspence/2017/01/14/microsoft-hololens-review-experience-review/2/#4053cf3d43f9&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Features==&lt;br /&gt;
[[Holograms]] - realistic 3D projections that can be anchored onto real life objects. These virtual objects are projected at about 60 cm (near plane) to few meters. &lt;br /&gt;
&lt;br /&gt;
[[Spatial Mapping]] - scans the environment in real time to create a mesh of an X/Y/Z coordinate plane. Objects can be accurately projected into the mesh.&lt;br /&gt;
&lt;br /&gt;
[[Spatial Audio]] - in-app audio will come from different directions which depend on where you are in relation to the virtual object making the sound&lt;br /&gt;
&lt;br /&gt;
[[#Voice|Voice Recognition]] - recognizes various voice commands.&lt;br /&gt;
&lt;br /&gt;
[[#Gesture|Gesture Recognition]] - recognizes various gesture commands such as the [[Air Tap]].&lt;br /&gt;
&lt;br /&gt;
[[#Gaze|Gaze Recognition]] - HoloLens tracks your gaze.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
===Review===&lt;br /&gt;
&#039;&#039;&#039;Headset and Display&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
HoloLens requires neither cords nor phones. It features an optical [[HMD]] on top of a plastic ring that wraps around the head. The plastic ring has a soft foam cushion on the inside. Like other HMDs, the weight of HoloLens is front loaded and feels a bit bulky. HoloLens can be used with most prescription glasses.&lt;br /&gt;
&lt;br /&gt;
The transparent dual displays are made of three layers of glass (red, blue and green). A light engine is mounted above the displays and projects light on the lenses. The tiny corrugated grooves in each layer of glass diffract these light particles, making them bounce around and helping to trick your eyes into perceiving virtual objects at virtual distances.&lt;br /&gt;
&lt;br /&gt;
The [[field of view]] where the holograms appear is quite small - 30° horizontal and 17.5° vertical. It is the same as a 16:9 monitor with 15 feet diagonal, 2 feet away from you face.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sensors&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Sensors include head tracking [[IMU]]s (Inertial Measuring Unit); a sound capture system consisting of an array of 4 microphones; an energy efficient depth camera with 120°×120° [[FOV]], an RGB 2-megapixel photo / HD video camera and an ambient light sensor. Additionally, it has 4 greyscale environment sensing cameras that work with the depth camera to track the head, hands and the surrounding environment.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Processors&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
For processors, in addition to a [[CPU]] and [[GPU]], HoloLens possesses an [[HPU]] ([[holographic processing unit]]). The HPU is a coprocessor dedicated to integrating real-world and virtually generated content. It consolidates and processes all of the data from the various sensors - the depth camera, the environment cameras, the IMUs and the microphones - and produces a thin stream of useful information for the other processors. The HPU removes the burden of handling heavy sensor data from the CPU and GPU, allowing them to focus on creating content.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Audio&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The [[3D audio|Spatial sound system]] consists of 2 small speakers located on the sides of the OHMD, sitting above the ears. Unlike headphones, these speakers do not prevent the user from hearing external sounds. In-app audio comes from different directions depending on where you are in relation to the virtual object making the sound.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Input and Interface&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
A pair of buttons responsible for brightness sits above the left ear, while another pair responsible for volume sits above the right ear. In each pair, one button is concave while the other is convex, so they can be told apart by touch. There is also a power button. These are the only physical inputs - HoloLens is largely controlled by [[#Voice|voice]], [[#Gesture|gesture]] and [[#Gaze|gaze]], along with [[HoloLens Clicker|a Bluetooth clicker]].&lt;br /&gt;
&lt;br /&gt;
5 LEDs are present on the left side of the OHMD. These LEDs display various system statuses such as power and battery conditions. A microUSB port is present for charging and connection. It is possible to use Microsoft HoloLens while it’s charging over microUSB. &amp;lt;ref name=”four”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Power and Connectivity&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The battery in HoloLens lasts around 2.5 hours under processor-intensive use and around 5.5 hours under regular use. &lt;br /&gt;
&lt;br /&gt;
HoloLens can connect to Wi-Fi networks (802.11ac) and pair with Bluetooth devices (Bluetooth 4.1 LE). &lt;br /&gt;
&lt;br /&gt;
HoloLens can run any universal Windows 10 app.&lt;br /&gt;
&lt;br /&gt;
===In the Box===&lt;br /&gt;
*HoloLens Development Edition&lt;br /&gt;
*[[HoloLens Clicker]]&lt;br /&gt;
*Carrying case&lt;br /&gt;
*Charger and cable&lt;br /&gt;
*Microfiber cloth&lt;br /&gt;
*Nose pads&lt;br /&gt;
*Overhead strap&lt;br /&gt;
&lt;br /&gt;
==Specifications==&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Part&lt;br /&gt;
!Spec&lt;br /&gt;
|-&lt;br /&gt;
| CPU || Intel 32-bit architecture&lt;br /&gt;
|-&lt;br /&gt;
| GPU || &lt;br /&gt;
|-&lt;br /&gt;
|[[HPU]] || Custom-built Microsoft Holographic Processing Unit (HPU 1.0)&lt;br /&gt;
|-&lt;br /&gt;
|RAM || 2 GB&lt;br /&gt;
|-&lt;br /&gt;
|Storage || 64 GB Flash&lt;br /&gt;
|-&lt;br /&gt;
|Display || 2 HD 16:9 light engines&lt;br /&gt;
|-&lt;br /&gt;
|Optics || See-through holographic lenses (waveguides)&lt;br /&gt;
|-&lt;br /&gt;
|[[IPD]] || Automatic pupillary distance calibration&lt;br /&gt;
|-&lt;br /&gt;
|Holographic Resolution || 2.3M total light points&lt;br /&gt;
|-&lt;br /&gt;
|Holographic Density|| &amp;gt;2.5k radiants (light points per radian)&lt;br /&gt;
|-&lt;br /&gt;
|Field of View || 30°H and 17.5°V&lt;br /&gt;
|-&lt;br /&gt;
|Cameras || 2 Mega-pixel photo / HD video camera, depth camera, 4 greyscale environment understanding cameras&lt;br /&gt;
|-&lt;br /&gt;
|Sensors || ambient light sensor, 4 microphones&lt;br /&gt;
|-&lt;br /&gt;
|[[Tracking]] || 6 degrees of freedom&lt;br /&gt;
|-&lt;br /&gt;
|[[Rotational tracking]] || [[Gyroscope]], [[Magnetometer]], [[Accelerometer]]&lt;br /&gt;
|-&lt;br /&gt;
|[[Positional tracking]] || depth camera, 4 greyscale environment understanding cameras&lt;br /&gt;
|-&lt;br /&gt;
|Update Rate || &lt;br /&gt;
|-&lt;br /&gt;
|[[#Tracking volume|Tracking Volume]] || &lt;br /&gt;
|-&lt;br /&gt;
|Latency || Motion to Photon: less than 2ms&lt;br /&gt;
|-&lt;br /&gt;
|Audio || Built-in speakers, Audio 3.5mm jack&lt;br /&gt;
|-&lt;br /&gt;
|Connectivity || Wi-Fi 802.11ac, Micro USB 2.0, Bluetooth 4.1 LE&lt;br /&gt;
|-&lt;br /&gt;
|Power || Battery: 2-3 hours of active use, Up to 2 weeks of standby time&lt;br /&gt;
|-&lt;br /&gt;
|Weight || 579g&lt;br /&gt;
|-&lt;br /&gt;
|User Input || [[Gaze]], [[voice]], [[gesture]]&lt;br /&gt;
|-&lt;br /&gt;
|Buttons || Brightness, volume, power&lt;br /&gt;
|-&lt;br /&gt;
|OS || Windows 10&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Setup Tutorial==&lt;br /&gt;
&lt;br /&gt;
==Commands==&lt;br /&gt;
===Gaze===&lt;br /&gt;
HoloLens tracks your gaze. When you perform a gesture such as the [[Air Tap]], look at the part of the hologram where you want the tap to land; a gaze cursor marks that point. A minimal sketch of the underlying ray-cast follows.&lt;br /&gt;
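Conceptually (a simplified geometric sketch, not the HoloLens API), gaze targeting casts a ray from the head position along the view direction and intersects it with scene geometry - here a single hologram plane:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def gaze_target(head_pos, gaze_dir, plane_point, plane_normal):&lt;br /&gt;
    # Ray/plane intersection; assumes the user is facing the hologram.&lt;br /&gt;
    # A real implementation must also handle parallel and behind cases.&lt;br /&gt;
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)&lt;br /&gt;
    t = np.dot(plane_normal, plane_point - head_pos) / np.dot(plane_normal, gaze_dir)&lt;br /&gt;
    return head_pos + t * gaze_dir  # where the gaze cursor lands&lt;br /&gt;
&lt;br /&gt;
head = np.array([0.0, 0.0, 0.0])&lt;br /&gt;
gaze = np.array([0.0, 0.0, 1.0])    # looking straight ahead&lt;br /&gt;
wall = np.array([0.0, 0.0, 2.0])    # hologram 2 m in front&lt;br /&gt;
normal = np.array([0.0, 0.0, -1.0])&lt;br /&gt;
print(gaze_target(head, gaze, wall, normal))  # [0. 0. 2.]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
An [[Air Tap]] performed while this point sits on an interactive element then acts like a mouse click at that location.&lt;br /&gt;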
===Gesture===&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Action&lt;br /&gt;
!Description&lt;br /&gt;
!Effect&lt;br /&gt;
|-&lt;br /&gt;
|[[Air Tap]] || With your index finger pointed upward, bend it forward || Simulates a mouse click in a desktop environment. Activates the interactive component&lt;br /&gt;
|-&lt;br /&gt;
|Home/Start || Opening your hand with palm facing up || Simulates the Windows key on a keyboard or Home button on a Windows Tablet. Opens up the holographic start menu. &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Voice===&lt;br /&gt;
Microsoft&#039;s virtual assistant [[Cortana]] is incorporated into the HoloLens. Users can interact with it using natural-language commands. &lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!Action&lt;br /&gt;
!Effect&lt;br /&gt;
|-&lt;br /&gt;
|&amp;quot;Follow me&amp;quot; || The window follows the user, along the wall. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Input Devices==&lt;br /&gt;
&#039;&#039;&#039;[[HoloLens Clicker]]&#039;&#039;&#039; - a small clicker with a loop that wraps around your middle or index finger. It is held with the microUSB port towards your body and your thumb resting in the indentation on top of the clicker. The clicker features a single button and [[rotational tracking]]. It allows a user to click and scroll with minimal hand motion, as a replacement for the air-tap gesture.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Bluetooth Mouse and Keyboard&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Apps== &lt;br /&gt;
HoloLens can project various Windows 10 Apps, programs, and browsers onto walls and other objects. One of the examples Microsoft used was Windows-like interfaces projected onto walls and furniture. Users can interact with these projections with gaze, gestures and voice commands.&lt;br /&gt;
&lt;br /&gt;
[[SketchUp]] &lt;br /&gt;
&lt;br /&gt;
[[Holo Studio]] - Allows the user to create 3D models used for [[3D Printing]]. In addition to gesture commands, it also accepts voice commands.&lt;br /&gt;
&lt;br /&gt;
[[Minecraft]] - An Augmented reality version of Minecraft.&lt;br /&gt;
&lt;br /&gt;
[[Project Xray]] - A [[mixed reality]] shooter game.&lt;br /&gt;
&lt;br /&gt;
[[Actiongram]] - Place 3D models into real world environments and record videos with them, mixing reality with digital overlays.&lt;br /&gt;
&lt;br /&gt;
[[HoloGuide]] - Guides a user through low visibility areas.&lt;br /&gt;
&lt;br /&gt;
[[HoloHear]] - Instantly translates speech into sign language for deaf people.&lt;br /&gt;
&lt;br /&gt;
[[Teomirn]] - Overlays prompts and instructions on a real piano to help people learn how to play. &amp;lt;ref name=”three”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Developer==&lt;br /&gt;
[[Windows Mixed Reality]] is Microsoft&#039;s AR platform incorporated in the Windows 10 OS. The Windows Mixed Reality API is implemented in all devices running Windows 10, including smartphones and tablets.&lt;br /&gt;
&lt;br /&gt;
To develop for HoloLens, you need a Windows 10 PC able to run [[Visual Studio 2015]] and [[Unity]].&lt;br /&gt;
&lt;br /&gt;
===Tools===&lt;br /&gt;
[[Unity]]&lt;br /&gt;
&lt;br /&gt;
[[Visual Studio 2015]]&lt;br /&gt;
&lt;br /&gt;
[[Windows SDK]]&lt;br /&gt;
&lt;br /&gt;
[[Windows Device Portal]]&lt;br /&gt;
====HoloLens Emulator====&lt;br /&gt;
[[HoloLens Emulator]] allows the user to test holographic apps on a PC without the need for a physical HoloLens. The human and environmental inputs that would usually be read by the sensors on the HoloLens are instead simulated using a keyboard, mouse, or Xbox controller. Apps do not need to be modified to run on the emulator and cannot tell that they are not running on a real HoloLens. &amp;lt;ref&amp;gt;Microsoft. Using the HoloLens emulator. Retrieved from https://developer.microsoft.com/en-us/windows/holographic/using_the_hololens_emulator&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
&#039;&#039;&#039;January 21, 2015&#039;&#039;&#039; - Microsoft HoloLens was officially announced.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;April 28, 2015&#039;&#039;&#039; - First live stage presentation of the HoloLens.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;March 30, 2016&#039;&#039;&#039; - The Development Edition of the HoloLens is officially released.&lt;br /&gt;
&lt;br /&gt;
==Images==&lt;br /&gt;
[[File:microsoft hololens3.jpg|300px]] [[File:microsoft hololens4.jpg|300px]] [[File:microsoft hololens5.jpg|300px]] [[File:microsoft hololens6.jpg|300px]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Augmented Reality Devices]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Windows_Mixed_Reality&amp;diff=36375</id>
		<title>Windows Mixed Reality</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Windows_Mixed_Reality&amp;diff=36375"/>
		<updated>2025-08-04T06:25:58Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Platform Infobox&lt;br /&gt;
|image=[[File:microsoft hololens1.jpg|350px]]&lt;br /&gt;
|Type=[[Augmented Reality]]&lt;br /&gt;
|Subtype=[[Optical head-mounted display]]&lt;br /&gt;
|Creator=&lt;br /&gt;
|Developer=[[Microsoft]]&lt;br /&gt;
|Manufacturer=&lt;br /&gt;
|Operating System=[[Windows 10]]&lt;br /&gt;
|Browser=&lt;br /&gt;
|Devices=[[Microsoft HoloLens]], [[Windows Mixed Reality Headsets]]&lt;br /&gt;
|Accessories=&lt;br /&gt;
|Release Date=&lt;br /&gt;
|Price=&lt;br /&gt;
|Website=https://www.microsoft.com/microsoft-hololens/en-us&lt;br /&gt;
}}&lt;br /&gt;
[[Windows Mixed Reality]], formerly Windows Holographic, is an [[Augmented Reality]] and [[Virtual Reality]] [[Augmented Reality#Platforms|software platform]] incorporated into the [[Windows 10]] operating system. It utilizes [[Microsoft HoloLens]], an [[OHMD]], to project Windows 10 apps and other high-definition digital imagery onto real-life objects. The platform allows users to interact with these virtual objects through gaze, gestures and voice commands. &lt;br /&gt;
&lt;br /&gt;
Announced on January 21, 2015, Windows Mixed Reality was introduced with the release of Windows 10 along with [[Microsoft HoloLens]].&lt;br /&gt;
&lt;br /&gt;
On June 1, 2016, at Computex, Microsoft announced that it was opening Windows Mixed Reality to third-party developers, partnering with [[HTC]], [[Acer]], [[Asus]], [[Lenovo]] and [[HP]].&lt;br /&gt;
&lt;br /&gt;
On August 28, 2017, Microsoft announced a partnership with [[Valve]] to bring [[games]] and [[experiences]] from [[Steam]] to [[Windows Mixed Reality Headsets]].&lt;br /&gt;
__TOC__&lt;br /&gt;
==Features==&lt;br /&gt;
&lt;br /&gt;
*Users can interact with virtual objects. Gaze, gesture and voice are supported input methods.&lt;br /&gt;
&lt;br /&gt;
*The [[OHMD]] is mobile, untethered to your PC or another device.&lt;br /&gt;
&lt;br /&gt;
*[[Computer vision]] is utilized to understand the device&#039;s environment.&lt;br /&gt;
==Hardware==&lt;br /&gt;
===Integrated Headsets===&lt;br /&gt;
&#039;&#039;&#039;[[Microsoft HoloLens]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Microsoft HoloLens 2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Windows 10 VR Headsets===&lt;br /&gt;
{{see also|Windows 10 VR}}&lt;br /&gt;
&#039;&#039;&#039;[[Acer Windows Mixed Reality Headset]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Asus HC102]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Dell Visor]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[HP Windows Mixed Reality Headset]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Lenovo Explorer]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Samsung Odyssey]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Apps==&lt;br /&gt;
[[Windows Mixed Reality Apps]]&lt;br /&gt;
&lt;br /&gt;
Windows 10 Apps - [[Microsoft HoloLens#Apps|HoloLens Apps]]&lt;br /&gt;
&lt;br /&gt;
==Developer==&lt;br /&gt;
[[Windows Mixed Reality API]] is incorporated in all devices running Windows 10, even tablets and smartphones.&lt;br /&gt;
&lt;br /&gt;
HoloLens UI/UX is designed around [[gaze input]], [[gesture input]] and [[voice input]] (GGV). [[World coordinates]], [[spatial sound]] and [[spatial mapping]] are environmental-understanding features that allow virtual objects to interact with both the user and the world around them.&lt;br /&gt;
&lt;br /&gt;
===Tools===&lt;br /&gt;
[[Unity]]&lt;br /&gt;
&lt;br /&gt;
[[Visual Studio]]&lt;br /&gt;
&lt;br /&gt;
[[Windows SDK]]&lt;br /&gt;
&lt;br /&gt;
[[Windows Device Portal]]&lt;br /&gt;
&lt;br /&gt;
[[HoloLens Emulator]]&lt;br /&gt;
&lt;br /&gt;
==History==&lt;br /&gt;
&lt;br /&gt;
[[Category:Virtual Reality Platforms]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Holograms&amp;diff=36373</id>
		<title>Holograms</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Holograms&amp;diff=36373"/>
		<updated>2025-08-04T06:03:34Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Redirected page to Hologram&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#Redirect [[Hologram]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Hologram&amp;diff=36372</id>
		<title>Hologram</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Hologram&amp;diff=36372"/>
		<updated>2025-08-04T06:03:32Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Removed redirect to Holograms&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:Holograms 1.png|thumb|Figure 1. Types of light (image: science.howstuffworks.com)]]&lt;br /&gt;
[[File:Holograms 2.png|thumb|Figure 2. Basic hologram setup (image: science.howstuffworks.com)]]&lt;br /&gt;
[[File:Holograms 3.png|thumb|Figure 3. Reconstructing a hologram (image: www.livescience.com)]]&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;hologram&#039;&#039;&#039; is the recorded interference pattern between a point source of light of fixed wavelength (the reference beam) and a wavefield scattered from the object (the object beam). A hologram is recorded in a two- or three-dimensional medium and contains information about the entire three-dimensional wavefield of the recorded object. When the hologram is illuminated by the reference beam, the diffraction pattern recreates the light field of the original object. The viewer is then able to see an image that is indistinguishable from the recorded object &amp;lt;ref name=”1”&amp;gt; Jeong, A. and Jeong, T. What are the main types of holograms? Retrieved from http://www.integraf.com/resources/articles/a-main-types-of-holograms&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; Schnars, U. and Jüptner, W. (2002). Digital recording and numerical reconstruction of holograms. Meas. Sci. Technol., 13: R85-R101&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The holographic plate is a kind of recording medium in which the 3D virtual image of an object is stored. Whereas the grooves of a conventional recording medium (e.g. a vinyl record) contain information about sound that can be used to reconstruct a song, a holographic plate contains information about light that is used to reconstruct an object &amp;lt;ref name=”3”&amp;gt; Physics Central. Holograms: virtually approaching science fiction. Retrieved from http://physicscentral.com/explore/action/hologram.cfm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The information about light is coded in the form of bright and dark microinterferences. Usually, these are not visible to the human eye due to the high spatial frequencies. Reconstructing the object wave by illuminating the hologram with the reference wave creates a 3D image that exhibits the effects of perspective and depth of focus &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This photographic technique of recording light scattered from an object and presenting it as a 3D image is called holography. The representations it generates are among the most lifelike 3D renditions, because the technique records light in a way that is close to how our eyes see the world around us &amp;lt;ref name=”4”&amp;gt; Workman, R. (2013). What is a hologram? Retrieved from http://www.livescience.com/34652-hologram.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Bryner, M. (2010). ‘Star Wars’-like holograms nearly a reality. Retrieved from http://www.livescience.com/10227-star-wars-holograms-reality.html&amp;lt;/ref&amp;gt;. It is therefore an attractive imaging technique, since it allows the viewer to see a complete three-dimensional volume in one image &amp;lt;ref name=”6”&amp;gt; Rosen, J., Katz, B. and Brooker, G. (2009). Review of three-dimensional holographic imaging by Fresnel incoherent correlation holograms. 3D Research, 1(1)&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Throughout the years, several types of holograms have been created. These include transmission holograms, which are viewed by shining light through them from the other side, and rainbow holograms, which are common on credit cards and driver’s licenses (where they are used for security reasons) &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
While various holograms have appeared in movies like Star Wars and Iron Man, real-world technology has not reached the level presented in those cinematic stories &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. Currently, holograms are still static, but they can look incredible, as in the case of large-scale holograms illuminated with lasers or displayed in a darkened room with carefully directed lighting. Some holograms can even appear to move as the viewer walks past them and looks at them from different angles. Others change colors or include views of different objects, depending on how the viewer looks at them &amp;lt;ref name=”5”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; Wilson, T. V. (2007). How holograms work. Retrieved from http://science.howstuffworks.com/hologram.htm&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
One of the interesting traits of a hologram is that if one is cut in half, each half still contains the pattern needed to recreate the entire original object. Even a small piece cut out of a hologram still contains the whole holographic image. Another curiosity is that a hologram of a magnifying glass will itself magnify the other objects in the hologram &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==How does it work?==&lt;br /&gt;
&lt;br /&gt;
To create a hologram, holography uses the wave nature of light. In a normal photograph, lenses are used to focus an image onto film or an electronic chip, recording only where light falls and how bright it is. With the holographic technique, the shape a light wave takes after it bounces off an object is recorded. It uses interfering waves of light to capture images that can be 3D. When waves of light meet, they interfere with each other, analogous to what happens with waves of water. The pattern created by the interference of the waves contains the information used to make the hologram &amp;lt;ref name=”8”&amp;gt; Holographic Studios. A brief history of holography. Retrieved from http://www.holographer.com/history-of-holography/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
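&lt;br /&gt;
In equation form (the standard textbook relation): if &amp;lt;math&amp;gt;E_r&amp;lt;/math&amp;gt; is the reference wave and &amp;lt;math&amp;gt;E_o&amp;lt;/math&amp;gt; is the object wave, the recorded intensity is &amp;lt;math&amp;gt;I = |E_r + E_o|^2 = |E_r|^2 + |E_o|^2 + E_r^*E_o + E_rE_o^*&amp;lt;/math&amp;gt;. The cross terms are what store the phase of the object wave; illuminating the developed plate with &amp;lt;math&amp;gt;E_r&amp;lt;/math&amp;gt; again produces, among other terms, &amp;lt;math&amp;gt;|E_r|^2 E_o&amp;lt;/math&amp;gt; - a scaled copy of the original object wave, which is why the viewer sees the object reappear.&lt;br /&gt;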
&lt;br /&gt;
True 3D holograms could not be a practical reality without the invention of the laser. A laser creates waves of light that are coherent, and it is this coherence that makes it possible to record the light-wave interference patterns of holography &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt;. While white light contains all of the different frequencies of light traveling in all directions, a laser produces light of only one wavelength and one color (Figure 1) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In its basic form, three elements are necessary to create a hologram: an object or person, a laser beam, and a recording medium. A clear environment is also recommended to enable the light beams to intersect &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The laser beam is separated into two beams and redirected using mirrors (Figure 2). One of the beams is directed at the object, while the other - the reference beam - is directed at the recording medium. Some of the light of the object beam is reflected off the object onto the recording medium. There the beams intersect and interfere with each other, creating an interference pattern that is imprinted on the recording medium. This medium can be composed of various materials. A common choice is photographic film with a higher density of light-reactive grains than ordinary silver halide film, enabling the fine interference pattern of the two beams to be resolved and making the image more realistic &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A developed film from a regular camera shows a negative view of the original scene, with light and dark areas, and by looking at it, it is still possible to more or less understand what the original scene looked like. When looking at developed holographic film, however, nothing resembles the original scene: there may be dark frames or a random pattern of lines and swirls, and only under the right illumination is the captured object properly reproduced &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Using a transmission hologram made with silver halide emulsion as an example, the right light source is needed to recreate the original object beam. This beam is recreated by the diffraction grating and the reflective surfaces inside the hologram that were formed by the interference of the two light sources. The recreated beam is identical to the original object beam before it was combined with the reference wave, and it also travels in the same direction as the original beam. Since the object was on the other side of the holographic plate, the beam travels towards the viewer. The eyes focus the light, and the brain interprets it as a 3D image located behind the recording medium (Figure 3) &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Brief history==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1886 -&#039;&#039;&#039; Gabriel Lippmann, in France, develops a theory of using light-wave interference to capture color in photography. He presented his theory in 1891 to the Academy of Sciences, along with some primitive examples of his interference color photographs. In 1893, he presented perfected color photographs to the Academy, and in 1908 he won the Nobel Prize in Physics for his work in this area.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1947&#039;&#039;&#039; - Dennis Gabor develops the theory of holography. He coined the term hologram from the Greek words holos (meaning ‘whole’) and gramma (‘message’).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1960 -&#039;&#039;&#039; Nikolay Basov, Alexander Prokhorov and Charles Townes contributed to the development of the laser. Its pure, intense light was optimal for creating holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1962 -&#039;&#039;&#039; Yuri Denisyuk publishes his work in recording 3D images, inspired by Lippmann’s description of interference photography. He began his experiments in 1958 using a highly filtered mercury discharge tube as his light source.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1968 -&#039;&#039;&#039; Dr. Stephen A. Benton invents white-light transmission holography while researching holographic television. A white-light hologram can be viewed in ordinary white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1972 -&#039;&#039;&#039; Lloyd Cross develops the integral hologram. It combines white-light transmission holography with conventional cinematography to produce moving 3D images. &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”8”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Holography Virtual Gallery. History of holography. Retrieved from http://www.holography.ru/histeng.htm&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Main types of holograms==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;White-light transmission holograms -&#039;&#039;&#039; This type of hologram is illuminated with incandescent light, producing images that contain the rainbow spectrum of colors. Depending on the point of view of the viewer, the holograms&#039; colors change. They are also called rainbow holograms.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Reflection holograms -&#039;&#039;&#039; Reflection holograms are usually mass-produced using a stamping method. They can be seen on credit cards or driver’s licenses. Normally, these holograms can be viewed in white light.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Transmission holograms -&#039;&#039;&#039; Typically, a transmission hologram is viewed with laser light. The light is directed from behind the hologram and the image projected to the viewer’s side.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Hybrid hologram -&#039;&#039;&#039; This type of hologram is between the reflection and transmission types. Examples include embossed holograms, integral holograms, holographic interferometry, multichannel holograms, and computer-generated holograms. &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”10”&amp;gt; MIT Museum. Holography glossary. Retrieved from https://mitmuseum.mit.edu/holography-glossary&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36371</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36371"/>
		<updated>2025-08-04T06:01:52Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear in many implementations.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it solves the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; It provides correct [[focal cues]] that match the [[vergence]] information, giving a more realistic 3D image that is more visually comfortable, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
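In symbols (standard notation following Adelson and Bergen): the full plenoptic function can be written as &amp;lt;math&amp;gt;L(x, y, z, \theta, \phi, \lambda, t)&amp;lt;/math&amp;gt;, the radiance at every position &amp;lt;math&amp;gt;(x, y, z)&amp;lt;/math&amp;gt;, in every direction &amp;lt;math&amp;gt;(\theta, \phi)&amp;lt;/math&amp;gt;, for every wavelength &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and time &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. In free space, where radiance is constant along a ray, this reduces to the 4D light field &amp;lt;math&amp;gt;L(u, v, s, t)&amp;lt;/math&amp;gt;, commonly parameterized by a ray&#039;s intersections with two parallel planes; this 4D function is what a light field display approximates with a finite set of rays.&lt;br /&gt;
&lt;br /&gt;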
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off; a short numeric sketch follows this list.&lt;br /&gt;
&lt;br /&gt;
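To make the trade-off concrete, here is a minimal, illustrative calculation for a microlens-array design. All numbers are assumptions chosen for round arithmetic, not the specifications of any shipping product.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Spatio-angular resolution trade-off for a microlens-array display.&lt;br /&gt;
# The panel pixels behind each lenslet are spent on view directions,&lt;br /&gt;
# so angular resolution is bought by giving up spatial resolution.&lt;br /&gt;
PANEL_W, PANEL_H = 3840, 2160   # panel pixels (assumed)&lt;br /&gt;
LENSLET_PITCH_PX = 8            # pixels behind each lenslet, per axis (assumed)&lt;br /&gt;
&lt;br /&gt;
spatial_w = PANEL_W // LENSLET_PITCH_PX   # effective image width&lt;br /&gt;
spatial_h = PANEL_H // LENSLET_PITCH_PX   # effective image height&lt;br /&gt;
num_views = LENSLET_PITCH_PX ** 2         # distinct ray directions&lt;br /&gt;
&lt;br /&gt;
print(spatial_w, spatial_h)   # 480 270: sharpness drops 8x per axis&lt;br /&gt;
print(num_views)              # 64: the smoothness of the parallax&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Doubling the angular resolution to 16 views per axis would cut the spatial resolution to 240×135 on the same panel, which is why high-quality LFDs need extremely dense panels or the compressive multilayer approaches described below.&lt;br /&gt;
&lt;br /&gt;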
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; this is the approach taken by [[CREAL]]. If cycled faster than human perception, this creates the illusion of a continuous light field. It can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;&lt;br /&gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|View Field]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Light Field Display&#039;&#039;&#039;&lt;br /&gt;
| Often no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| &lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], small size typically&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data (see the sketch after this list).&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
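As an illustrative sketch (assumed grid size and baseline; the engine-side render call is left abstract), multi-view rendering for a light field reduces to rendering the scene once from each origin in a small grid of camera positions:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Camera origins for a light field render pass. A renderer would&lt;br /&gt;
# produce one image per origin, all aimed at the same focal plane.&lt;br /&gt;
GRID = 8            # 8 x 8 = 64 views (assumed)&lt;br /&gt;
SPACING = 0.01      # 1 cm between adjacent views (assumed baseline)&lt;br /&gt;
&lt;br /&gt;
def camera_origins(center):&lt;br /&gt;
    origins = []&lt;br /&gt;
    for i in range(GRID):&lt;br /&gt;
        for j in range(GRID):&lt;br /&gt;
            du = (i - (GRID - 1) / 2.0) * SPACING   # horizontal offset&lt;br /&gt;
            dv = (j - (GRID - 1) / 2.0) * SPACING   # vertical offset&lt;br /&gt;
            origins.append(center + np.array([du, dv, 0.0]))&lt;br /&gt;
    return origins&lt;br /&gt;
&lt;br /&gt;
views = camera_origins(np.array([0.0, 0.0, 0.0]))&lt;br /&gt;
print(len(views))   # 64 images to render, one per view direction&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;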
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; A pixel-budget illustration follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D, in part because no consumer light field hardware is widely available.&lt;br /&gt;
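&lt;br /&gt;
The pixel-budget arithmetic behind the spatio-angular trade-off can be made concrete with a small illustrative calculation (hypothetical panel, not any specific product):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# For a fixed panel, total pixels = spatial pixels per view x number of&lt;br /&gt;
# views, so adding views reduces the pixels left for each view.&lt;br /&gt;
panel_pixels = 3840 * 2160            # hypothetical 4K panel&lt;br /&gt;
for views in (1, 4, 16, 64):&lt;br /&gt;
    spatial = panel_pixels // views   # pixels available per view&lt;br /&gt;
    print(views, spatial)&lt;br /&gt;
# Output: 1 8294400 / 4 2073600 / 16 518400 / 64 129600&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;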
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; A Swiss startup focused on compact near-eye LFD modules for AR/VR glasses, aiming to solve the VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular LFD panels (branded as SolidLight) based on [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt; A toy factorization sketch follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
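&lt;br /&gt;
The pattern-optimization idea in the first item above can be illustrated with a toy rank-1 factorization, in the spirit of low-rank multilayer (tensor) display optimization.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; This is a didactic reduction assuming NumPy; real solvers factor the full 4D light field across several layers and enforce valid (nonnegative, bounded) transmittances.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Approximate a target light field slice L[i, j] by the product of two&lt;br /&gt;
# layer patterns f[i] * g[j], fitted by alternating least squares.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
L = rng.random((64, 64))     # hypothetical target (angle x position)&lt;br /&gt;
f = np.ones(64)              # front-layer pattern&lt;br /&gt;
g = np.ones(64)              # rear-layer pattern&lt;br /&gt;
&lt;br /&gt;
for _ in range(200):&lt;br /&gt;
    f = L @ g / (g @ g)      # best f for fixed g&lt;br /&gt;
    g = L.T @ f / (f @ f)    # best g for fixed f&lt;br /&gt;
&lt;br /&gt;
approx = np.outer(f, g)      # reconstructed slice&lt;br /&gt;
print(np.linalg.norm(L - approx) / np.linalg.norm(L))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;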
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36370</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36370"/>
		<updated>2025-08-04T06:00:52Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
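&lt;br /&gt;
For concreteness, the reduction from the full plenoptic function to the 4D light field that displays approximate can be written as follows. This uses the standard two-plane parameterization from light field rendering, rather than notation specific to any one display:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;P(x, y, z, \theta, \phi, \lambda, t) \;\longrightarrow\; L(u, v, s, t)&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;(u, v)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(s, t)&amp;lt;/math&amp;gt; are a ray&#039;s intersections with two parallel reference planes. Fixing wavelength and time, and assuming radiance stays constant along a ray in free space, reduces the 7D function to this 4D form.&lt;br /&gt;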
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats present depth to the naked eye, with no special eyewear required (near-eye headset implementations excepted).&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; A rough data-rate estimate follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
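&lt;br /&gt;
A rough, purely illustrative estimate shows the scale of the data problem (all numbers hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Raw (uncompressed) data rate for a hypothetical light field display.&lt;br /&gt;
spatial = 1920 * 1080      # pixels per view&lt;br /&gt;
views = 8 * 8              # 8 x 8 directional views&lt;br /&gt;
bytes_per_pixel = 3        # 24-bit RGB&lt;br /&gt;
refresh_hz = 60            # frames per second&lt;br /&gt;
&lt;br /&gt;
rate = spatial * views * bytes_per_pixel * refresh_hz&lt;br /&gt;
print(round(rate / 1e9, 1))   # about 23.9 GB/s before compression&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;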
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; A numerical illustration of this split follows the list.&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; This approach is often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; if cycled faster than human perception can follow, this creates the illusion of a continuous light field. [[CREAL]] takes a time-multiplexed approach, and the technique can be combined with others such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
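&lt;br /&gt;
The microlens-array trade-off noted above can be illustrated numerically (hypothetical panel and lenslet dimensions, not any specific product):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Spatio-angular split in an MLA light field display: lenslet count sets&lt;br /&gt;
# spatial resolution; pixels behind each lenslet set angular resolution.&lt;br /&gt;
panel_w, panel_h = 3840, 2160        # underlying panel (pixels)&lt;br /&gt;
px_per_lens = 8                      # pixels per lenslet, per axis&lt;br /&gt;
&lt;br /&gt;
spatial_w = panel_w // px_per_lens   # 480 effective spatial columns&lt;br /&gt;
spatial_h = panel_h // px_per_lens   # 270 effective spatial rows&lt;br /&gt;
angular_views = px_per_lens ** 2     # 64 distinct view directions&lt;br /&gt;
&lt;br /&gt;
def view_index(x, y):&lt;br /&gt;
    # A pixel&#039;s offset within its lenslet selects its emission direction.&lt;br /&gt;
    return (x % px_per_lens, y % px_per_lens)&lt;br /&gt;
&lt;br /&gt;
print(spatial_w, spatial_h, angular_views)   # 480 270 64&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;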
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;&lt;br /&gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Light Field Display&#039;&#039;&#039;&lt;br /&gt;
| No (in most formats)&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Varies (wide FoV remains challenging)&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display sizes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D, in part because no consumer light field hardware is widely available.&lt;br /&gt;
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; A Swiss startup focused on compact near-eye LFD modules for AR/VR glasses, aiming to solve the VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular LFD panels (branded as SolidLight) based on [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36369</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36369"/>
		<updated>2025-08-04T06:00:00Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Key Players and Commercial Landscape */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats present depth to the naked eye, with no special eyewear required (near-eye headset implementations excepted).&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is set by the lenslet count, angular resolution by the number of pixels behind each lenslet (see the budget sketch after this list).&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;Japan Display Inc. (2016, December 5). Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D [Press release]. Archived: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Because the barrier absorbs much of the emitted light, this approach is typically less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;Light Field Lab. SolidLight™ Platform Overview. https://www.lightfieldlab.com/ (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; CREAL&#039;s near-eye modules take this approach. If the sequence cycles faster than the eye can resolve, the viewer perceives a continuous light field. Time multiplexing can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;Martínez-Corral, M., Guan, Z., Li, Y., Xiong, Z., &amp;amp; Javidi, B. (2021). Review of light field technologies. Visual Computing for Industry, Biomedicine and Art, 4(1), 29. doi:10.1186/s42492-021-00096-8&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;Fink, C. (2023, February 8). Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World. Forbes. Retrieved from https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
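&lt;br /&gt;
The arithmetic behind the spatio-angular budget mentioned above is simple enough to sketch in code. The following minimal Python example splits a panel&#039;s pixel budget between lenslets (spatial samples) and the pixels behind each lenslet (angular samples); all numbers are illustrative assumptions, not the specifications of any product.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal sketch of the MLA spatio-angular trade-off.&lt;br /&gt;
# All numbers are illustrative assumptions.&lt;br /&gt;
def mla_budget(panel_px_x, panel_px_y, px_per_lenslet):&lt;br /&gt;
    # Spatial resolution: one image sample per lenslet.&lt;br /&gt;
    spatial_x = panel_px_x // px_per_lenslet&lt;br /&gt;
    spatial_y = panel_px_y // px_per_lenslet&lt;br /&gt;
    # Angular resolution: the pixels behind each lenslet become views.&lt;br /&gt;
    views = px_per_lenslet * px_per_lenslet&lt;br /&gt;
    return spatial_x, spatial_y, views&lt;br /&gt;
&lt;br /&gt;
# A 4K panel with an 8 x 8 pixel block behind each lenslet:&lt;br /&gt;
sx, sy, n_views = mla_budget(3840, 2160, 8)&lt;br /&gt;
print(sx, sy, n_views)  # 480 270 64: 64 views of only 480 x 270 each&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Raising either axis demands a denser panel, which is the tension described above.&lt;br /&gt;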
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;Light Field Lab (2021, October 7). Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed [Press release]. https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) combines high-speed vision sensors for eye tracking with a micro-optical lens; it serves a single tracked viewer but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;Sony Professional. ELF‑SR2 Spatial Reality Display. https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Light Field Display&#039;&#039;&#039;&lt;br /&gt;
| Often no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Varies by implementation&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], small size typically&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data (see the sketch after this list).&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are also employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
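&lt;br /&gt;
As a concrete illustration of engine-based content generation, the minimal sketch below renders a scene from a regular grid of camera positions into a 4D view array. Here render_view() is a hypothetical placeholder for an engine capture call (for example a Unity or Unreal camera script), and the grid size and baseline are assumed values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def render_view(scene, cam_x, cam_y):&lt;br /&gt;
    # Hypothetical placeholder: a real implementation would move a&lt;br /&gt;
    # camera to (cam_x, cam_y) on the capture plane and rasterize&lt;br /&gt;
    # or ray trace the scene from there.&lt;br /&gt;
    return np.zeros((270, 480, 3), dtype=np.uint8)&lt;br /&gt;
&lt;br /&gt;
def render_light_field(scene, n_u=8, n_v=8, baseline_m=0.05):&lt;br /&gt;
    views = np.empty((n_v, n_u, 270, 480, 3), dtype=np.uint8)&lt;br /&gt;
    for v in range(n_v):&lt;br /&gt;
        for u in range(n_u):&lt;br /&gt;
            # Offset each camera on a grid centered on the origin.&lt;br /&gt;
            x = (u - (n_u - 1) / 2) * baseline_m&lt;br /&gt;
            y = (v - (n_v - 1) / 2) * baseline_m&lt;br /&gt;
            views[v, u] = render_view(scene, x, y)&lt;br /&gt;
    return views  # (view_v, view_u, pixel_y, pixel_x, rgb)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;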
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets needed for real-time LFDs places extreme demands on GPUs and data infrastructure; a back-of-the-envelope data-rate sketch follows this list.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D, in part because no consumer light field hardware is widely available.&lt;br /&gt;
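&lt;br /&gt;
To make the bandwidth challenge concrete, the back-of-the-envelope sketch below estimates the raw, uncompressed data rate of a hypothetical light field stream. Every figure is an assumption chosen for illustration, not a measured requirement of any system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Illustrative raw data-rate estimate; all values are assumptions.&lt;br /&gt;
views    = 64    # 8 x 8 viewpoint grid&lt;br /&gt;
width    = 1920&lt;br /&gt;
height   = 1080&lt;br /&gt;
fps      = 60&lt;br /&gt;
bytes_px = 3     # 8-bit RGB&lt;br /&gt;
&lt;br /&gt;
bytes_per_second = views * width * height * fps * bytes_px&lt;br /&gt;
print(bytes_per_second / 1e9)  # roughly 23.9 GB/s before compression&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Compression and foveation can reduce this figure substantially, but the starting point dwarfs that of a single 2D video stream.&lt;br /&gt;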
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular LFD panels (branded as SolidLight) based on [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;Tan, G., Zhan, T., Lee, Y.-H., Xiong, J., &amp;amp; Wu, S.-T. (2019). Near-eye light-field display with polarization multiplexing. Proceedings of SPIE 10942, Advances in Display Technologies IX, 1094206. doi:10.1117/12.2509121&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36368</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36368"/>
		<updated>2025-08-04T05:59:34Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Challenges and Limitations */ remove conjecture&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein, G. (2020). Computational Displays: Achieving the Full Plenoptic Function. ACM SIGGRAPH 2020 Courses. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction); a minimal numerical sketch of this 4D sampling follows the list below.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
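&lt;br /&gt;
In the widely used two-plane parameterization, each ray is indexed by its intersections with two parallel planes, giving a 4D function L(u, v, s, t). The minimal Python sketch below stores such a discretized light field as an array; the sample counts are invented for illustration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
n_u, n_v = 8, 8      # angular samples (directions/views)&lt;br /&gt;
n_s, n_t = 480, 270  # spatial samples per view&lt;br /&gt;
&lt;br /&gt;
# One RGB radiance value per sampled ray:&lt;br /&gt;
L = np.zeros((n_u, n_v, n_s, n_t, 3), dtype=np.float32)&lt;br /&gt;
&lt;br /&gt;
def radiance(u, v, s, t):&lt;br /&gt;
    # Nearest-neighbor lookup of the radiance along one sampled ray.&lt;br /&gt;
    return L[u, v, s, t]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
A display must then physically emit an approximation of this array; each technique under Technical Implementations below is a different hardware strategy for doing so.&lt;br /&gt;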
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats present 3D imagery without special eyewear.&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays. ACM Transactions on Graphics, 32(6), 220:1-220:10. doi:10.1145/2508363.2508366. Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example at the MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is set by the lenslet count, angular resolution by the number of pixels behind each lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field (a minimal factorization sketch follows this list).&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;Japan Display Inc. (2016, December 5). Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D [Press release]. Archived: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Because the barrier absorbs much of the emitted light, this approach is typically less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;Light Field Lab. SolidLight™ Platform Overview. https://www.lightfieldlab.com/ (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; CREAL&#039;s near-eye modules take this approach. If the sequence cycles faster than the eye can resolve, the viewer perceives a continuous light field. Time multiplexing can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;Martínez-Corral, M., Guan, Z., Li, Y., Xiong, Z., &amp;amp; Javidi, B. (2021). Review of light field technologies. Visual Computing for Industry, Biomedicine and Art, 4(1), 29. doi:10.1186/s42492-021-00096-8&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;Fink, C. (2023, February 8). Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World. Forbes. Retrieved from https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
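&lt;br /&gt;
In its simplest two-layer form, the multilayer optimization above is a non-negative low-rank factorization: a ray&#039;s intensity is the product of the transmittances it crosses, so a target light field matrix is approximated by an outer product of layer patterns. The toy sketch below uses a rank-1 alternating-least-squares loop to illustrate the idea; it is a simplified stand-in, not the published algorithm.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
T = rng.random((64, 64))  # toy target light field, rays indexed (i, j)&lt;br /&gt;
&lt;br /&gt;
f = np.ones(64)  # front-layer transmittance pattern&lt;br /&gt;
g = np.ones(64)  # rear-layer transmittance pattern&lt;br /&gt;
for _ in range(200):&lt;br /&gt;
    # Alternating least-squares updates; with a non-negative target&lt;br /&gt;
    # and non-negative start, both patterns stay non-negative,&lt;br /&gt;
    # matching physical transmittances.&lt;br /&gt;
    f = T @ g / (g @ g)&lt;br /&gt;
    g = T.T @ f / (f @ f)&lt;br /&gt;
&lt;br /&gt;
approx = np.outer(f, g)&lt;br /&gt;
print(np.linalg.norm(T - approx) / np.linalg.norm(T))  # relative error&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Practical systems factor the full 4D light field across more layers and time frames, which is what the cited tensor-display formulation generalizes.&lt;br /&gt;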
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;Light Field Lab (2021, October 7). Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed [Press release]. https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) combines high-speed vision sensors for eye tracking with a micro-optical lens; it serves a single tracked viewer but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;Sony Professional. ELF‑SR2 Spatial Reality Display. https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Light Field Display&#039;&#039;&#039;&lt;br /&gt;
| Often no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Varies by implementation&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], small size typically&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be decoded into per-view images for LFD playback (see the sketch after this list).&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
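&lt;br /&gt;
For an idealized, perfectly aligned plenoptic capture, decoding the raw sensor image into per-direction (sub-aperture) views is a simple array reordering, as the minimal sketch below shows. Real decoders must additionally handle lenslet rotation, hexagonal packing, and vignetting; the dimensions here are assumptions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def to_subaperture(raw, n_u, n_v):&lt;br /&gt;
    # raw: sensor image where each lenslet covers an n_v x n_u block.&lt;br /&gt;
    h, w = raw.shape&lt;br /&gt;
    n_y, n_x = h // n_v, w // n_u  # lenslet (spatial) grid&lt;br /&gt;
    lf = raw.reshape(n_y, n_v, n_x, n_u)&lt;br /&gt;
    # Reorder to (view_v, view_u, y, x): one image per direction.&lt;br /&gt;
    return lf.transpose(1, 3, 0, 2)&lt;br /&gt;
&lt;br /&gt;
raw = np.zeros((270 * 8, 480 * 8))  # toy 8 x 8 angular sampling&lt;br /&gt;
views = to_subaperture(raw, 8, 8)&lt;br /&gt;
print(views.shape)  # (8, 8, 270, 480)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;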
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets needed for real-time LFDs places extreme demands on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging (a rough view-count sketch follows this list).&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D, in part because no consumer light field hardware is widely available.&lt;br /&gt;
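&lt;br /&gt;
One rough way to quantify the angular-resolution requirement is the &amp;quot;super multi-view&amp;quot; condition: two or more views should enter the pupil at once to drive accommodation, so the view pitch at the eye must be finer than the pupil diameter. The sketch below estimates the per-axis view count this implies; every number is an assumption for illustration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Rough super multi-view estimate; all values are assumptions.&lt;br /&gt;
eyebox_mm = 10.0  # assumed eye-box width&lt;br /&gt;
pupil_mm  = 3.0   # assumed pupil diameter in bright conditions&lt;br /&gt;
&lt;br /&gt;
view_pitch_mm = pupil_mm / 2            # two views per pupil width&lt;br /&gt;
views_per_axis = eyebox_mm / view_pitch_mm&lt;br /&gt;
print(views_per_axis)  # about 7 views across the eye-box, per axis&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Widening the eye-box or the field of view multiplies this count, which is the angular half of the spatio-angular trade-off listed above.&lt;br /&gt;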
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, with a focus on resolving the VAC.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Neural representations (such as NeRF) are also being explored for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36367</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36367"/>
		<updated>2025-08-04T05:59:14Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Challenges and Limitations */ remove conjecture&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes, in many implementations without special eyewear.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This display approach is considered particularly important for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing the eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
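&lt;br /&gt;
To make the discretization concrete, the minimal sketch below stores a sampled light field as an array indexed by two spatial and two angular coordinates; the sample counts are arbitrary illustrative values, not the parameters of any real display.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# A discretized 4D light field L(s, t, u, v): (s, t) indexes spatial&lt;br /&gt;
# position on the display surface, (u, v) indexes ray direction.&lt;br /&gt;
# Sample counts are arbitrary illustrative values.&lt;br /&gt;
S, T = 512, 512   # spatial samples&lt;br /&gt;
U, V = 9, 9       # angular samples (views)&lt;br /&gt;
L = np.zeros((S, T, U, V, 3), dtype=np.uint8)  # one RGB color per ray&lt;br /&gt;
&lt;br /&gt;
def ray_color(s, t, u, v):&lt;br /&gt;
    # Color of the ray leaving spatial sample (s, t) in direction (u, v).&lt;br /&gt;
    return L[s, t, u, v]&lt;br /&gt;
&lt;br /&gt;
def view_image(u, v):&lt;br /&gt;
    # The 2D image seen from direction (u, v): one slice of the 4D field.&lt;br /&gt;
    return L[:, :, u, v]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;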
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
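&lt;br /&gt;
The resolution trade-off above can be made concrete with simple arithmetic. In the sketch below, a fixed panel is divided among lenslets: spatial resolution equals the lenslet count, while angular resolution equals the number of pixels behind each lenslet. The panel and lenslet figures are hypothetical.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Hypothetical numbers illustrating the spatio-angular trade-off for a&lt;br /&gt;
# microlens-array design: panel pixels are split between spatial samples&lt;br /&gt;
# (one per lenslet) and angular samples (pixels behind each lenslet).&lt;br /&gt;
panel_w, panel_h = 3840, 2160  # native panel resolution (illustrative)&lt;br /&gt;
views_per_axis = 8             # pixels behind each lenslet, per axis&lt;br /&gt;
&lt;br /&gt;
spatial_w = panel_w // views_per_axis  # 480 lenslets across&lt;br /&gt;
spatial_h = panel_h // views_per_axis  # 270 lenslets down&lt;br /&gt;
num_views = views_per_axis ** 2        # 64 distinct views&lt;br /&gt;
&lt;br /&gt;
print(spatial_w, spatial_h, num_views)  # 480 270 64&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Under these assumptions a 4K panel yields only a 480×270 spatial image once 64 views are extracted, which is the trade-off in miniature.&lt;br /&gt;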
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; A toy factorization sketch of this approach follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; [[CREAL]] uses this approach in its near-eye modules. If cycled faster than the human visual system can resolve, the sequence creates the illusion of a continuous light field. Time multiplexing can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
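&lt;br /&gt;
The multiplicative multilayer approach can be posed as a factorization problem: choose per-layer transmittance patterns whose product best approximates the target light field. The toy sketch below works in a simplified 2D setting (one spatial and one angular dimension) with alternating least-squares updates clipped to physically valid transmittances; it is a minimal illustration under stated assumptions, not any vendor&#039;s algorithm.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Toy 2D analogue of multiplicative two-layer light field factorization:&lt;br /&gt;
# a ray with spatial index s and angular index u passes through back-layer&lt;br /&gt;
# pixel s and front-layer pixel s + u. We alternate least-squares solves&lt;br /&gt;
# for each layer, clipping to [0, 1]. Sizes and target are illustrative.&lt;br /&gt;
S, U = 64, 8&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
target = rng.random((S, U))            # target light field L(s, u)&lt;br /&gt;
&lt;br /&gt;
back = np.ones(S)&lt;br /&gt;
front = np.ones(S + U)                 # wider front layer covers all shifts&lt;br /&gt;
idx = np.arange(S)[:, None] + np.arange(U)[None, :]   # front pixel per ray&lt;br /&gt;
&lt;br /&gt;
for _ in range(50):&lt;br /&gt;
    # Fix front, solve back[s] minimizing sum_u (L[s,u] - back[s]*f[s,u])^2&lt;br /&gt;
    f = front[idx]                                     # shape (S, U)&lt;br /&gt;
    back = np.clip((target * f).sum(1) / np.maximum((f * f).sum(1), 1e-9), 0, 1)&lt;br /&gt;
    # Fix back, solve each front pixel over the rays passing through it&lt;br /&gt;
    num = np.zeros(S + U)&lt;br /&gt;
    den = np.zeros(S + U)&lt;br /&gt;
    np.add.at(num, idx, target * back[:, None])&lt;br /&gt;
    np.add.at(den, idx, (back * back)[:, None] * np.ones((S, U)))&lt;br /&gt;
    front = np.clip(num / np.maximum(den, 1e-9), 0, 1)&lt;br /&gt;
&lt;br /&gt;
recon = back[:, None] * front[idx]&lt;br /&gt;
print(float(np.abs(recon - target).mean()))  # mean reconstruction error&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In the published tensor-display work, additional layers and time multiplexing enter as higher tensor ranks, but the alternating update structure is similar.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;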
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;&lt;br /&gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Viewing Zone]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Light Field Display&#039;&#039;&#039;&lt;br /&gt;
| No &amp;lt;small&amp;gt;(in many formats)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Moderate &amp;lt;small&amp;gt;(design-dependent)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], small size typically&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
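&lt;br /&gt;
For synthetic content, the multi-viewpoint rendering step above amounts to sweeping a virtual camera across a small grid on a capture plane. The sketch below computes only the camera offsets for such a grid; the grid size and baseline are arbitrary example values, and the render call is left as a placeholder because the exact API depends on the engine or ray tracer in use.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Camera offsets for rendering a light field as a grid of viewpoints.&lt;br /&gt;
# Grid size and baseline are illustrative; render_view is a placeholder&lt;br /&gt;
# for whatever engine or ray tracer actually produces each image.&lt;br /&gt;
def light_field_camera_grid(views_h=8, views_v=8, baseline=0.01):&lt;br /&gt;
    # baseline: spacing between adjacent viewpoints, in scene units&lt;br /&gt;
    cams = []&lt;br /&gt;
    for j in range(views_v):&lt;br /&gt;
        for i in range(views_h):&lt;br /&gt;
            x = (i - (views_h - 1) / 2) * baseline&lt;br /&gt;
            y = (j - (views_v - 1) / 2) * baseline&lt;br /&gt;
            cams.append((x, y, 0.0))   # offset on the capture plane&lt;br /&gt;
    return cams&lt;br /&gt;
&lt;br /&gt;
for pos in light_field_camera_grid():&lt;br /&gt;
    pass  # render_view(pos) would go here, engine-specific&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;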
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still maturing compared to standard 2D or stereoscopic 3D, in part because no consumer light field hardware is widely available.&lt;br /&gt;
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses, aiming to solve the VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, with a focus on resolving the VAC.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Neural representations (such as NeRF) are also being explored for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36366</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36366"/>
		<updated>2025-08-04T05:58:35Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Comparison with Other 3D Display Technologies */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes, in many implementations without special eyewear.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This display approach is considered particularly important for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing the eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D&#039;&#039;&#039;&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;Japan Display Inc. (2016, December 5). Ultra-High Resolution Display with Integrated Parallax Barrier for Glasses-Free 3D [Press release]. Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Parallax barriers are often less light-efficient than MLAs because the barrier absorbs much of the emitted light.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;Light Field Lab. SolidLight™ Platform Overview. https://www.lightfieldlab.com/ (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; if the cycle repeats faster than the eye can resolve, the result is perceived as a continuous light field. [[CREAL]] uses this approach in its near-eye modules.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Time multiplexing can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
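&lt;br /&gt;
The spatio-angular trade-off noted above can be estimated with simple arithmetic. The sketch below is illustrative only: the panel resolution, pixel pitch, and lenslet pitch are assumed example values, not the specification of any particular display.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Simplified spatio-angular budget for a microlens-array (MLA) light field display.&lt;br /&gt;
# Illustrative numbers only; a real design also depends on lenslet focal length,&lt;br /&gt;
# eye relief, and aperture, which this sketch ignores.&lt;br /&gt;
panel_px_x, panel_px_y = 3840, 2160    # underlying panel resolution in pixels&lt;br /&gt;
pixel_pitch_mm = 0.03                  # panel pixel pitch&lt;br /&gt;
lenslet_pitch_mm = 0.30                # microlens pitch&lt;br /&gt;
&lt;br /&gt;
px_per_lenslet = round(lenslet_pitch_mm / pixel_pitch_mm)   # 10 pixels per lenslet side&lt;br /&gt;
spatial_res_x = panel_px_x // px_per_lenslet                # 384 spatial samples across&lt;br /&gt;
spatial_res_y = panel_px_y // px_per_lenslet                # 216 spatial samples down&lt;br /&gt;
angular_views = px_per_lenslet ** 2                         # 100 distinct view directions&lt;br /&gt;
&lt;br /&gt;
print(spatial_res_x, spatial_res_y, angular_views)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doubling the views per axis (a 0.6 mm lenslet pitch on the same panel) would halve the spatial resolution per axis, which is exactly the trade-off stated above.&lt;br /&gt;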
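&lt;br /&gt;
For multilayer displays, the computational step mentioned above is a low-rank factorization: the target light field is approximated by a product of non-negative per-layer transmittance patterns. A minimal two-layer, rank-1 toy version (a plain multiplicative update in the spirit of the non-negative factorizations used in the tensor-display literature, not the published solvers) might look like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Toy two-layer factorization: approximate a light field matrix L (views x pixels)&lt;br /&gt;
# by the outer product of two non-negative layer patterns a and b, mimicking how&lt;br /&gt;
# stacked panels multiply light passing through them.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
L = rng.random((16, 64))      # toy light field, flattened to views x pixels&lt;br /&gt;
a = rng.random(16) + 0.1      # front-layer pattern, random initial guess&lt;br /&gt;
b = rng.random(64) + 0.1      # rear-layer pattern&lt;br /&gt;
&lt;br /&gt;
for _ in range(200):          # multiplicative updates keep a and b non-negative&lt;br /&gt;
    a *= (L @ b) / (a * (b @ b) + 1e-9)&lt;br /&gt;
    b *= (L.T @ a) / (b * (a @ a) + 1e-9)&lt;br /&gt;
&lt;br /&gt;
rel_error = np.linalg.norm(L - np.outer(a, b)) / np.linalg.norm(L)&lt;br /&gt;
print(rel_error)              # residual of the rank-1 approximation&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Real tensor displays solve time-multiplexed, higher-rank decompositions over more layers, but the multiplicative structure is the same.&lt;br /&gt;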
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;Light Field Lab (2021, October 7). Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed [Press release]. https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) instead tracks a single viewer with high-speed vision sensors and steers views through a micro-optical lens, demonstrating high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;Sony Professional. ELF-SR2 Spatial Reality Display. https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|View Field]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Light Field Display&#039;&#039;&#039;&lt;br /&gt;
| No (in most formats)&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(only with head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display size&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt; A viewpoint-grid sketch follows this list.&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
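&lt;br /&gt;
As a concrete example of the multi-view rendering mentioned above, the sketch below computes eye positions and view directions for an n x n grid of virtual cameras; an engine such as Unity or Unreal would then render the scene once per pose. The grid size, baseline, and target distance are assumed example values, not tied to any particular display.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Sketch: camera poses for rendering a light field as an n x n grid of views.&lt;br /&gt;
# baseline is the total width of the camera grid in meters; the scene target&lt;br /&gt;
# sits 1 m in front of the grid. All values are illustrative.&lt;br /&gt;
def view_grid(n=8, baseline=0.06):&lt;br /&gt;
    target = np.array([0.0, 0.0, 1.0])&lt;br /&gt;
    offsets = np.linspace(-baseline / 2, baseline / 2, n)&lt;br /&gt;
    cameras = []&lt;br /&gt;
    for y in offsets:&lt;br /&gt;
        for x in offsets:&lt;br /&gt;
            eye = np.array([x, y, 0.0])&lt;br /&gt;
            forward = target - eye&lt;br /&gt;
            forward /= np.linalg.norm(forward)   # unit view direction&lt;br /&gt;
            cameras.append((eye, forward))&lt;br /&gt;
    return cameras&lt;br /&gt;
&lt;br /&gt;
print(len(view_grid()))   # 64 poses for an 8 x 8 light field&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;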
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; A back-of-envelope data-rate estimate follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still immature compared with standard 2D or stereoscopic 3D, in part because no consumer light field hardware is widely available.&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
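&lt;br /&gt;
The bandwidth challenge above is easy to quantify. The view count, per-view resolution, and frame rate below are assumed example values, not the requirements of any specific display.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Back-of-envelope uncompressed data rate for a real-time light field stream.&lt;br /&gt;
# All numbers are assumed example values, not measurements of any product.&lt;br /&gt;
views_x, views_y = 8, 8        # 8 x 8 angular views&lt;br /&gt;
res_x, res_y = 1920, 1080      # spatial resolution per view&lt;br /&gt;
fps = 60                       # frame rate&lt;br /&gt;
bytes_per_px = 3               # 8-bit RGB&lt;br /&gt;
&lt;br /&gt;
rate = views_x * views_y * res_x * res_y * fps * bytes_per_px&lt;br /&gt;
print(rate / 1e9)              # about 23.9 GB per second, uncompressed&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even modest view counts therefore demand aggressive compression or on-device view synthesis, which is one reason neural representations are of interest (see Future Directions below).&lt;br /&gt;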
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36364</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36364"/>
		<updated>2025-08-04T05:57:25Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Types of Light Field Displays */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
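&lt;br /&gt;
Formally, following Adelson and Bergen, the full plenoptic function is the radiance &amp;lt;math&amp;gt;L(x, y, z, \theta, \phi, \lambda, t)&amp;lt;/math&amp;gt; observed at position &amp;lt;math&amp;gt;(x, y, z)&amp;lt;/math&amp;gt;, in direction &amp;lt;math&amp;gt;(\theta, \phi)&amp;lt;/math&amp;gt;, at wavelength &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and time &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt; Because radiance is constant along a ray in free space, and color and time are sampled per channel and per frame, the quantity a display must reproduce reduces to the 4D light field &amp;lt;math&amp;gt;L(u, v, s, t)&amp;lt;/math&amp;gt;, conventionally parameterized by the ray&#039;s intersections &amp;lt;math&amp;gt;(u, v)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(s, t)&amp;lt;/math&amp;gt; with two parallel planes.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;/&amp;gt;&lt;br /&gt;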
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Provides 3D imagery without special eyewear, especially in non-headset formats.&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
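&lt;br /&gt;
A rough worked example makes this trade-off concrete. The sketch below is a minimal illustration with purely hypothetical numbers (a 4K panel split among 8×8 directional views), not the specification of any actual product.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Spatio-angular trade-off for a hypothetical microlens-array LFD.&lt;br /&gt;
panel_w, panel_h = 3840, 2160   # underlying panel resolution (pixels)&lt;br /&gt;
views_x, views_y = 8, 8         # pixels per lenslet = angular samples per axis&lt;br /&gt;
&lt;br /&gt;
# Each lenslet spends an 8x8 patch of panel pixels on ray directions,&lt;br /&gt;
# so perceived spatial resolution drops by that factor on each axis.&lt;br /&gt;
spatial_w = panel_w // views_x  # 480 lenslets across&lt;br /&gt;
spatial_h = panel_h // views_y  # 270 lenslets down&lt;br /&gt;
num_views = views_x * views_y   # 64 directional views&lt;br /&gt;
&lt;br /&gt;
print(spatial_w, spatial_h, num_views)  # 480 270 64&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;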
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; A minimal factorization sketch follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; this is the approach [[CREAL]] uses. If the cycle runs faster than human perception, it creates the illusion of a continuous light field, and it can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
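&lt;br /&gt;
To make the multilayer approach above concrete, the minimal sketch referenced in that entry follows. Assuming only NumPy, it factorizes a 2D slice of a target light field, indexed by (front-layer pixel, back-layer pixel), into nonnegative per-frame layer patterns whose outer products sum toward the target, in the spirit of the low-rank light field factorization cited above.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; It uses random stand-in data and NMF-style multiplicative updates; it is an illustration, not production display code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
n_front, n_back, rank = 64, 64, 3          # layer resolutions; rank = time-multiplexed frames&lt;br /&gt;
&lt;br /&gt;
target = rng.random((n_front, n_back))     # stand-in for a real light field slice&lt;br /&gt;
front = rng.random((n_front, rank)) + 0.1  # per-frame front-layer patterns&lt;br /&gt;
back = rng.random((n_back, rank)) + 0.1    # per-frame back-layer patterns&lt;br /&gt;
&lt;br /&gt;
eps = 1e-9&lt;br /&gt;
for _ in range(200):  # multiplicative updates keep the patterns nonnegative&lt;br /&gt;
    approx = front @ back.T&lt;br /&gt;
    front *= (target @ back) / (approx @ back + eps)&lt;br /&gt;
    approx = front @ back.T&lt;br /&gt;
    back *= (target.T @ front) / (approx.T @ front + eps)&lt;br /&gt;
&lt;br /&gt;
err = np.linalg.norm(target - front @ back.T) / np.linalg.norm(target)&lt;br /&gt;
print(err)  # relative error; falls as rank (number of frames) grows&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;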
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive 3D experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;&lt;br /&gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| Usually no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], small size typically&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt; A camera-grid sketch appears after this list.&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
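&lt;br /&gt;
The camera-grid sketch referenced above is a minimal, engine-agnostic illustration: cameras are offset on a (u, v) plane with parallel optical axes, and each projection window is shifted (a sheared frustum) so that a shared focal plane lands at zero parallax in every view. The grid size, baseline, and distances are illustrative assumptions; a real pipeline would apply these offsets and window shifts to Unity or Unreal cameras.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
views_x, views_y = 8, 8   # angular resolution of the target display&lt;br /&gt;
baseline = 0.30           # width of the camera plane (meters)&lt;br /&gt;
focal_dist = 2.0          # distance to the zero-parallax plane (meters)&lt;br /&gt;
&lt;br /&gt;
cameras = []&lt;br /&gt;
for j in range(views_y):&lt;br /&gt;
    for i in range(views_x):&lt;br /&gt;
        # offsets span [-baseline/2, +baseline/2] on the camera plane&lt;br /&gt;
        u = (i / (views_x - 1) - 0.5) * baseline&lt;br /&gt;
        v = (j / (views_y - 1) - 0.5) * baseline&lt;br /&gt;
        # shift the projection window so the focal plane has zero parallax&lt;br /&gt;
        shear_x = -u / focal_dist&lt;br /&gt;
        shear_y = -v / focal_dist&lt;br /&gt;
        cameras.append((u, v, shear_x, shear_y))&lt;br /&gt;
&lt;br /&gt;
print(len(cameras))  # 64 views to render, one per display direction&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;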
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs are extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; A back-of-the-envelope estimate appears after this list.&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still immature compared to standard 2D or stereoscopic 3D, in part because no consumer light field hardware is widely available.&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
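&lt;br /&gt;
The back-of-the-envelope estimate referenced above makes the bandwidth challenge concrete. All numbers are illustrative assumptions (the 8×8-view, 480×270-per-view figures from the trade-off example earlier), not measurements of any shipping system.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Uncompressed data rate for hypothetical light field video.&lt;br /&gt;
views = 64                # 8 x 8 directional views&lt;br /&gt;
width, height = 480, 270  # spatial resolution per view&lt;br /&gt;
fps = 60&lt;br /&gt;
bytes_per_pixel = 3       # 8-bit RGB&lt;br /&gt;
&lt;br /&gt;
rate = views * width * height * bytes_per_pixel * fps  # bytes per second&lt;br /&gt;
print(rate / 1e9)         # roughly 1.5 GB/s before any compression&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;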
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses, aiming to solve the VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, with a focus on resolving the VAC.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources,&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; and using neural representations (such as NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36363</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36363"/>
		<updated>2025-08-04T05:56:56Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Technical Implementations (How They Work) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Provides 3D imagery without special eyewear, especially in non-headset formats.&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence; this is the approach [[CREAL]] uses. If the cycle runs faster than human perception, it creates the illusion of a continuous light field, and it can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive &amp;quot;holographic&amp;quot; experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens to serve a single tracked viewer, demonstrating high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| Usually no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display size&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed (see the multi-view rendering sketch after this list).&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
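&lt;br /&gt;
To make the multi-viewpoint rendering above concrete, here is a minimal sketch in Python. It is an assumed illustration, not engine code: render_view stands in for a hypothetical renderer callback, and the 0.01 m spacing is an arbitrary example value.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch: render an N x M grid of views for a light field display.&lt;br /&gt;
# render_view(scene, dx, dy) is assumed to render the scene from a&lt;br /&gt;
# camera shifted by (dx, dy) in the display plane, using a sheared&lt;br /&gt;
# (off-axis) projection so all views share one focal plane.&lt;br /&gt;
def view_offsets(views_x, views_y, spacing_m):&lt;br /&gt;
    # Cameras share one look direction; only their positions shift,&lt;br /&gt;
    # which is what produces horizontal and vertical parallax.&lt;br /&gt;
    offsets = []&lt;br /&gt;
    for iy in range(views_y):&lt;br /&gt;
        for ix in range(views_x):&lt;br /&gt;
            dx = (ix - (views_x - 1) / 2) * spacing_m&lt;br /&gt;
            dy = (iy - (views_y - 1) / 2) * spacing_m&lt;br /&gt;
            offsets.append((dx, dy))&lt;br /&gt;
    return offsets&lt;br /&gt;
&lt;br /&gt;
def render_light_field(scene, views_x=8, views_y=8, spacing_m=0.01):&lt;br /&gt;
    # One image per direction; the display interleaves them.&lt;br /&gt;
    return [render_view(scene, dx, dy)&lt;br /&gt;
            for dx, dy in view_offsets(views_x, views_y, spacing_m)]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;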
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count (a worked example follows this list).&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still immature compared with standard 2D or stereoscopic 3D, partly because no consumer light field hardware is widely available.&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
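&lt;br /&gt;
As a worked example of the first trade-off, the sketch below (Python, with assumed example numbers) splits a fixed pixel budget between spatial samples (one per lenslet) and angular samples (pixels behind each lenslet):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Assumed example: a 4K-class panel behind a microlens array.&lt;br /&gt;
panel_w, panel_h = 3840, 2160   # physical panel pixels&lt;br /&gt;
views_x, views_y = 8, 8         # angular samples per lenslet&lt;br /&gt;
&lt;br /&gt;
# Perceived spatial resolution collapses to the lenslet count:&lt;br /&gt;
spatial_w = panel_w // views_x   # 480&lt;br /&gt;
spatial_h = panel_h // views_y   # 270&lt;br /&gt;
&lt;br /&gt;
# The ray budget is fixed: doubling the views per axis halves the&lt;br /&gt;
# spatial resolution per axis, and vice versa.&lt;br /&gt;
assert panel_w * panel_h == spatial_w * spatial_h * views_x * views_y&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;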
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, aimed at resolving the VAC.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36362</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36362"/>
		<updated>2025-08-04T05:55:21Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
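&lt;br /&gt;
In the common two-plane parameterization, each ray is indexed by its intersections (u, v) and (s, t) with two parallel reference planes, so a display effectively drives a discretized 4D table L[u, v, s, t]. A minimal lookup sketch (Python with NumPy; the array sizes are assumed example values):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Assumed layout: angular axes (u, v), spatial axes (s, t), RGB per ray.&lt;br /&gt;
U, V, S, T = 8, 8, 128, 72&lt;br /&gt;
L = np.zeros((U, V, S, T, 3), dtype=np.float32)&lt;br /&gt;
&lt;br /&gt;
def nearest_ray(u, v, s, t):&lt;br /&gt;
    # Nearest-sample lookup for normalized coordinates in [0, 1]:&lt;br /&gt;
    # the radiance the display must emit along the ray through&lt;br /&gt;
    # spatial sample (s, t) in direction (u, v).&lt;br /&gt;
    iu = int(round(u * (U - 1)))&lt;br /&gt;
    iv = int(round(v * (V - 1)))&lt;br /&gt;
    js = int(round(s * (S - 1)))&lt;br /&gt;
    jt = int(round(t * (T - 1)))&lt;br /&gt;
    return L[iu, iv, js, jt]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;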
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]] (a vergence-angle example follows this list).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
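&lt;br /&gt;
The vergence side of the conflict is easy to quantify. A minimal sketch (Python; the 64 mm interpupillary distance is an assumed typical value) of the vergence angle at different fixation distances:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
# Vergence angle for eyes separated by ipd_m fixating at d_m.&lt;br /&gt;
# Under the VAC the eyes converge for d_m while accommodation&lt;br /&gt;
# stays at the fixed focal plane of a conventional headset.&lt;br /&gt;
def vergence_deg(d_m, ipd_m=0.064):&lt;br /&gt;
    return math.degrees(2 * math.atan(ipd_m / (2 * d_m)))&lt;br /&gt;
&lt;br /&gt;
print(vergence_deg(0.5))   # about 7.3 degrees at 50 cm&lt;br /&gt;
print(vergence_deg(2.0))   # about 1.8 degrees at 2 m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;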
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Most non-headset LFD formats present 3D imagery without special eyewear.&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet (see the index-mapping sketch after this list).&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Barrier approaches are often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence. If the cycle repeats faster than the eye can resolve, it creates the illusion of a continuous light field, and it can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
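&lt;br /&gt;
The microlens-array item above reduces to a fixed index mapping between panel pixels and rays. A small sketch (Python; square lenslets covering p x p pixels is an assumed geometry) of how one pixel decomposes into a spatial sample (which lenslet) and an angular sample (which view under that lenslet):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Assumed geometry: square lenslets, each covering p x p panel pixels.&lt;br /&gt;
def pixel_to_ray(px, py, p):&lt;br /&gt;
    lens_x, view_x = divmod(px, p)   # which lenslet / which view&lt;br /&gt;
    lens_y, view_y = divmod(py, p)&lt;br /&gt;
    # (lens_x, lens_y) is the spatial sample; (view_x, view_y)&lt;br /&gt;
    # selects the exit direction the lenslet assigns to that pixel.&lt;br /&gt;
    return (lens_x, lens_y), (view_x, view_y)&lt;br /&gt;
&lt;br /&gt;
# Example: with 10 x 10 pixels per lenslet, panel pixel (1234, 567)&lt;br /&gt;
# sits under lenslet (123, 56) and feeds view (4, 7).&lt;br /&gt;
assert pixel_to_ray(1234, 567, 10) == ((123, 56), (4, 7))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;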
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive &amp;quot;holographic&amp;quot; experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens to serve a single tracked viewer, demonstrating high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| Usually no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(requires head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display size&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still immature compared with standard 2D or stereoscopic 3D, partly because no consumer light field hardware is widely available.&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, aimed at resolving the VAC.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36361</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36361"/>
		<updated>2025-08-04T05:54:41Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Looking Glass Factory and Leia DO NOT and NEVER HAVE made REAL light field displays. It is false advertising.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” &#039;&#039;Results in Optics&#039;&#039;, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
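&lt;br /&gt;
As a notational sketch using a common convention (the symbols here are illustrative, not tied to a single source): the full plenoptic function can be written &amp;lt;math&amp;gt;L(x, y, z, \theta, \phi, \lambda, t)&amp;lt;/math&amp;gt;, the radiance seen at position &amp;lt;math&amp;gt;(x, y, z)&amp;lt;/math&amp;gt; looking in direction &amp;lt;math&amp;gt;(\theta, \phi)&amp;lt;/math&amp;gt; at wavelength &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; and time &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;. Because radiance is constant along a ray in free space, a display only needs the 4D reduction, commonly parameterized by a ray&#039;s intersections &amp;lt;math&amp;gt;(u, v)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(s, t)&amp;lt;/math&amp;gt; with two parallel planes: &amp;lt;math&amp;gt;L(u, v, s, t)&amp;lt;/math&amp;gt;.&lt;br /&gt;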
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Viewing without specialized eyewear (especially in non-headset formats).&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats (especially desktop and larger) can be viewed without special eyewear, by multiple users simultaneously.&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
&#039;&#039;ACM Transactions on Graphics&#039;&#039;, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
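&lt;br /&gt;
As a back-of-the-envelope illustration of the trade-off just described (the numbers are hypothetical, chosen for easy arithmetic): a 4,000 × 2,000 pixel panel behind a microlens array that assigns 10 × 10 pixels to each lenslet delivers only 400 × 200 spatial samples, each emitting light in 10 × 10 = 100 distinct directions. Switching to 5 × 5 pixels per lenslet doubles the spatial sampling in each dimension (800 × 400) but cuts the number of views from 100 to 25.&lt;br /&gt;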
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; A short numerical sketch of this multiplicative model follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). &#039;&#039;Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D&#039;&#039; [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. &#039;&#039;SolidLight™ Platform Overview.&#039;&#039; https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence. If cycled faster than human perception, this creates the illusion of a continuous light field. Can be combined with other techniques like directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” &#039;&#039;Visual Computing for Industry, Biomedicine and Art&#039;&#039;, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” &#039;&#039;Forbes&#039;&#039;, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
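&lt;br /&gt;
The multilayer (multiplicative) approach above lends itself to a short numerical sketch. The following is a toy 1D model under stated assumptions (uniform backlight, two attenuation layers, a simple per-view shear between layers); the names and numbers are hypothetical and not taken from any published system:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy two-layer multiplicative (attenuation) display, 1D slice.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
WIDTH = 64   # pixels per layer&lt;br /&gt;
VIEWS = 5    # angular samples, u in {-2, ..., 2}&lt;br /&gt;
GAP = 2      # pixel shift per unit view angle (models layer separation)&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
front = rng.uniform(0.2, 1.0, WIDTH)  # front-layer transmittance&lt;br /&gt;
rear = rng.uniform(0.2, 1.0, WIDTH)   # rear-layer transmittance&lt;br /&gt;
&lt;br /&gt;
def emitted_light_field(front, rear):&lt;br /&gt;
    # Ray (u, x) crosses the front layer at x and the rear layer at&lt;br /&gt;
    # x + GAP*u; a multiplicative stack emits the product of the two&lt;br /&gt;
    # transmittances (backlight normalized to 1).&lt;br /&gt;
    lf = np.zeros((VIEWS, WIDTH))&lt;br /&gt;
    for ui, u in enumerate(range(-(VIEWS // 2), VIEWS // 2 + 1)):&lt;br /&gt;
        xr = np.clip(np.arange(WIDTH) + GAP * u, 0, WIDTH - 1)&lt;br /&gt;
        lf[ui] = front * rear[xr]&lt;br /&gt;
    return lf&lt;br /&gt;
&lt;br /&gt;
lf = emitted_light_field(front, rear)&lt;br /&gt;
print(lf.shape)  # (5, 64): one 1D image per viewing direction&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Optimizing the layer patterns so that this product approximates a target light field (for example via low-rank factorization) is the core idea behind content-adaptive parallax barriers and tensor displays.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;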
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive &amp;quot;holographic&amp;quot; experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). &#039;&#039;Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.&#039;&#039;  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;&lt;br /&gt;
Sony Professional. &#039;&#039;ELF‑SR2 Spatial Reality Display.&#039;&#039;  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| No (often)&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(possible only with added head tracking)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often Horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display size&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt; A minimal camera-grid sketch follows this list.&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
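&lt;br /&gt;
For the engine-rendering route mentioned above, the camera placement step can be sketched as follows. This only computes a grid of virtual-camera offsets spanning a viewing aperture; the function name and numbers are hypothetical, and an engine (Unity, Unreal, or a ray tracer) would render one off-axis view per offset:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Hypothetical helper: eye offsets for an N x M grid of virtual cameras.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def camera_grid(cols=8, rows=6, aperture_w=0.50, aperture_h=0.35):&lt;br /&gt;
    # Evenly spaced offsets (scene units) across the viewing aperture,&lt;br /&gt;
    # centered on the nominal eye position.&lt;br /&gt;
    xs = np.linspace(-aperture_w / 2, aperture_w / 2, cols)&lt;br /&gt;
    ys = np.linspace(-aperture_h / 2, aperture_h / 2, rows)&lt;br /&gt;
    return [(x, y) for y in ys for x in xs]&lt;br /&gt;
&lt;br /&gt;
views = camera_grid()&lt;br /&gt;
print(len(views))  # 48 views, each rendered with a sheared frustum&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;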
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D.&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” &#039;&#039;Proceedings of SPIE&#039;&#039; 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Vergence&amp;diff=36360</id>
		<title>Vergence</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Vergence&amp;diff=36360"/>
		<updated>2025-08-04T05:51:43Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Copy from https://www.xvrwiki.org/wiki/Vergence&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Vergence&#039;&#039;&#039; is the simultaneous rotation of a person&#039;s two [[eye]]s in opposite directions, which changes the angle between their lines of sight so that both eyes stay fixed on the same point.&lt;br /&gt;
&lt;br /&gt;
A large vergence angle, produced when fixating a very near point, gives a cross-eyed appearance.&lt;br /&gt;
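&lt;br /&gt;
As a simple geometric sketch (standard binocular geometry, not specific to any source): a viewer with interpupillary distance &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; fixating a point at distance &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; has a vergence angle of approximately &amp;lt;math&amp;gt;\theta = 2 \arctan\left(\tfrac{p}{2d}\right)&amp;lt;/math&amp;gt;, so the angle grows as the fixation point moves closer.&lt;br /&gt;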
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Human vision]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36359</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36359"/>
		<updated>2025-08-04T05:51:03Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;&amp;gt;Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Often, viewing without specialized eyewear (especially in non-headset formats).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example CREAL, Light Field Lab, PetaRay).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence. If cycled faster than human perception, this creates the illusion of a continuous light field. Can be combined with other techniques like directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive &amp;quot;holographic&amp;quot; experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) uses high-speed vision sensors and a micro-optical lens for a single user but demonstrates high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;&lt;br /&gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| No (in many formats)&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(unless head-tracked)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display size&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed (a minimal camera-layout sketch follows this list).&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;Existing 3D Content Conversion:&#039;&#039;&#039; Plugins and software tools (for example provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
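&lt;br /&gt;
As a concrete illustration of the multi-viewpoint rendering described above, the following minimal Python sketch computes horizontal camera offsets and the matching projection shear for a row of views spanning a viewing cone. It is a hypothetical sketch, not code from any engine or vendor SDK; the function name and the parameters (45 views, 40° cone) are assumed example values.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
def quilt_views(num_views=45, view_cone_deg=40.0, focal_dist=1.0):&lt;br /&gt;
    # Slide the camera sideways across the viewing cone; pair each offset&lt;br /&gt;
    # with a sheared (off-axis) frustum so every view converges on the&lt;br /&gt;
    # same focal plane, keeping objects at the display surface registered.&lt;br /&gt;
    half_cone = math.radians(view_cone_deg) / 2.0&lt;br /&gt;
    for i in range(num_views):&lt;br /&gt;
        t = 2.0 * i / (num_views - 1) - 1.0   # normalized position in [-1, 1]&lt;br /&gt;
        x_offset = focal_dist * math.tan(half_cone) * t&lt;br /&gt;
        shear = -x_offset / focal_dist        # skew frustum back toward center&lt;br /&gt;
        yield x_offset, shear&lt;br /&gt;
&lt;br /&gt;
for x, s in quilt_views(num_views=5):&lt;br /&gt;
    print(f&amp;quot;camera offset {x:+.3f}, projection shear {s:+.3f}&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Each (offset, shear) pair defines one rendered view; tiling the resulting images produces the multi-view image set (sometimes called a &amp;quot;quilt&amp;quot;) that display-specific software then interleaves for the optics.&lt;br /&gt;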
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, and geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Digital Signage]] and Advertising:&#039;&#039;&#039; Eye-catching glasses-free 3D displays for retail and public spaces.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Automotive Displays:&#039;&#039;&#039; [[Head-up display|Heads-up displays]] (HUDs) or dashboards presenting information at appropriate depths.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count (see the worked example after this list).&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
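&lt;br /&gt;
To make the spatio-angular trade-off concrete, the short calculation below divides a fixed pixel budget among views. The 8K panel and the view counts are assumed example figures for illustration, not measurements of any particular product.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
panel_px = 7680 * 4320              # 8K panel: about 33.2 million pixels total&lt;br /&gt;
for views in (9, 45, 100):&lt;br /&gt;
    per_view = panel_px / views     # pixel budget left for each distinct view&lt;br /&gt;
    print(views, &amp;quot;views:&amp;quot;, round(per_view / 1e6, 2), &amp;quot;megapixels per view&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Even a 33-megapixel panel falls below one megapixel of spatial resolution per view once 45 or more views share it, which is why higher angular density comes directly at the cost of sharpness.&lt;br /&gt;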
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focused on resolving the VAC.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources (a toy factorization sketch follows this list).&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
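&lt;br /&gt;
As a toy illustration of the layer-pattern optimization mentioned above, the sketch below factors a random non-negative &amp;quot;light field&amp;quot; matrix into two per-layer patterns using standard multiplicative non-negative updates. This is a rank-1, two-layer caricature for intuition only, with random data; it is not the published tensor-display algorithm.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
L = rng.random((64, 32))   # toy target light field (view angle x pixel)&lt;br /&gt;
f = rng.random(64)         # front-layer transmittance pattern&lt;br /&gt;
g = rng.random(32)         # rear-layer transmittance pattern&lt;br /&gt;
eps = 1e-9&lt;br /&gt;
for _ in range(200):       # multiplicative updates keep patterns non-negative&lt;br /&gt;
    f *= (L @ g) / (f * (g @ g) + eps)&lt;br /&gt;
    g *= (L.T @ f) / (g * (f @ f) + eps)&lt;br /&gt;
err = np.linalg.norm(L - np.outer(f, g)) / np.linalg.norm(L)&lt;br /&gt;
print(&amp;quot;relative reconstruction error:&amp;quot;, round(float(err), 3))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Real multilayer displays factor much larger 4D light fields across several layers and time frames, but the same non-negativity constraint (physical layers can only attenuate light) shapes the optimization.&lt;br /&gt;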
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36357</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36357"/>
		<updated>2025-08-04T05:46:43Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Key Characteristics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;br /&gt;
&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;&amp;gt;Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This display approach is considered crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR) because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” &#039;&#039;Results in Optics&#039;&#039;, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
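&lt;br /&gt;
For example, in the two-plane parameterization commonly used for light field rendering,&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;/&amp;gt; each ray is indexed by its intersection points with two parallel reference planes, reducing the sampled function to a 4D light field&lt;br /&gt;
:&amp;lt;math&amp;gt;L = L(u, v, s, t),&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;(u, v)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(s, t)&amp;lt;/math&amp;gt; are the ray&#039;s coordinates on the two planes. A display that can assign an independent radiance to each &amp;lt;math&amp;gt;(u, v, s, t)&amp;lt;/math&amp;gt; sample controls both where light leaves the screen and in which direction it travels.&lt;br /&gt;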
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Often, viewing without specialized eyewear (especially in non-headset formats).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; As a primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). “Near-Eye Light Field Displays.” &#039;&#039;ACM Transactions on Graphics&#039;&#039;, 32(6), 220:1–220:10. doi:10.1145/2508363.2508366. Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example Sony, CREAL, Light Field Lab).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet (see the worked example after this list).&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). &#039;&#039;Ultra-High Resolution Display with Integrated Parallax Barrier for Glasses-Free 3D&#039;&#039; [Press release].&lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. &#039;&#039;SolidLight™ Platform Overview.&#039;&#039; https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence. If the sequence cycles faster than the human visual system can resolve, the viewer perceives a continuous light field. This approach can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” &#039;&#039;Visual Computing for Industry, Biomedicine and Art&#039;&#039;, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” &#039;&#039;Forbes&#039;&#039;, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
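&lt;br /&gt;
The microlens-array trade-off described above can be made concrete with a short calculation. The following Python sketch uses purely illustrative numbers (no specific product is implied) to show how a fixed pixel budget is split between spatial and angular resolution:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Illustrative spatio-angular budget for a microlens-array (MLA) display.&lt;br /&gt;
# All numbers are hypothetical, chosen only to expose the trade-off.&lt;br /&gt;
panel_px_x, panel_px_y = 3840, 2160  # underlying panel resolution (pixels)&lt;br /&gt;
px_per_lenslet = 8                   # pixels behind each lenslet, per axis&lt;br /&gt;
&lt;br /&gt;
# Spatial resolution: roughly one displayed pixel per lenslet.&lt;br /&gt;
spatial_x = panel_px_x // px_per_lenslet  # 480&lt;br /&gt;
spatial_y = panel_px_y // px_per_lenslet  # 270&lt;br /&gt;
&lt;br /&gt;
# Angular resolution: each lenslet fans its pixels into distinct views.&lt;br /&gt;
num_views = px_per_lenslet ** 2           # 64 view directions (8 x 8)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Doubling the pixels per lenslet would quadruple the view count but halve the perceived image resolution along each axis, which is exactly the spatio-angular tension the references above describe.&lt;br /&gt;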
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive &amp;quot;holographic&amp;quot; experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;Light Field Lab Press Release (2021, Oct 7). &#039;&#039;Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.&#039;&#039; https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) tracks a single viewer with high-speed vision sensors and a micro-optical lens, demonstrating high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;Sony Professional. &#039;&#039;ELF-SR2 Spatial Reality Display.&#039;&#039; https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical [[Field of view|Field of View]]&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| No &amp;lt;small&amp;gt;(in many formats)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(unless head-tracked)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display sizes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed (see the camera-grid sketch after this list).&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;Existing 3D Content Conversion:&#039;&#039;&#039; Plugins and software tools (for example provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
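&lt;br /&gt;
The multi-viewpoint rendering step can be sketched in a few lines. The following Python example (the helper name, grid size, baseline, and focus distance are illustrative assumptions, not any engine&#039;s API) computes a grid of camera poses that all aim at a shared focus point; each returned pose would drive one rendered view:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal sketch: generate an n_x by n_y grid of virtual cameras for&lt;br /&gt;
# multi-view light field rendering. Engine integration is omitted.&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def camera_grid(n_x=8, n_y=6, baseline=0.30, focus_dist=1.5):&lt;br /&gt;
    # Cameras sit on the z = 0 plane, spread across a small baseline,&lt;br /&gt;
    # and all look at a shared focus point so the subject stays centered&lt;br /&gt;
    # in every rendered view.&lt;br /&gt;
    xs = np.linspace(-baseline / 2, baseline / 2, n_x)&lt;br /&gt;
    ys = np.linspace(-baseline / 2, baseline / 2, n_y)&lt;br /&gt;
    target = np.array([0.0, 0.0, focus_dist])&lt;br /&gt;
    cams = []&lt;br /&gt;
    for y in ys:&lt;br /&gt;
        for x in xs:&lt;br /&gt;
            pos = np.array([x, y, 0.0])&lt;br /&gt;
            forward = target - pos&lt;br /&gt;
            forward /= np.linalg.norm(forward)&lt;br /&gt;
            cams.append((pos, forward))&lt;br /&gt;
    return cams  # one (position, view direction) pair per rendered view&lt;br /&gt;
&lt;br /&gt;
views = camera_grid()  # 48 viewpoints for an 8 x 6 view grid&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;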
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Digital Signage]] and Advertising:&#039;&#039;&#039; Eye-catching glasses-free 3D displays for retail and public spaces.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Automotive Displays:&#039;&#039;&#039; [[Head-up display|Heads-up displays]] (HUDs) or dashboards presenting information at appropriate depths.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure (a rough data-rate estimate follows this list).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
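&lt;br /&gt;
The bandwidth challenge is easy to quantify with a back-of-envelope estimate. All figures below are hypothetical, and real systems depend on aggressive compression:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Uncompressed data rate for streaming a multi-view light field.&lt;br /&gt;
# All figures are hypothetical.&lt;br /&gt;
num_views = 45                # rendered views per frame&lt;br /&gt;
width, height = 1024, 1024    # resolution of each view (pixels)&lt;br /&gt;
bytes_per_px = 3              # 8-bit RGB&lt;br /&gt;
fps = 60                      # frame rate&lt;br /&gt;
&lt;br /&gt;
bytes_per_second = num_views * width * height * bytes_per_px * fps&lt;br /&gt;
gbits_per_second = bytes_per_second * 8 / 1e9   # ~67.9 Gbit/s&lt;br /&gt;
# Even this modest configuration exceeds what typical display links and&lt;br /&gt;
# memory budgets sustain uncompressed, hence the reliance on compression.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;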
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; Swiss startup focused on compact near-eye LFD modules for AR/VR glasses aiming to solve VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources (a toy factorization sketch follows this list).&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Neural representations (like NeRF) are also being explored for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” &#039;&#039;Proceedings of SPIE&#039;&#039; 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
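&lt;br /&gt;
To make the first point above concrete, here is a toy sketch of layer-pattern optimization in the spirit of the low-rank light field factorization approach cited earlier. It approximates one 2D slice of a target light field as the product of two non-negative layer patterns using NMF-style multiplicative updates; the data, sizes, and iteration count are invented for illustration, and real systems add constraints (for example transmittances limited to [0, 1]) plus higher-rank, time-multiplexed decompositions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Toy two-layer factorization: approximate a target light field slice&lt;br /&gt;
# L[i, j] (ray crossing front-layer pixel i and rear-layer pixel j) by a&lt;br /&gt;
# product of two non-negative layer patterns, L[i, j] ~ f[i] * g[j].&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
L = rng.random((64, 64))           # invented target data&lt;br /&gt;
f = np.ones(64)                    # front-layer pattern&lt;br /&gt;
g = np.ones(64)                    # rear-layer pattern&lt;br /&gt;
&lt;br /&gt;
for _ in range(200):&lt;br /&gt;
    # Multiplicative updates keep both patterns non-negative.&lt;br /&gt;
    f *= (L @ g) / (f * (g @ g) + 1e-9)&lt;br /&gt;
    g *= (L.T @ f) / (g * (f @ f) + 1e-9)&lt;br /&gt;
&lt;br /&gt;
approx = np.outer(f, g)            # light field emitted by the two layers&lt;br /&gt;
rel_err = np.linalg.norm(L - approx) / np.linalg.norm(L)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;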
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36356</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36356"/>
		<updated>2025-08-04T05:46:32Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;br /&gt;
&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;&amp;gt;Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This display approach is considered crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR) because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” &#039;&#039;Results in Optics&#039;&#039;, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current [[head-mounted display]]s (HMDs).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Often, viewing without specialized eyewear (especially in non-headset formats).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Accommodation-Convergence Conflict Resolution:&#039;&#039;&#039; As a primary driver for VR/AR, LFDs can render virtual objects at appropriate focal distances, aligning accommodation and vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). “Near-Eye Light Field Displays.” &#039;&#039;ACM Transactions on Graphics&#039;&#039;, 32(6), 220:1–220:10. doi:10.1145/2508363.2508366. Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was mathematically formalized regarding radiance transfer by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed (for example MIT Media Lab).&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example Sony, CREAL, Light Field Lab).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). &#039;&#039;Ultra-High Resolution Display with Integrated Parallax Barrier for Glasses-Free 3D&#039;&#039; [Press release].&lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Often less light-efficient than MLAs.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. &#039;&#039;SolidLight™ Platform Overview.&#039;&#039; https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence. If the sequence cycles faster than the human visual system can resolve, the viewer perceives a continuous light field. This approach can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” &#039;&#039;Visual Computing for Industry, Biomedicine and Art&#039;&#039;, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” &#039;&#039;Forbes&#039;&#039;, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive &amp;quot;holographic&amp;quot; experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;Light Field Lab Press Release (2021, Oct 7). &#039;&#039;Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.&#039;&#039; https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) tracks a single viewer with high-speed vision sensors and a micro-optical lens, demonstrating high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;Sony Professional. &#039;&#039;ELF-SR2 Spatial Reality Display.&#039;&#039; https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical Viewing Zone ([[Field of view|FoV]])&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| Often no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(unless head-tracked)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display size&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
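The pixel-budget arithmetic behind this trade-off is direct. For a hypothetical integral-imaging panel (numbers chosen purely for illustration), an 8K panel of &amp;lt;math&amp;gt;7680 \times 4320&amp;lt;/math&amp;gt; pixels whose lenslets each cover &amp;lt;math&amp;gt;8 \times 8&amp;lt;/math&amp;gt; pixels delivers 64 distinct views, but the spatial resolution of each view falls to&lt;br /&gt;
: &amp;lt;math&amp;gt;\tfrac{7680}{8} \times \tfrac{4320}{8} = 960 \times 540.&amp;lt;/math&amp;gt;&lt;br /&gt;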
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt; A camera-array rendering sketch follows this list.&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;Existing 3D Content Conversion:&#039;&#039;&#039; Plugins and software tools (for example provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
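The multi-viewpoint rendering step above can be sketched as follows, assuming a planar array of cameras with off-axis (sheared) frusta so that every view converges on a shared focal plane; the function names, grid size, and optical parameters are invented for illustration and do not come from any engine&#039;s API.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def view_grid(rows=8, cols=8, baseline=0.10):&lt;br /&gt;
    # Evenly spaced camera offsets (metres) across a rows x cols array.&lt;br /&gt;
    xs = np.linspace(-baseline / 2, baseline / 2, cols)&lt;br /&gt;
    ys = np.linspace(-baseline / 2, baseline / 2, rows)&lt;br /&gt;
    return [(x, y) for y in ys for x in xs]&lt;br /&gt;
&lt;br /&gt;
def sheared_frustum(x_off, y_off, focal=2.0, half_w=0.8, half_h=0.45,&lt;br /&gt;
                    near=0.1, far=100.0):&lt;br /&gt;
    # Off-axis projection: every camera keeps the same rectangle of the&lt;br /&gt;
    # focal plane in frame, so the rendered views converge on that plane.&lt;br /&gt;
    l = (-half_w - x_off) * near / focal&lt;br /&gt;
    r = (half_w - x_off) * near / focal&lt;br /&gt;
    b = (-half_h - y_off) * near / focal&lt;br /&gt;
    t = (half_h - y_off) * near / focal&lt;br /&gt;
    return np.array([&lt;br /&gt;
        [2 * near / (r - l), 0.0, (r + l) / (r - l), 0.0],&lt;br /&gt;
        [0.0, 2 * near / (t - b), (t + b) / (t - b), 0.0],&lt;br /&gt;
        [0.0, 0.0, -(far + near) / (far - near), -2 * far * near / (far - near)],&lt;br /&gt;
        [0.0, 0.0, -1.0, 0.0],&lt;br /&gt;
    ])&lt;br /&gt;
&lt;br /&gt;
# One projection matrix per view; each view is rendered from a camera&lt;br /&gt;
# translated by (x, y) with its matching sheared frustum.&lt;br /&gt;
projections = [sheared_frustum(x, y) for x, y in view_grid()]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;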
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
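The vision-correction idea in the last item reduces, to first order, to placing the virtual image within the viewer&#039;s unaided focal range (a simplified textbook relation, not a description of any shipping product). A myope with a refractive error of &amp;lt;math&amp;gt;-2&amp;lt;/math&amp;gt; dioptres has a far point at&lt;br /&gt;
: &amp;lt;math&amp;gt;d = \tfrac{1}{2\,\text{D}} = 0.5\ \text{m},&amp;lt;/math&amp;gt;&lt;br /&gt;
so the headset must synthesize ray bundles that appear to diverge from no farther than 0.5 m for the image to look sharp without prescription lenses.&lt;br /&gt;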
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Digital Signage]] and Advertising:&#039;&#039;&#039; Eye-catching glasses-free 3D displays for retail and public spaces.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Automotive Displays:&#039;&#039;&#039; [[Head-up display|Heads-up displays]] (HUDs) or dashboards presenting information at appropriate depths.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt; A back-of-the-envelope data-rate estimate follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
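To give the bandwidth challenge above a sense of scale, the raw (uncompressed) data rate of a multi-view stream can be computed directly; every figure below is illustrative rather than drawn from any cited system.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Raw data rate of a hypothetical 45-view, 1080p, 60 Hz light field stream.&lt;br /&gt;
views = 45&lt;br /&gt;
width, height = 1920, 1080&lt;br /&gt;
bytes_per_pixel = 3            # 24-bit colour&lt;br /&gt;
fps = 60&lt;br /&gt;
&lt;br /&gt;
frame_bytes = views * width * height * bytes_per_pixel&lt;br /&gt;
rate_gb_per_s = frame_bytes * fps / 1e9&lt;br /&gt;
print(rate_gb_per_s)           # about 16.8 GB/s before any compression&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;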
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; A Swiss startup focused on compact near-eye LFD modules for AR/VR glasses, aiming to solve the VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt; A toy factorization sketch follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
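The layer-pattern optimization in the first item can be illustrated with a toy low-rank factorization in the spirit of the compressive/tensor-display literature; this is a generic nonnegative matrix factorization sketch with invented dimensions, not the algorithm of any cited paper. Nonnegativity stands in for the physical constraint that layer transmittances cannot be negative.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Toy compressive light-field factorization: approximate a (views x pixels)&lt;br /&gt;
# target matrix T by a nonnegative product A @ B, standing in for stacked&lt;br /&gt;
# layer patterns across r time-multiplexed frames.&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
T = rng.random((64, 4096))      # hypothetical target light field&lt;br /&gt;
r = 8                           # rank = number of multiplexed frames&lt;br /&gt;
A = rng.random((64, r)) + 0.1&lt;br /&gt;
B = rng.random((r, 4096)) + 0.1&lt;br /&gt;
for _ in range(200):            # Lee-Seung multiplicative updates&lt;br /&gt;
    A *= (T @ B.T) / (A @ B @ B.T + 1e-9)&lt;br /&gt;
    B *= (A.T @ T) / (A.T @ A @ B + 1e-9)&lt;br /&gt;
relative_error = np.linalg.norm(T - A @ B) / np.linalg.norm(T)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;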
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36355</id>
		<title>Light field display</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Light_field_display&amp;diff=36355"/>
		<updated>2025-08-04T05:44:09Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{see also|Terms|Technical Terms}}&lt;br /&gt;
&#039;&#039;&#039;Light field display&#039;&#039;&#039; (&#039;&#039;&#039;LFD&#039;&#039;&#039;) is an advanced display technology designed to reproduce a [[light field]], the distribution of light rays in [[3D space]], including their intensity and direction.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;&amp;gt;Wetzstein G. (2020). “Computational Displays: Achieving the Full Plenoptic Function.” ACM SIGGRAPH 2020 Courses. ACM Digital Library. doi:10.1145/3386569.3409414. Available: https://dl.acm.org/doi/10.1145/3386569.3409414 (accessed 3 May 2025).&amp;lt;/ref&amp;gt; Unlike conventional 2D displays or [[stereoscopic display|stereoscopic 3D]] systems that present flat images or fixed viewpoints requiring glasses, light field displays aim to recreate how light naturally propagates from a real scene.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;&amp;gt;Wetzstein, G., Lanman, D., Hirsch, M., &amp;amp; Raskar, R. (2012). Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Transactions on Graphics, 31(4), Article 80. doi:10.1145/2185520.2185576&amp;lt;/ref&amp;gt; This allows viewers to perceive genuine [[depth]], [[parallax]] (both horizontal and vertical), and perspective changes without special eyewear (in many implementations).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;&amp;gt;Looking Glass Factory. Looking Glass 27″ Light Field Display. Retrieved from https://lookingglassfactory.com/looking-glass-27&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;&amp;gt;Hollister, S. (2024, January 19). Leia is building a 3D empire on the back of the worst phone we&#039;ve ever reviewed. The Verge. Retrieved from https://www.theverge.com/24036574/leia-glasses-free-3d-ces-2024&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This method of display is crucial for the future of [[virtual reality]] (VR) and [[augmented reality]] (AR), because it can directly address the [[vergence-accommodation conflict]] (VAC).&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;&amp;gt;Zhang, S. (2015, August 11). The Obscure Neuroscience Problem That&#039;s Plaguing VR. WIRED. Retrieved from https://www.wired.com/2015/08/obscure-neuroscience-problem-thats-plaguing-vr&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;VACReview&amp;quot;&amp;gt;Y. Zhou, J. Zhang, F. Fang, “Vergence-accommodation conflict in optical see-through display: Review and prospect,” *Results in Optics*, vol. 5, p. 100160, 2021, doi:10.1016/j.rio.2021.100160.&amp;lt;/ref&amp;gt; By providing correct [[focal cues]] that match the [[vergence]] information, LFDs promise more immersive, realistic, and visually comfortable experiences, reducing eye strain and [[Virtual Reality Sickness|simulator sickness]] often associated with current HMDs.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;&amp;gt;CREAL. Light-field: Seeing Virtual Worlds Naturally. Retrieved from https://creal.com/technology/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Definition and Principles ==&lt;br /&gt;
A light field display aims to replicate the [[Plenoptic Function]], a theoretical function describing the complete set of light rays passing through every point in space, in every direction, potentially across time and wavelength.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; In practice, light field displays generate a discretized (sampled) approximation of the relevant 4D subset of this function (typically spatial position and angular direction).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;&amp;gt;Huang, F. C., Wetzstein, G., Barsky, B. A., &amp;amp; Raskar, R. (2014). Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transactions on Graphics, 33(4), Article 59. doi:10.1145/2601097.2601122&amp;lt;/ref&amp;gt;&lt;br /&gt;
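In the two-plane parametrization popularized by light field rendering, each ray is indexed by its intersections &amp;lt;math&amp;gt;(s, t)&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;(u, v)&amp;lt;/math&amp;gt; with two parallel reference planes:&lt;br /&gt;
: &amp;lt;math&amp;gt;L = L(s, t, u, v).&amp;lt;/math&amp;gt;&lt;br /&gt;
A display offering &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; spatial samples and &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; directional samples per point must therefore control on the order of &amp;lt;math&amp;gt;P \times V&amp;lt;/math&amp;gt; independent ray values, which is the root of the resolution trade-offs discussed below.&lt;br /&gt;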
&lt;br /&gt;
By controlling the direction as well as the color and intensity of emitted light rays, these displays allow the viewer&#039;s eyes to naturally focus ([[accommodation]]) at different depths within the displayed scene, matching the depth cues provided by binocular vision ([[vergence]]).&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt; This recreation allows users to experience:&lt;br /&gt;
* Full motion [[parallax]] (horizontal and vertical look-around).&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* Accurate [[occlusion]] cues.&lt;br /&gt;
* Natural [[focal cues]], mitigating the [[Vergence-accommodation conflict]].&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&lt;br /&gt;
* [[Specular highlights]] and realistic reflections that change with viewpoint.&lt;br /&gt;
* Often, viewing without specialized eyewear (especially in non-headset formats).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Key Characteristics ==&lt;br /&gt;
* &#039;&#039;&#039;Glasses-Free 3D:&#039;&#039;&#039; Many LFD formats (especially desktop and larger) offer autostereoscopic viewing for multiple users simultaneously, each seeing the correct perspective.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Full Parallax:&#039;&#039;&#039; True LFDs provide both horizontal and vertical parallax, unlike earlier [[autostereoscopic display|autostereoscopic]] technologies that often limited parallax to side-to-side movement.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Vergence-Accommodation Conflict Resolution:&#039;&#039;&#039; A primary driver for VR/AR adoption: LFDs can render virtual objects at appropriate focal distances, aligning accommodation with vergence to significantly improve visual comfort and realism.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;&amp;gt;&lt;br /&gt;
Lanman D., &amp;amp; Luebke D. (2013). “Near‑Eye Light Field Displays.”  &lt;br /&gt;
*ACM Transactions on Graphics*, 32 (6), 220:1–220:10. doi:10.1145/2508363.2508366.  &lt;br /&gt;
Project page: https://research.nvidia.com/publication/near-eye-light-field-displays (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Requirements:&#039;&#039;&#039; Generating and processing the massive amount of data (multiple views or directional light information) needed for LFDs requires significant [[Graphics processing unit|GPU]] power and bandwidth.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Resolution Trade-offs:&#039;&#039;&#039; A fundamental challenge involves balancing spatial resolution (image sharpness), angular resolution (smoothness of parallax/number of views), [[Field of view|field of view (FoV)]], and depth of field.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is often referred to as the spatio-angular resolution trade-off.&lt;br /&gt;
&lt;br /&gt;
==History and Development==&lt;br /&gt;
===Early Concepts and Foundations===&lt;br /&gt;
The underlying concept can be traced back to Michael Faraday&#039;s 1846 suggestion of light as a field&amp;lt;ref name=&amp;quot;FaradayField&amp;quot;&amp;gt;Princeton University Press. Faraday, Maxwell, and the Electromagnetic Field - How Two Men Revolutionized Physics. Retrieved from https://press.princeton.edu/books/hardcover/9780691161664/faraday-maxwell-and-the-electromagnetic-field&amp;lt;/ref&amp;gt; and was given a mathematical formalization, as a radiometric quantity, by Andrey Gershun in 1936.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;&amp;gt;Gershun, A. (1936). The Light Field. Moscow. (Translated by P. Moon &amp;amp; G. Timoshenko, 1939, Journal of Mathematics and Physics, XVIII, 51–151).&amp;lt;/ref&amp;gt; The practical groundwork for reproducing light fields was laid by Gabriel Lippmann&#039;s 1908 concept of [[Integral imaging|Integral Photography]] (&amp;quot;photographie intégrale&amp;quot;), which used an array of small lenses to capture and reproduce light fields.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;&amp;gt;Lippmann, G. (1908). Épreuves réversibles donnant la sensation du relief. Journal de Physique Théorique et Appliquée, 7(1), 821–825. doi:10.1051/jphystap:019080070082100&amp;lt;/ref&amp;gt; The modern computational understanding was significantly advanced by Adelson and Bergen&#039;s formalization of the [[Plenoptic Function]] in 1991.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;&amp;gt;Adelson, E. H., &amp;amp; Bergen, J. R. (1991). The plenoptic function and the elements of early vision. In M. Landy &amp;amp; J. A. Movshon (Eds.), Computational Models of Visual Processing (pp. 3–20). MIT Press.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Key Development Milestones===&lt;br /&gt;
* &#039;&#039;&#039;1908:&#039;&#039;&#039; Gabriel Lippmann introduces integral photography.&amp;lt;ref name=&amp;quot;Lippmann1908&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1936:&#039;&#039;&#039; Andrey Gershun formalizes the light field mathematically.&amp;lt;ref name=&amp;quot;Gershun1936&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1991:&#039;&#039;&#039; Adelson and Bergen formalize the plenoptic function.&amp;lt;ref name=&amp;quot;AdelsonBergen1991&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;1996:&#039;&#039;&#039; Levoy and Hanrahan publish work on Light Field Rendering.&amp;lt;ref name=&amp;quot;Levoy1996&amp;quot;&amp;gt;Levoy, M., &amp;amp; Hanrahan, P. (1996). Light field rendering. Proceedings of the 23rd annual conference on Computer graphics and interactive techniques (SIGGRAPH &#039;96), 31-42. doi:10.1145/237170.237193&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2005:&#039;&#039;&#039; Stanford Multi-camera Array demonstrated for light field capture.&amp;lt;ref name=&amp;quot;Wilburn2005&amp;quot;&amp;gt;Wilburn, B., Joshi, N., Vaish, V., Talvala, E. V., Antunez, E., Barth, A., Adams, A., Horowitz, M., &amp;amp; Levoy, M. (2005). High performance imaging using large camera arrays. ACM SIGGRAPH 2005 Papers (SIGGRAPH &#039;05), 765-776. doi:10.1145/1186822.1073256&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2004-2008:&#039;&#039;&#039; Early computational light field displays developed, for example Matusik and Pfister&#039;s scalable 3D TV system.&amp;lt;ref name=&amp;quot;Matusik2004&amp;quot;&amp;gt;Matusik, W., &amp;amp; Pfister, H. (2004). 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM SIGGRAPH 2004 Papers (SIGGRAPH &#039;04), 814–824. doi:10.1145/1186562.1015805&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2010-2013:&#039;&#039;&#039; Introduction of multilayer, compressive, and tensor light field display concepts.&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;&amp;gt;Lanman, D., Hirsch, M., Kim, Y., &amp;amp; Raskar, R. (2010). Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM SIGGRAPH Asia 2010 papers (SIGGRAPH ASIA &#039;10), Article 163. doi:10.1145/1882261.1866191&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2013:&#039;&#039;&#039; NVIDIA demonstrates near-eye light field display prototype for VR.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;&amp;gt;Lanman, D., &amp;amp; Luebke, D. (2013). Near-Eye Light Field Displays (Technical Report NVR-2013-004). NVIDIA Research. Retrieved from https://research.nvidia.com/sites/default/files/pubs/2013-11_Near-Eye-Light-Field/NVIDIA-NELD.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;2015 onwards:&#039;&#039;&#039; Emergence of advanced prototypes (for example Sony, CREAL, Light Field Lab).&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;&amp;gt;Lang, B. (2023, January 11). CREAL&#039;s Latest Light-field AR Demo Shows Continued Progress Toward Natural Depth &amp;amp; Focus. Road to VR. Retrieved from https://www.roadtovr.com/creal-light-field-ar-vr-headset-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Technical Implementations (How They Work) ==&lt;br /&gt;
Light field displays use various techniques to generate the 4D light field:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;[[Microlens Arrays]] (MLAs):&#039;&#039;&#039; A high-resolution display panel (LCD or OLED) is overlaid with an array of tiny lenses. Each lenslet directs light from the pixels beneath it into a specific set of directions, creating different views for different observer positions.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; This is a common approach derived from integral imaging.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; The trade-off is explicit: spatial resolution is determined by the lenslet count, angular resolution by the pixels per lenslet.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt; A geometric sketch of this pixel-to-direction mapping follows this list.&lt;br /&gt;
* &#039;&#039;&#039;Multilayer Displays (Stacked LCDs):&#039;&#039;&#039; Several layers of transparent display panels (typically [[Liquid crystal display|LCDs]]) are stacked with air gaps. By computationally optimizing the opacity patterns on each layer, the display acts as a multiplicative [[Spatial Light Modulator|spatial light modulator]], shaping light from a backlight into a complex light field.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2010ContentAdaptive&amp;quot;/&amp;gt; These are often explored for near-eye displays.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Directional Backlighting:&#039;&#039;&#039; A standard display panel (for example LCD) is combined with a specialized backlight that emits light in controlled directions. The backlight might use another LCD panel coupled with optics like lenticular sheets to achieve directionality.&amp;lt;ref name=&amp;quot;Maimone2013Focus3D&amp;quot;&amp;gt;Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., &amp;amp; Fuchs, H. (2013). Focus 3D: compressive accommodation display. ACM Transactions on Graphics, 32(5), Article 152. doi:10.1145/2516971.2516983&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Projector Arrays:&#039;&#039;&#039; Multiple projectors illuminate a screen (often lenticular or diffusive). Each projector provides a different perspective view, and their combined output forms the light field.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Parallax Barrier]]s:&#039;&#039;&#039; An opaque layer with precisely positioned slits or apertures is placed in front of or between display panels. The barrier blocks light selectively, allowing different pixels to be seen from different angles.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;&amp;gt;&lt;br /&gt;
Japan Display Inc. (2016, Dec 5). *Ultra‑High Resolution Display with Integrated Parallax Barrier for Glasses‑Free 3D* [Press release].  &lt;br /&gt;
Archived copy: https://web.archive.org/web/20161221045330/https://www.j-display.com/english/news/2016/20161205.html (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; Barrier designs are typically less light-efficient than MLAs, because the barrier absorbs much of the emitted light.&lt;br /&gt;
* &#039;&#039;&#039;[[Waveguide]] Optics:&#039;&#039;&#039; Light is injected into thin optical waveguides (similar to those in some AR glasses) and then coupled out at specific points with controlled directionality, often using diffractive optical elements (DOEs) or gratings.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab. *SolidLight™ Platform Overview.* https://www.lightfieldlab.com/ (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Maimone2017HolographicNED&amp;quot;&amp;gt;Maimone, A., Georgiou, A., &amp;amp; Kollin, J. S. (2017). Holographic near-eye displays for virtual and augmented reality. ACM Transactions on Graphics, 36(4), Article 85. doi:10.1145/3072959.3073624&amp;lt;/ref&amp;gt; This is explored for compact AR/VR systems.&lt;br /&gt;
* &#039;&#039;&#039;Time-Multiplexed Displays:&#039;&#039;&#039; Different views or directional illumination patterns are presented rapidly in sequence. If the cycle repeats faster than the human flicker-fusion threshold, the viewer perceives a continuous light field. Time multiplexing can be combined with other techniques such as directional backlighting.&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;&amp;gt;Liu, S., Cheng, D., &amp;amp; Hua, H. (2014). An optical see-through head mounted display with addressable focal planes. 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 33-42. doi:10.1109/ISMAR.2014.6948403&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Holographic and Diffractive Approaches:&#039;&#039;&#039; While [[Holographic display|holographic displays]] reconstruct wavefronts through diffraction, some LFDs utilize holographic optical elements (HOEs) or related diffractive principles to achieve high angular resolution and potentially overcome MLA limitations.&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;&amp;gt;M. Martínez-Corral, Z. Guan, Y. Li, Z. Xiong, B. Javidi, “Review of light field technologies,” *Visual Computing for Industry, Biomedicine and Art*, 4 (1): 29, 2021, doi:10.1186/s42492-021-00096-8.&amp;lt;/ref&amp;gt; Some companies use &amp;quot;holographic&amp;quot; terminology for their high-density LFDs.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;&amp;gt;C. Fink, “Light Field Lab Raises $50 Million to Bring SolidLight Holograms Into the Real World,” *Forbes*, 8 Feb 2023. Available: https://www.forbes.com/sites/charliefink/2023/02/08/light-field-lab-raises-50m-to-bring-solidlight-holograms-into-the-real-world/ (accessed 30 Apr 2025).&amp;lt;/ref&amp;gt;&lt;br /&gt;
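The pixel-to-direction mapping behind the microlens approach (first item above) reduces to simple geometry under an idealized thin-lens model; the sketch below uses invented parameter values purely for illustration.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def pixel_ray_angle(pixel_index, pixels_per_lenslet=8,&lt;br /&gt;
                    pixel_pitch_um=20.0, lenslet_focal_mm=2.0):&lt;br /&gt;
    # Idealized model: a pixel displaced by d from the lenslet axis, one&lt;br /&gt;
    # focal length behind it, emits a collimated beam at angle atan(d / f).&lt;br /&gt;
    centre = (pixels_per_lenslet - 1) / 2.0&lt;br /&gt;
    d_mm = (pixel_index - centre) * pixel_pitch_um / 1000.0&lt;br /&gt;
    return np.degrees(np.arctan2(d_mm, lenslet_focal_mm))&lt;br /&gt;
&lt;br /&gt;
# Eight pixels under one lenslet yield eight discrete view directions,&lt;br /&gt;
# while spatial resolution drops to one sample per lenslet.&lt;br /&gt;
angles = [pixel_ray_angle(i) for i in range(8)]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;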
&lt;br /&gt;
== Types of Light Field Displays ==&lt;br /&gt;
* &#039;&#039;&#039;Near-Eye Light Field Displays:&#039;&#039;&#039; Integrated into VR/AR [[Head-mounted display|HMDs]]. Primarily focused on solving the VAC for comfortable, realistic close-up interactions.&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt; Examples include research prototypes from NVIDIA&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt; and academic groups,&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;&amp;gt;Huang, F. C., Chen, K., &amp;amp; Wetzstein, G. (2015). The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), Article 60. doi:10.1145/2766943&amp;lt;/ref&amp;gt; and commercial modules from companies like [[CREAL]].&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt; Often utilize MLAs, stacked LCDs, or waveguide/diffractive approaches.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Large Format / Tiled Displays:&#039;&#039;&#039; Aimed at creating large-scale, immersive &amp;quot;holographic&amp;quot; experiences without glasses for public venues, command centers, or collaborative environments.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;&amp;gt;&lt;br /&gt;
Light Field Lab Press Release (2021, Oct 7). *Light Field Lab Unveils SolidLight™ – The Highest Resolution Holographic Display Platform Ever Designed.*  &lt;br /&gt;
https://www.lightfieldlab.com/press-release-oct-2021 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt; [[Light Field Lab]]&#039;s SolidLight™ platform uses modular panels designed to be tiled into large video walls.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt; Sony&#039;s ELF-SR series (Spatial Reality Display) combines high-speed vision sensors (for eye tracking) with a micro-optical lens, serving a single viewer while demonstrating high-fidelity desktop light field effects.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;&amp;gt;&lt;br /&gt;
Sony Professional. *ELF‑SR2 Spatial Reality Display.*  &lt;br /&gt;
https://pro.sony/ue_US/products/spatial-reality-displays/elf-sr2 (accessed 3 May 2025).&lt;br /&gt;
&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Comparison with Other 3D Display Technologies ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+ Comparison of Key 3D Display Technology Characteristics&lt;br /&gt;
! Technology&lt;br /&gt;
! Glasses Required&lt;br /&gt;
! Natural Focal Cues (Solves [[Vergence-accommodation conflict|VAC]])&lt;br /&gt;
! Full Motion [[Parallax]]&lt;br /&gt;
! Typical Viewing Zone ([[Field of view|FoV]])&lt;br /&gt;
! Key Trade-offs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Light Field Display]]&#039;&#039;&#039;&lt;br /&gt;
| Often no&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Limited to Wide&lt;br /&gt;
| Spatio-angular resolution trade-off, computation needs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Stereoscopic display|Stereoscopic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| Yes&lt;br /&gt;
| No&lt;br /&gt;
| No &amp;lt;small&amp;gt;(unless head-tracked)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Wide&lt;br /&gt;
| VAC causes fatigue, requires glasses&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Autostereoscopic display|Autostereoscopic (non-LFD)]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| No&lt;br /&gt;
| Limited &amp;lt;small&amp;gt;(often horizontal only)&amp;lt;/small&amp;gt;&lt;br /&gt;
| Limited&lt;br /&gt;
| Reduced resolution per view, fixed viewing zones&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Volumetric Display]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| 360° potential&lt;br /&gt;
| Limited resolution, transparency/opacity issues, bulk&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;[[Holographic display|Holographic Displays]]&#039;&#039;&#039;&lt;br /&gt;
| No&lt;br /&gt;
| Yes&lt;br /&gt;
| Yes&lt;br /&gt;
| Often Limited&lt;br /&gt;
| Extreme computational demands, [[Speckle pattern|speckle]], typically small display size&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
LFDs offer a compelling balance, providing natural depth cues without glasses (in many formats) and resolving the VAC, but face challenges in achieving high resolution across both spatial and angular domains simultaneously.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Content Creation ==&lt;br /&gt;
Creating content compatible with LFDs requires capturing or generating directional view information:&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Camera|Light Field Cameras]] / [[Plenoptic Camera|Plenoptic Cameras]]:&#039;&#039;&#039; Capture both intensity and direction of incoming light using specialized sensors (often with MLAs).&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt; The captured data can be processed for LFD playback.&lt;br /&gt;
* &#039;&#039;&#039;[[Computer Graphics]] Rendering:&#039;&#039;&#039; Standard 3D scenes built in engines like [[Unity (game engine)|Unity]] or [[Unreal Engine]] can be rendered from multiple viewpoints to generate the necessary data.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt; Specialized light field rendering techniques, potentially using [[Ray tracing (graphics)|ray tracing]] or neural methods like [[Neural Radiance Fields]] (NeRF), are employed.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;&amp;gt;Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., &amp;amp; Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. European Conference on Computer Vision (ECCV), 405-421. doi:10.1007/978-3-030-58452-8_24&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Photogrammetry]] and 3D Scanning:&#039;&#039;&#039; Real-world objects/scenes captured as 3D models can serve as input for rendering light field views.&lt;br /&gt;
* &#039;&#039;&#039;Existing 3D Content Conversion:&#039;&#039;&#039; Plugins and software tools (for example provided by Looking Glass Factory) allow conversion of existing 3D models, animations, or even stereoscopic content for LFD viewing.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Focal Stack]] Conversion:&#039;&#039;&#039; Research explores converting image stacks captured at different focal depths into light field representations, particularly for multi-layer displays.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Applications==&lt;br /&gt;
===Applications in VR and AR===&lt;br /&gt;
* &#039;&#039;&#039;Enhanced Realism and Immersion:&#039;&#039;&#039; Correct depth cues make virtual objects appear more solid and stable, improving the sense of presence, especially for near-field interactions.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Visual Comfort:&#039;&#039;&#039; Mitigating the VAC reduces eye strain, fatigue, and nausea, enabling longer and more comfortable VR/AR sessions.&amp;lt;ref name=&amp;quot;WiredVAC&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealWebsite&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Natural Interaction:&#039;&#039;&#039; Accurate depth perception facilitates intuitive hand-eye coordination for manipulating virtual objects.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Seamless AR Integration:&#039;&#039;&#039; Allows virtual elements to appear more cohesively integrated with the real world at correct focal depths.&lt;br /&gt;
* &#039;&#039;&#039;Vision Correction:&#039;&#039;&#039; Near-eye LFDs can potentially pre-distort the displayed light field to correct for the user&#039;s refractive errors, eliminating the need for prescription glasses within the headset.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2015Stereoscope&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Other Applications===&lt;br /&gt;
* &#039;&#039;&#039;Medical Imaging and Visualization:&#039;&#039;&#039; Intuitive visualization of complex 3D scans (CT, MRI) for diagnostics, surgical planning, and education.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;&amp;gt;Nam, J., McCormick, M., &amp;amp; Tate, A. J. (2019). Light field display systems for medical imaging applications. Journal of Display Technology, 15(3), 215-225. doi:10.1002/jsid.785&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Scientific Visualization:&#039;&#039;&#039; Analyzing complex datasets in fields like fluid dynamics, molecular modeling, geology.&amp;lt;ref name=&amp;quot;Halle2017SciVis&amp;quot;&amp;gt;Halle, M. W., &amp;amp; Meng, J. (2017). LightPlanets: GPU-based rendering of transparent astronomical objects using light field methods. IEEE Transactions on Visualization and Computer Graphics, 23(5), 1479-1488. doi:10.1109/TVCG.2016.2535388&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Digital Signage]] and Advertising:&#039;&#039;&#039; Eye-catching glasses-free 3D displays for retail and public spaces.&amp;lt;ref name=&amp;quot;LookingGlass27&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Product Design and Engineering (CAD/CAE):&#039;&#039;&#039; Collaborative visualization and review of 3D models.&amp;lt;ref name=&amp;quot;Nam2019Medical&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Entertainment and Gaming:&#039;&#039;&#039; Immersive experiences in arcades, museums, theme parks, and potentially future home entertainment.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Automotive Displays:&#039;&#039;&#039; [[Head-up display|Heads-up displays]] (HUDs) or dashboards presenting information at appropriate depths.&amp;lt;ref name=&amp;quot;JDI_Parallax&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Telepresence and Communication:&#039;&#039;&#039; Creating realistic, life-sized 3D representations of remote collaborators, like Google&#039;s [[Project Starline]] concept.&amp;lt;ref name=&amp;quot;Starline&amp;quot;&amp;gt;Google Blog (2023, May 10). A first look at Project Starline’s new, simpler prototype. Retrieved from https://blog.google/technology/research/project-starline-prototype/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Microscopy]]:&#039;&#039;&#039; Viewing microscopic samples with natural depth perception.&amp;lt;ref name=&amp;quot;WetzsteinPlenoptic&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Challenges and Limitations ==&lt;br /&gt;
* &#039;&#039;&#039;Spatio-Angular Resolution Trade-off:&#039;&#039;&#039; Increasing the number of views (angular resolution) often decreases the perceived sharpness (spatial resolution) for a fixed display pixel count.&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Computational Complexity &amp;amp; Bandwidth:&#039;&#039;&#039; Rendering, compressing, and transmitting the massive datasets for real-time LFDs is extremely demanding on GPUs and data infrastructure.&amp;lt;ref name=&amp;quot;LeiaVerge&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Manufacturing Complexity and Cost:&#039;&#039;&#039; Producing precise optical components like high-density MLAs, perfectly aligned multi-layer stacks, or large-area waveguide structures is challenging and costly.&amp;lt;ref name=&amp;quot;ForbesLightField&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Form Factor and Miniaturization:&#039;&#039;&#039; Integrating complex optics and electronics into thin, lightweight, and power-efficient near-eye devices remains difficult.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Limited Field of View (FoV):&#039;&#039;&#039; Achieving wide FoV comparable to traditional VR headsets while maintaining high angular resolution is challenging.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Brightness and Efficiency:&#039;&#039;&#039; Techniques like MLAs and parallax barriers inherently block or redirect light, reducing overall display brightness and power efficiency.&lt;br /&gt;
* &#039;&#039;&#039;Content Ecosystem:&#039;&#039;&#039; The workflow for creating, distributing, and viewing native light field content is still developing compared to standard 2D or stereoscopic 3D.&amp;lt;ref name=&amp;quot;LookingGlassSoftware&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Visual Artifacts:&#039;&#039;&#039; Potential issues include [[Moiré pattern|moiré]] effects (from periodic structures like MLAs), ghosting/crosstalk between views, and latency.&lt;br /&gt;
&lt;br /&gt;
== Key Players and Commercial Landscape ==&lt;br /&gt;
Several companies and research groups are active in LFD development:&lt;br /&gt;
* &#039;&#039;&#039;[[CREAL]]:&#039;&#039;&#039; A Swiss startup focused on compact near-eye LFD modules for AR/VR glasses, aiming to solve the VAC.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Light Field Lab]]:&#039;&#039;&#039; Developing large-scale, modular &amp;quot;holographic&amp;quot; LFD panels (SolidLight™) based on proprietary [[Waveguide (optics)|waveguide]] technology.&amp;lt;ref name=&amp;quot;LightFieldLabTech&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;LightFieldLabSolidLightPR&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Sony]]:&#039;&#039;&#039; Produces the Spatial Reality Display (ELF-SR series), a high-fidelity desktop LFD using eye-tracking.&amp;lt;ref name=&amp;quot;SonyELFSR2&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Avegant]]:&#039;&#039;&#039; Develops light field light engines, particularly for AR, focusing on VAC resolution.&amp;lt;ref name=&amp;quot;AvegantPR&amp;quot;&amp;gt;PR Newswire (2017, March 15). Avegant Introduces Light Field Technology for Mixed Reality. Retrieved from https://www.prnewswire.com/news-releases/avegant-introduces-light-field-technology-for-mixed-reality-300423855.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Holografika]]:&#039;&#039;&#039; Offers glasses-free 3D LFD systems for professional applications.&amp;lt;ref name=&amp;quot;Holografika&amp;quot;&amp;gt;Holografika. Light Field Displays. Retrieved from https://holografika.com/light-field-displays/&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Japan Display Inc. (JDI)]]:&#039;&#039;&#039; Demonstrated prototype LFDs for various applications.&amp;lt;ref name=&amp;quot;JDI_LFD_2019&amp;quot;&amp;gt;Japan Display Inc. News (2019, December 3). JDI Develops World&#039;s First 10.1-inch Light Field Display. Retrieved from https://www.j-display.com/english/news/2019/20191203_01.html&amp;lt;/ref&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[NVIDIA]]:&#039;&#039;&#039; Foundational research in near-eye LFDs and ongoing GPU development crucial for LFD rendering.&amp;lt;ref name=&amp;quot;NvidiaNELD&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;[[Google]]:&#039;&#039;&#039; Research in LFDs, demonstrated through concepts like Project Starline.&amp;lt;ref name=&amp;quot;Starline&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Academic Research:&#039;&#039;&#039; Institutions like [[MIT Media Lab]], [[Stanford University]], University of Arizona, and others continue to push theoretical and practical boundaries.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Huang2014EyeglassesFree&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Future Directions and Research ==&lt;br /&gt;
* &#039;&#039;&#039;Computational Display Optimization:&#039;&#039;&#039; Using [[Artificial intelligence|AI]] and sophisticated algorithms to optimize patterns on multi-layer displays or directional backlights for better quality with fewer resources.&amp;lt;ref name=&amp;quot;WetzsteinTensor&amp;quot;/&amp;gt; Using neural representations (like NeRF) for efficient light field synthesis and compression.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Varifocal and Multifocal Integration:&#039;&#039;&#039; Hybrid approaches combining LFD principles with dynamic focus elements (liquid lenses, deformable mirrors) to achieve focus cues potentially more efficiently than pure LFDs.&amp;lt;ref name=&amp;quot;Lanman2020NearEyeCourse&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;Liu2014OSTHMD&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Miniaturization for Wearables:&#039;&#039;&#039; Developing ultra-thin, efficient components using [[Metasurface|metasurfaces]], [[Holographic optical element|holographic optical elements (HOEs)]], advanced waveguides, and [[MicroLED]] displays for integration into consumer AR/VR glasses.&amp;lt;ref name=&amp;quot;CrealRoadToVR&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpringerReview2021&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Improved Content Capture and Creation Tools:&#039;&#039;&#039; Advancements in [[Plenoptic camera|plenoptic cameras]], AI-driven view synthesis, and streamlined software workflows.&amp;lt;ref name=&amp;quot;Mildenhall2020NeRF&amp;quot;/&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Higher Resolution and Efficiency:&#039;&#039;&#039; Addressing the spatio-angular trade-off and improving light efficiency through new materials, optical designs (for example polarization multiplexing&amp;lt;ref name=&amp;quot;Tan2019Polarization&amp;quot;&amp;gt;G. Tan, T. Zhan, Y.-H. Lee, J. Xiong, S.-T. Wu, “Near-eye light-field display with polarization multiplexing,” *Proceedings of SPIE* 10942, Advances in Display Technologies IX, paper 1094206, 2019, doi:10.1117/12.2509121.&amp;lt;/ref&amp;gt;), and display technologies.&lt;br /&gt;
&lt;br /&gt;
== See Also ==&lt;br /&gt;
* [[Light Field]]&lt;br /&gt;
* [[Plenoptic Function]]&lt;br /&gt;
* [[Integral imaging]]&lt;br /&gt;
* [[Autostereoscopic display]]&lt;br /&gt;
* [[Stereoscopy]]&lt;br /&gt;
* [[Holographic display]]&lt;br /&gt;
* [[Volumetric Display]]&lt;br /&gt;
* [[Varifocal display]]&lt;br /&gt;
* [[Vergence-accommodation conflict]]&lt;br /&gt;
* [[Virtual Reality]]&lt;br /&gt;
* [[Augmented Reality]]&lt;br /&gt;
* [[Head-mounted display]]&lt;br /&gt;
* [[Microlens array]]&lt;br /&gt;
* [[Spatial Light Modulator]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Technical Terms]]&lt;br /&gt;
[[Category:Display technology]]&lt;br /&gt;
[[Category:3D display technology]]&lt;br /&gt;
[[Category:Autostereoscopy]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;br /&gt;
[[Category:Augmented reality]]&lt;br /&gt;
[[Category:Optics]]&lt;br /&gt;
[[Category:Computational photography]]&lt;br /&gt;
[[Category:Emerging technologies]]&lt;br /&gt;
[[Category:Human-computer interaction]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Wanted_pages&amp;diff=36353</id>
		<title>Wanted pages</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Wanted_pages&amp;diff=36353"/>
		<updated>2025-07-28T15:20:10Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Media */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{TOCRIGHT}}&lt;br /&gt;
==New products==&lt;br /&gt;
*[[Meta Aria Gen 2]] - https://www.projectaria.com/&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;&#039;[[Android XR]]&#039;&#039;&#039; - [[Samsung&#039;s Android XR Headset]]&lt;br /&gt;
&lt;br /&gt;
* [[HoloKit]] - AR headset for iPhone - https://holokit.io/intro_holokit/&lt;br /&gt;
&lt;br /&gt;
*[[EyeSight]] - feature from [[Apple Vision Pro]]&lt;br /&gt;
*[[Persona]] - feature from [[Apple Vision Pro]]&lt;br /&gt;
&lt;br /&gt;
*[[Meta Horizon OS]]&lt;br /&gt;
*[[VR Chat]] - [[Avatar Marketplace]] - [[Boothplorer]] - https://www.roadtovr.com/vrchat-avatar-marketplace-release-digital-economy/&lt;br /&gt;
*BCI in AVP - https://www.uploadvr.com/apple-vision-pro-getting-bci-brain-computer-interface-support/&lt;br /&gt;
&lt;br /&gt;
==2025==&lt;br /&gt;
*[[Webspatial]] - https://github.com/webspatial/webspatial-sdk&lt;br /&gt;
&lt;br /&gt;
==2023==&lt;br /&gt;
*[[HoloKit]] - AR headset for iPhone - https://holokit.io/intro_holokit/&lt;br /&gt;
*Apple AR/VR Headset - [[Reality One]] / [[Reality Pro]] / [[Reality Processor]] - [[visionOS]] and [[XR]]&lt;br /&gt;
*New Meta Headset releasing later this year&lt;br /&gt;
&lt;br /&gt;
==Guides==&lt;br /&gt;
[[Oculus Rift 360 Degrees and Room-scale Setup with 3 Sensors]] - [https://scontent.xx.fbcdn.net/t39.2365-6/15363893_1774761836111478_5342883442994446336_n.pdf official pdf], [http://uploadvr.com/vive-vs-oculus-rift-touch-roomscale/ comparison], [http://www.roadtovr.com/oculus-touch-support-room-scale-360-tracking-extra-cameras-sensor/ source 3] [https://www.reddit.com/r/oculus/comments/5irny7/3_sensor_room_scale_setup_is_perfect_here_is_my/ source 4], [https://www.reddit.com/r/oculus/comments/5legsb/if_you_are_struggling_with_roomscale_setup_with/ source 5]&lt;br /&gt;
&lt;br /&gt;
==Update==&lt;br /&gt;
[[TPCAST]] - [https://www.roadtovr.com/tpcast-announces-wireless-adapter-oculus-rift-arriving-q4-2017/ ref 1]&lt;br /&gt;
&lt;br /&gt;
[[Oculus Rift DK2]] Open Source - [https://developer.oculus.com/blog/open-source-release-of-rift-dk2/ ref 1]&lt;br /&gt;
&lt;br /&gt;
==Concepts==&lt;br /&gt;
===Technical Concepts===&lt;br /&gt;
* [[Phone-Powered AR]]&lt;br /&gt;
* [[Standalone AR]]&lt;br /&gt;
* [[Console-Powered VR]]&lt;br /&gt;
* [[PC-Powered AR]]&lt;br /&gt;
* [[Quest Insight tracking]]&lt;br /&gt;
* [[Pose]]&lt;br /&gt;
* [[VR UI/UX design]] - [http://www.roadtovr.com/vr-interface-design-insights-mike-alger/ reference 1]&lt;br /&gt;
* [[Near-eye light field]] ([[NE-LF]])&lt;br /&gt;
* [[Point cloud]]&lt;br /&gt;
* [[Mura correction]] - improves display uniformity by compensating for panel brightness and color variations, first employed in [[HTC Vive Pre]]&lt;br /&gt;
&lt;br /&gt;
[[Myriad 2]] / [[Vision processing unit]] - [http://www.movidius.com/solutions/vision-processing-unit reference 1], [http://uploads.movidius.com/1441734401-Myriad-2-product-brief.pdf reference 2], [https://www.youtube.com/watch?v=hD3RYGJgH4A reference 3], [http://www.hotchips.org/wp-content/uploads/hc_archives/hc26/HC26-12-day2-epub/HC26.12-6-HP-ASICs-epub/HC26.12.620-Myriad2-Eye-Moloney-Movidius-provided.pdf reference 4]&lt;br /&gt;
&lt;br /&gt;
[[Lens Matched Shading]] - avoids rendering pixels that would be discarded by the lens distortion pass.&lt;br /&gt;
&lt;br /&gt;
[https://medium.com/@B__REEL/simulating-weight-in-vr-d161e87990b#.qj0u70jy2 simulating weight in VR]&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
[[Pimax 5K]] - [https://www.kickstarter.com/projects/pimax8kvr/pimax-the-worlds-first-8k-vr-headset ref 1]&lt;br /&gt;
&lt;br /&gt;
[[Daydream View (2017)]] - [https://www.roadtovr.com/google-daydream-view-2-2017-performance-field-of-view-comfort/ ref 1], [https://www.roadtovr.com/hands-on-google-daydream-view-2-2017-pixel-2-pixel-2-xl/ ref 2]&lt;br /&gt;
&lt;br /&gt;
[[Google Pixel 2]] / [[Google Pixel 2 XL]] - [https://www.roadtovr.com/googles-new-pixel-2-pixel-2-xl-phones-factory-calibrated-optimized-ar-now-60-fps-tracking/ ref 1] - AR-ready and [[Daydream]]-ready&lt;br /&gt;
&lt;br /&gt;
[[Oculus For Business]] - [https://www.oculus.com/blog/announcing-oculus-for-business-bringing-vr-into-the-workplace/ ref 1], [https://www.oculusforbusiness.com/ ref 2]&lt;br /&gt;
&lt;br /&gt;
[[Oculus Santa Cruz 2]] - [[Santa Cruz Controllers]] - [https://www.roadtovr.com/hands-on-oculus-santa-cruz-ii-prototype-controllers-2017-oculus-connect-4/ ref 1] - High-end standalone HMD with 6DOF controllers. Controllers are tracked by sensors on HMD!&lt;br /&gt;
&lt;br /&gt;
[[Mindride Airflow]] - [http://www.mindride.co/make-1/ ref] - harness that simulates flying in VR&lt;br /&gt;
&lt;br /&gt;
New [[Base Stations]] - [http://www.roadtovr.com/valve-confirms-new-steamvr-tracking-lighthouse-base-station-release-date-2017-htc-vive/ reference 1], [https://www.roadtovr.com/steamvr-tracking-2-0-will-support-33x33-foot-playspaces-with-4-base-stations/ ref 2] - New Vive Base Stations&lt;br /&gt;
&lt;br /&gt;
[[Sync Blinker]] - IR beacon within [[Base Stations]] for [[Lighthouse]]&lt;br /&gt;
&lt;br /&gt;
[[FOCUS]] - 360 Camera by Link VR&lt;br /&gt;
&lt;br /&gt;
[https://www.reddit.com/r/oculus/comments/3zs85r/camera_array_news/ Camera Array News]&lt;br /&gt;
&lt;br /&gt;
[[Qualcomm VRDK]] using [[Snapdragon 835]] - [https://www.qualcomm.com/news/releases/2017/02/23/qualcomm-introduces-snapdragon-835-virtual-reality-development-kit reference 1], [http://www.techradar.com/news/first-look-qualcomm-snapdragon-835-vr-developer-kit-headset ref 2], [http://www.roadtovr.com/first-all-in-one-vr-headsets-based-on-qualcomms-vrdk-expected-in-2h-2017/ ref 3] - VR dev kit by [[Qualcomm]] with inside-out 6DOF tracking and [[eye tracking]]&lt;br /&gt;
&lt;br /&gt;
[[Glass with Augmedix]] - [https://blog.x.company/a-new-chapter-for-glass-c7875d40bf24 ref 1] - Healthcare with [[Google Glass]]&lt;br /&gt;
&lt;br /&gt;
[[Strider VR]] - [https://www.roadtovr.com/strider-vr-intriguing-new-omnidirectional-treadmill-solution/ ref 1] - new approach to omnidirectional treadmill&lt;br /&gt;
&lt;br /&gt;
[[Eye-Sync]] - portable concussion diagnosis tool.&lt;br /&gt;
&lt;br /&gt;
[[Samsung Odyssey]] - [https://www.roadtovr.com/samsung-odyssey-windows-vr-mixed-reality-headset-hands-on-preview/ ref 1] - VR HMD part of the [[Windows Mixed Reality]] series.&lt;br /&gt;
&lt;br /&gt;
Updated [[PlayStation VR]]  and [[PlayStation Move]] controllers - [https://www.roadtovr.com/new-psvr-model-way-featuring-integrated-audio-hdr-pass/ ref 1], [https://www.roadtovr.com/playstation-move-sees-minor-hardware-update-launching-alongside-new-psvr-model/ ref 2] - not released&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
&lt;br /&gt;
[[Snapchat AR Lenses]]&lt;br /&gt;
&lt;br /&gt;
[[OpenXR]] - [https://www.khronos.org/openxr ref 1] - open standard for VR and AR apps and devices.&lt;br /&gt;
&lt;br /&gt;
[[CloverVR]] - [https://www.roadtovr.com/adobe-premiere-pro-now-includes-vr-editing-interface-project-clovervr/ ref 1] - VR Editing Interface  for Adobe Premiere Pro&lt;br /&gt;
&lt;br /&gt;
[[NVIDIA Holodeck]] - [https://www.nvidia.com/en-us/design-visualization/technologies/holodeck/ ref 1] - Photorealistic collaborative design in VR, or the design lab of the future&lt;br /&gt;
&lt;br /&gt;
[[Chrome VR]] - VR web browser by [[Google]] that supports [[WebVR]]&lt;br /&gt;
&lt;br /&gt;
[[Oculus Avatars]] - Universal avatars for the [[Oculus (Platform)]]&lt;br /&gt;
&lt;br /&gt;
[[Facebook 360 Capture SDK]] - integrate into VR apps so that you can capture and share your VR experiences through 360 photos and videos.&lt;br /&gt;
&lt;br /&gt;
[[V (Dashboard)]] - [http://www.roadtovr.com/v-universal-dashboard-virtual-reality-any-vr-experience/ reference 1] - a dashboard that pipes websites and mobile apps directly into any VR games or experiences.&lt;br /&gt;
&lt;br /&gt;
[[Google VR]] - Google&#039;s VR platform. Consists of [[Cardboard]] for the low end and [[Daydream]] for the high end.&lt;br /&gt;
&lt;br /&gt;
[[Destinations]] - Valve&#039;s free tool that allows anyone to create realistic VR worlds with [[Photogrammetry]].&lt;br /&gt;
&lt;br /&gt;
[[Camera Effects Platform]] - [[AR Studio]] - [[Frame Studio]] - Facebook&#039;s AR platform for camera&lt;br /&gt;
&lt;br /&gt;
[[Vuforia]] - AR SDK created by [[Qualcomm]], later acquired by PTC&lt;br /&gt;
&lt;br /&gt;
[[VRidge]] - allows the user to play [[Rift]] and [[Vive]] games with [[Cardboard]]&lt;br /&gt;
&lt;br /&gt;
[[The Lab Renderer]] - [http://steamcommunity.com/games/250820/announcements/detail/604985915045842668 reference 1]&lt;br /&gt;
&lt;br /&gt;
[[Ansel]] - NVIDIA&#039;s screenshot tool that allows users to take 360-degree 3D images inside games.&lt;br /&gt;
&lt;br /&gt;
[[VRWorks]] - update the page with information from [https://developer.nvidia.com/vrworks source 1], add VRWorks Audio&lt;br /&gt;
&lt;br /&gt;
[[IC.IDO]] - VR CAD Simulation by ESI Group&lt;br /&gt;
&lt;br /&gt;
[[Vizor]] - easily create and publish VR content on the web&lt;br /&gt;
&lt;br /&gt;
[[VLC]] - popular video player that supports 360 videos and VR HMDs.&lt;br /&gt;
&lt;br /&gt;
[[The View]] - [http://theviewer.co/getstarted source] - create VR Tours easily.&lt;br /&gt;
&lt;br /&gt;
[[Phantom]] - AR OS&lt;br /&gt;
&lt;br /&gt;
[[Decentraland]] - [https://decentraland.org/ ref 1] - Peer-to-peer, blockchain-based [[metaverse]]&lt;br /&gt;
&lt;br /&gt;
[[SteamVR Home]] - [https://steamcommunity.com/games/250820/announcements/detail/1256913672017157095 ref 1]&lt;br /&gt;
&lt;br /&gt;
[[Pixvana&#039;s SPIN Play SDK]] - enables playback and streaming of 360 degree / VR content, powers Steam&#039;s 360 Video Player&lt;br /&gt;
&lt;br /&gt;
[[OpenVR Recorder]] - [https://www.roadtovr.com/openvr-recorder-powerful-tool-capturing-tracking-input-data/ ref 1] - record OpenVR tracking data from headsets, motion controllers and Vive Trackers.&lt;br /&gt;
&lt;br /&gt;
==Apps==&lt;br /&gt;
===Games===&lt;br /&gt;
[[Doom VFR]] - VR title based on the latest Doom game&lt;br /&gt;
&lt;br /&gt;
[[Fallout 4 VR]] - VR version of Fallout 4 with native support for motion-tracked controllers&lt;br /&gt;
&lt;br /&gt;
[[Fragments]] - AR detective game for the [[HoloLens]].&lt;br /&gt;
&lt;br /&gt;
[[Arktika.1]] - Sci-fi FPS game using [[Touch]], made by 4A Games, the studio behind the Metro series. [[Touch]] exclusive&lt;br /&gt;
&lt;br /&gt;
[[Robo Recall]] - Robo FPS game by [[Epic Games]]&lt;br /&gt;
&lt;br /&gt;
[[SUPERHYPERCUBE]] - Tetris-like VR puzzle game&lt;br /&gt;
&lt;br /&gt;
[[Resident Evil 7: Biohazard]] - VR mode is exclusive to [[PSVR]]&lt;br /&gt;
&lt;br /&gt;
[[EVE: Valkyrie]] - multiplayer spaceship dogfighting game for [[Oculus Rift]] and [[PSVR]]&lt;br /&gt;
&lt;br /&gt;
[[Kingspray Graffiti]] - Graffiti simulator in VR&lt;br /&gt;
&lt;br /&gt;
[[The Unspoken]] - magic-casting PVP VR game built for [[Rift]] with [[Touch]] by [[Insomniac Games]]&lt;br /&gt;
&lt;br /&gt;
[[Dead &amp;amp; Buried]] - gunslinging multiplayer FPS game by [[Oculus Studios]] for the [[Touch]]&lt;br /&gt;
&lt;br /&gt;
[[Minecraft (HoloLens)]]&lt;br /&gt;
&lt;br /&gt;
[[Scorched Battalion]] - turn-based artillery game ([[Oculus&#039; Mobile VR Jam 2015]] Silver Prize)&lt;br /&gt;
&lt;br /&gt;
[[Dreams]] - [[PSVR]] game made by [[Media Molecule]], creators of Little Big Planet.&lt;br /&gt;
&lt;br /&gt;
[[Budget Cuts]] - [http://www.roadtovr.com/hands-on-budget-cuts-inventive-locomotion-is-a-lesson-for-vr-developers/ reference 1] - VR game for [[HTC Vive]] with stealth and portal elements&lt;br /&gt;
&lt;br /&gt;
[[Cloudlands]] - [http://www.roadtovr.com/cloudlands-vr-minigolf-is-here-and-real-minigolf-should-be-scared/ reference 1] - VR minigolf&lt;br /&gt;
&lt;br /&gt;
[[Lucky&#039;s Tale]] - [http://www.gizmag.com/luckys-tale-paul-bettner-interview/41343/ reference 1] - VR platforming game that comes with every [[Rift]]&lt;br /&gt;
&lt;br /&gt;
[[The Gallery: Call of the Starseed]] - adventure game for [[HTC Vive]]&lt;br /&gt;
&lt;br /&gt;
[[Mars 2030]] - [http://www.roadtovr.com/nasa-fusion-mars-2030-virtual-reality-size-of-skyrim/ reference 1] - Realistic Mars exploration game made in collaboration with NASA&lt;br /&gt;
&lt;br /&gt;
[[Eagle Flight]] - [http://www.roadtovr.com/preview-eagle-flight-kinetic-thrill-ride-surprisingly-doesnt-make-want-vomit/ reference 1] - Flying [[VR]] game by [[Ubisoft]]&lt;br /&gt;
&lt;br /&gt;
[[VR Sports Challenge]] - Play football, basketball, baseball and hockey in [[VR]].&lt;br /&gt;
&lt;br /&gt;
[[Edge of Nowhere]] - 3rd person action-adventure game by [[Insomniac Games]].&lt;br /&gt;
&lt;br /&gt;
[[I Expect You To Die]]&lt;br /&gt;
&lt;br /&gt;
[[CasinoVR]] - gambling in VR&lt;br /&gt;
&lt;br /&gt;
[[Golem]] - first-person adventure game for [[PSVR]]&lt;br /&gt;
&lt;br /&gt;
[[Neos Core]] - multi-user, multi-device VR interaction and collaboration tool&lt;br /&gt;
&lt;br /&gt;
[[holos]] - by [http://www.turingvr.com/holos/ turingVR] - VR portal&lt;br /&gt;
&lt;br /&gt;
[[Vive Home]] - [[HTC]]&#039;s VR portal&lt;br /&gt;
&lt;br /&gt;
[[Loci]] - HoloLens mind mapping&lt;br /&gt;
&lt;br /&gt;
[[SketchAR]] - Trace over AR images&lt;br /&gt;
&lt;br /&gt;
[[Plevr]] - brings the streaming service Plex to VR with 3D and 360 video.&lt;br /&gt;
&lt;br /&gt;
[[Pearl (2016)]] - the first 360 video to be nominated for an Oscar.&lt;br /&gt;
&lt;br /&gt;
==Companies and Organizations==&lt;br /&gt;
[[Anduril]]&lt;br /&gt;
&lt;br /&gt;
[[Dreamscape Immersive]] - [https://www.roadtovr.com/amc-vr-movie-theaters-dreamscape-immersive-series-b-investment/ ref 1], [http://www.dreamscapeimmersive.com/ ref 2] - VR theme park / arcade similar to [[The Void]]&lt;br /&gt;
&lt;br /&gt;
[[Surreal]] - [https://surreal.tv/ ref 1] - Immersive VR Studio&lt;br /&gt;
&lt;br /&gt;
[[Neuralink]] - [https://waitbutwhy.com/2017/04/neuralink.html ref 1] - [[Elon Musk]]&#039;s company to develop [[implantable]] [[brain–computer interface]]&lt;br /&gt;
&lt;br /&gt;
[[Reality Caucus]] - [https://delbene.house.gov/media-center/press-releases/reps-delbene-clarke-flores-issa-and-lieu-form-reality-caucus ref 1] - United States Congressional Caucus on Virtual, Augmented and Mixed Reality Technologies&lt;br /&gt;
&lt;br /&gt;
[[GVRA]] - [https://www.gvra.com/ site] - Global Virtual Reality Association&lt;br /&gt;
&lt;br /&gt;
[[VREAL]] - [http://www.roadtovr.com/vreal-virtual-reality-livestreaming-platform-htc-vive-oculus-rift/ reference 1] - VR streaming technology company.&lt;br /&gt;
&lt;br /&gt;
[[IrisVR]] - VR for architecture&lt;br /&gt;
&lt;br /&gt;
[[Envelop VR]] - Data visualization&lt;br /&gt;
&lt;br /&gt;
[[Two Bit Circus]] - VR + Circus&lt;br /&gt;
&lt;br /&gt;
[[Body Labs]] - 3D body modeling&lt;br /&gt;
&lt;br /&gt;
[[Echopixel]] - creator of [[True 3D]], medical visualization software&lt;br /&gt;
&lt;br /&gt;
[[EEVO]] - platform for VR films&lt;br /&gt;
&lt;br /&gt;
[[Virtualitics]] - visualize big data in VR&lt;br /&gt;
&lt;br /&gt;
==Events==&lt;br /&gt;
[[Kaleidoscope World Tour 2016]] - [[Kaleidoscope VR Film Festival]] that showcases VR experiences in 10 cities across 9 countries.&lt;br /&gt;
&lt;br /&gt;
[[VRLA]]&lt;br /&gt;
&lt;br /&gt;
[[Leap Motion 3D Jam 2015]] - contest&lt;br /&gt;
&lt;br /&gt;
[[VR World Congress 2016]] - April 12th 2016 at the Marriott City Centre Hotel in Bristol, U.K.&lt;br /&gt;
&lt;br /&gt;
[[SEA VR]] - based in Seattle; the largest VR convention in the Northwest&lt;br /&gt;
&lt;br /&gt;
[[VR FEST]] - takes place in Las Vegas, same time as CES.&lt;br /&gt;
&lt;br /&gt;
[[VRS Conference]] - Virtual Reality Strategy Conference by Greenlight Insights&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Augmented_reality&amp;diff=36352</id>
		<title>Augmented reality</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Augmented_reality&amp;diff=36352"/>
		<updated>2025-07-28T15:18:11Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{TOCRIGHT}}&lt;br /&gt;
&#039;&#039;&#039;Augmented reality&#039;&#039;&#039; (&#039;&#039;&#039;AR&#039;&#039;&#039;) is a technology that enables a user to view and interact with 3D computer graphics that appear to exist physically in the real world. It enhances perception, allowing environments to be enriched in new ways. A basic characteristic of AR is that it merges the real and the virtual worlds. The technology aims to enhance our perception of reality through the incorporation of computer-generated data and simulations into our senses, creating a reality-based interface &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The goal of AR devices is to supplement the real world with virtual objects by overlaying digital imageries and information on top of physical objects and enabling the users of the devices to seamlessly interact with the digital content. Through the use of [[computer vision]] and [[object recognition]], digital information about the real world around us can not only be viewed but also manipulated in real-time &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In general, the technology combines real and virtual objects, aligns real and virtual objects with each other, and runs interactively in real-time. Furthermore, it is not restricted to a specific type of display technology, like an HMD, and can potentially be applied to other senses beside sight &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In the [[mixed reality]] spectrum, AR is closer to a real environment. Therefore, unlike Virtual Reality, Augmented Reality does not replace the real world with a virtual one. AR simply enhances and modifies the real world &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In 2007, MIT recognized AR as one of ten emerging technologies, reporting that this type of human-computer interaction is on the verge of major adoption &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Technologies==&lt;br /&gt;
===Optical see-through head-mounted displays===&lt;br /&gt;
Many augmented reality devices are transparent, glasses-like wearables called [[optical head-mounted display]]s ([[OHMD]]s). These devices have displays with small projectors that overlay digital information and rendered images on top of objects in the physical world. Built-in cameras use [[computer vision]] and [[object recognition]] to identify objects and decipher the physical environment around the device. Information and data about the surroundings can be streamed into the display in real time. Users can interact with and manipulate the information through various input methods such as voice commands, hand and body gestures, touchpads and more.&lt;br /&gt;
&lt;br /&gt;
==Augmented Reality history timeline==&lt;br /&gt;
[[File:Augmented Reality Studierstube.jpg|thumb|2. Studierstube (Image: www.informit.com)]]&lt;br /&gt;
[[File:Augmented Reality Touring Machine.jpg|thumb|3. Touring Machine (Image: www.informit.com)]]&lt;br /&gt;
[[File:Augmented Reality ARQuake.jpg|thumb|4. ARQuake (Image: www.informit.com)]]&lt;br /&gt;
[[File:Augmented Reality Invisible Train.jpg|thumb|5. The Invisible Train (Image: www.informit.com)]]&lt;br /&gt;
&lt;br /&gt;
The historical development of AR technologies intersects with that of virtual reality. During the initial stages of its evolution, the terms augmented reality and virtual reality had not been coined and, consequently, there wasn’t a clear distinction between the two &amp;lt;ref name=”1”&amp;gt; The Interaction Design Foundation. Augmented Reality - The past, the present and the Future. Retrieved from https://www.interaction-design.org/literature/article/augmented-reality-the-past-the-present-and-the-future&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1901- A Concept of AR===&lt;br /&gt;
L. Frank Baum writes a novel in which there is a concept that can be equated to AR: a set of electronic glasses called the “character marker” that were used to map data onto people &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1957 - The Sensorama===&lt;br /&gt;
The cinematographer Morton Heilig invented the Sensorama. This machine delivered visuals, sound, vibration, and smell to the viewer. It was not controlled by a computer but, nevertheless, it was an attempt at adding additional data to an experience. A patent for the machine, which looked like an arcade cabinet, was filed in 1961 &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; Sawers, P. (2011). Augmented reality: The past, present and future. Retrieved from https://thenextweb.com/insider/2011/07/03/augmented-reality-the-past-present-and-future/#.tnw_tfKQ6SY7&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1968 - The Sword of Damocles HMD===&lt;br /&gt;
Ivan Sutherland and Bob Sproull created a head-mounted display system at Harvard University and the University of Utah. The device presented simple wireframe graphics, used see-through optics, and was suspended from the ceiling by a mechanical arm that tracked the head movements of the user. This iteration of the technology would prove to be impractical for mass use. Sutherland also postulated the concept of the “Ultimate Display” in 1965, which had a great impact on the VR and AR fields of study &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt; Hollerer, T. and Schmalstieg, D. (2016). Introduction to Augmented Reality. Retrieved from http://www.informit.com/articles/article.aspx?p=2516729&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt; Javornik, A. (2016). The mainstreaming of augmented reality: A brief history. Retrieved from https://hbr.org/2016/10/the-mainstreaming-of-augmented-reality-a-brief-history&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; Virtual Reality Society. History of Virtual Reality. Retrieved from https://www.vrs.org.uk/virtual-reality/history.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt; van Krevelen, D. W. F. (2007). Augmented Reality: Technologies, applications, and limitations. Retrieved from https://www.researchgate.net/profile/Rick_Van_Krevelen2/publication/292150312_Augmented_Reality_Technologies_Applications_and_Limitations/links/56ab2b4108aed5a01359c113.pdf&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1975 - Videoplace===&lt;br /&gt;
The Videoplace was developed by the American computer artist Myron Krueger. It was an interface that allowed users to manipulate and interact with virtual objects in real-time. It combined projectors, video cameras and special-purpose hardware, and displayed onscreen silhouettes of the users &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1980 - Wearable computing===&lt;br /&gt;
The computational photography researcher Steve Mann creates the first example of wearable computing &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1990 - Augmented Reality===&lt;br /&gt;
Professor Thomas P. Caudell, a researcher at Boeing, coined the term augmented reality. The term was in reference to an HMD that guided workers through assembling electrical wires in aircraft &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1992 - Virtual Fixtures===&lt;br /&gt;
Virtual Fixtures is developed at USAF Armstrong’s Research Lab by Louis Rosenberg. According to some sources, it can be considered the first properly functioning AR system. It was a system that overlaid sensory information on a workspace to improve human productivity &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1993 - KARMA===&lt;br /&gt;
Feiner and colleagues introduced KARMA - Knowledge-based Augmented Reality for Maintenance Assistance. KARMA was capable of inferring instruction sequences for repair and maintenance procedures &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
During the same year, Fitzmaurice created the first handheld spatially aware display, Chameleon - a precursor to handheld AR. It consisted of a tethered handheld LCD screen that showed the video output of an SGI graphics workstation and was spatially tracked using a magnetic tracking device. The system could provide information to the user, such as details about a location pointed at on a wall-mounted map &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1994 - Medical AR===&lt;br /&gt;
At the University of North Carolina, State and colleagues presented a medical AR application. It allowed a physician to observe a fetus directly within a pregnant woman (Figure 1) &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1995 - NaviCam===&lt;br /&gt;
Rekimoto and Nagao developed a true handheld AR display, although it was still tethered to a workstation. The NaviCam had a forward-facing camera, and from its video feed it could detect color-coded markers, displaying information on a video see-through view &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1996 -  Studierstube===&lt;br /&gt;
The first collaborative AR system is developed by Schmalstieg and colleagues. The Studierstube allowed multiple users to experience virtual objects in the same shared space through the use of HMDs. Each user could see the image in correct perspective from their own viewpoint &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===1997 - The Touring Machine===&lt;br /&gt;
Feiner and colleagues create the first outdoor AR system at Columbia University. The Touring Machine (Figure 3) had a see-through HMD with GPS and orientation tracking. To deliver mobile 3D graphics, the system needed a backpack holding a computer and various sensors, with an early tablet computer for input &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===2000 - ARQuake===&lt;br /&gt;
The AR version of the Quake game is developed by Bruce Thomas, at the University of South Australia (Figure 4). It was an outdoor mobile version of the game developed by Id Software &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===2003 - First autonomous handheld AR system===&lt;br /&gt;
Wagner and Schmalstieg presented a precursor to the current smartphones - a handheld AR system that ran autonomously on a “personal digital assistant.” In 2004, a multiplayer handheld AR game called Invisible Train (Figure 5) was shown at the SIGGRAPH Emerging Technologies show floor &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===2008 - A commercial AR application===&lt;br /&gt;
The first commercial AR application is developed by German agencies in Munich for advertising. It consisted of a printed magazine ad of a BMW Mini. When held in front of a computer’s camera, a user could manipulate the virtual car on the screen and move it around to view different angles &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt; History Hole (2016). The history of augmented reality. Retrieved from http://historyhole.com/history-augmented-reality&amp;lt;/ref&amp;gt;.&lt;br /&gt;
During the same year, the Wikitude AR Travel Guide was released for the G1 Android phone &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===2009 - ARToolkit===&lt;br /&gt;
The AR tracking library ARToolKit is ported to Adobe Flash (as FLARToolKit), bringing marker-based AR to web browsers &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===2013 - Google Glass===&lt;br /&gt;
The open beta of Google Glass is announced &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===2015 - HoloLens===&lt;br /&gt;
Microsoft announces the HoloLens, the company’s AR headset &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===2016 - Pokémon Go===&lt;br /&gt;
Pokémon Go is released and becomes a major success. It is considered an achievement for the AR industry. The game hit its peak in August 2016 with almost 46 million users. While it did not maintain that level of engagement, it showed the potential of AR &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”7”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Hardware and software platforms==&lt;br /&gt;
&#039;&#039;&#039;[[visionOS]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Windows Mixed Reality]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[SmartEyeglass]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Magic Leap]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Snap]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[WebXR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==AR headsets==&lt;br /&gt;
{{see also|AR Glasses}}&lt;br /&gt;
&#039;&#039;&#039;[[XREAL One Pro]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[XREAL Air 2 Pro]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Magic Leap One]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Magic Leap 2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Microsoft HoloLens]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Snap Spectacles 5]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Snap Spectacles 4]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Snap Spectacles 3]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Snap Spectacles 2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Snap Spectacles]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Meta 2]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[castAR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[SmartEyeglass Developer Edition SED-E1|SmartEyeglass Developer Edition]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Impression Pi]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[R-7 Smartglasses]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Atheer One]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Atheer AiR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[R-8 Smartglasses]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[R-9 Smartglasses]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Snap Spectacles 3]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Apps==&lt;br /&gt;
[[AR Apps]]&lt;br /&gt;
&lt;br /&gt;
[[Pokemon Go]] - first smash hit AR app.&lt;br /&gt;
&lt;br /&gt;
IKEA Place - placing furniture into your own living room&lt;br /&gt;
&lt;br /&gt;
Our SolAR - exploring our solar system under the ceiling of your bedroom&lt;br /&gt;
&lt;br /&gt;
The Machines - AR tower defense&lt;br /&gt;
&lt;br /&gt;
Playground AR - virtual blocks and simple animations. Good physics!&lt;br /&gt;
&lt;br /&gt;
Air Measure / Measure Kit - replaces your folding rule&lt;br /&gt;
&lt;br /&gt;
Tunnel AR - an AR-enriched video game by the famous German rap/hip-hop group &amp;quot;Die Fantastischen Vier&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Dumb Ways to Die 3: World Tour - a 2017 app game that also supports AR on many Apple devices!&lt;br /&gt;
&lt;br /&gt;
==Developer Resources==&lt;br /&gt;
===Developer APIs===&lt;br /&gt;
[[ARKit]] - [[Apple]]&#039;s AR API that allows developers to create AR apps for [[iOS]] devices.&lt;br /&gt;
&lt;br /&gt;
[[ARCore]] - [[Google]]&#039;s AR API that allows developers to create AR apps for [[Android]] devices.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Augmented reality]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Sixense_STEM&amp;diff=36351</id>
		<title>Sixense STEM</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Sixense_STEM&amp;diff=36351"/>
		<updated>2025-07-25T11:25:04Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Device Infobox&lt;br /&gt;
|image=&lt;br /&gt;
|Type=[[Input Device]], [[Motion Tracker]]&lt;br /&gt;
|Subtype=[[Hands/Fingers Tracking]], [[Body Tracking]]&lt;br /&gt;
|Platform=Various&lt;br /&gt;
|Creator=&lt;br /&gt;
|Developer=[[Sixense]]&lt;br /&gt;
|Manufacturer=&lt;br /&gt;
|Operating System=&lt;br /&gt;
|Versions=&lt;br /&gt;
|Requires=&lt;br /&gt;
|CPU=&lt;br /&gt;
|GPU=&lt;br /&gt;
|HPU=&lt;br /&gt;
|Memory=&lt;br /&gt;
|Storage=&lt;br /&gt;
|Display=&lt;br /&gt;
|Resolution=&lt;br /&gt;
|Refresh Rate=&lt;br /&gt;
|Persistence=&lt;br /&gt;
|Precision=&lt;br /&gt;
|Field of View=&lt;br /&gt;
|Tracking=6DOF&lt;br /&gt;
|Rotational Tracking=&lt;br /&gt;
|Positional Tracking=&lt;br /&gt;
|Update Rate=&lt;br /&gt;
|Latency=&lt;br /&gt;
|Audio=&lt;br /&gt;
|Camera=&lt;br /&gt;
|Sensors=&lt;br /&gt;
|Input=&lt;br /&gt;
|Connectivity=&lt;br /&gt;
|Power=&lt;br /&gt;
|Weight=&lt;br /&gt;
|Size=&lt;br /&gt;
|Release Date=&lt;br /&gt;
|Price=&lt;br /&gt;
|Website=http://sixense.com/wireless&lt;br /&gt;
}}&lt;br /&gt;
The STEM System by [[Sixense]] is a fully modular motion tracking system designed specifically for emerging [[Virtual Reality Devices|VR systems]] and [[VR Apps|applications]]. The system is based on the same technology that has been used in the [[Razer Hydra]] controller. With the STEM System, Sixense would like to branch out from licensing their technology to manufacturing.&lt;br /&gt;
&lt;br /&gt;
==Features==&lt;br /&gt;
Some of the main advantages of the STEM System over the Razer Hydra and other competing controllers include its wireless operation, modularity, and superb tracking performance.&lt;br /&gt;
&lt;br /&gt;
The dominant idea behind this system is to offer both users and developers a flexible way to accurately track motion in virtual reality applications, and to easily customize the form factor of the controller. The STEM System is able to read data from up to five individual STEM tracking modules, which can be mounted virtually anywhere on the body or fitted inside a plastic sword, a racing wheel, or a replica pistol. As such, the system is capable of full body tracking and application-specific motion control.&lt;br /&gt;
&lt;br /&gt;
STEM tracking modules can be located anywhere within an 8-foot radius from the Base unit, which can, in turn, be approximately 3 feet away from the receiver. The AC electromagnetic field used for position and motion detection operates with less than 10 ms latency. Because the technology does not use inertial sensors, there is no drift caused by acceleration and deceleration.&lt;br /&gt;
&lt;br /&gt;
==Software==&lt;br /&gt;
The STEM System is designed to be more than just a VR controller. Sixense wants to create an open platform for developers, content creators, and end users. The second-generation Sixense SDK is available for Windows, Mac OS, and Linux operating systems, and it provides backward compatibility with existing products powered by Sixense motion tracking technology.&lt;br /&gt;
&lt;br /&gt;
The system is not limited to just new titles. The Sixense MotionCreator for PC is able to adapt motion input to the native control system of virtually any video game. Users can create and share profiles used for seamless translation of motion tracking into button presses and mouse movement.&lt;br /&gt;
&lt;br /&gt;
Those interested in what the Sixense STEM System has to offer can try the Sixense Tuscany Demo. It demonstrates what the system is capable of and acts as a reference implementation for further development.&lt;br /&gt;
&lt;br /&gt;
The STEM System&#039;s Kickstarter campaign launched in September 2013 and greatly surpassed its original goal of $250,000 in October of the same year.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Input Devices]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=IMU&amp;diff=36349</id>
		<title>IMU</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=IMU&amp;diff=36349"/>
		<updated>2025-07-22T11:33:06Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;An &#039;&#039;&#039;inertial measurement unit&#039;&#039;&#039; (&#039;&#039;&#039;IMU&#039;&#039;&#039;) is an electronic [[sensor]] that measures and reports a body&#039;s specific force, angular rate, and sometimes its orientation, using a combination of [[accelerometer]]s, [[gyroscope]]s, and often [[magnetometer]]s.&amp;lt;ref name=&amp;quot;TDK_IMU_Overview&amp;quot;&amp;gt;&lt;br /&gt;
TDK InvenSense. “What is an Inertial Measurement Unit (IMU)?” &lt;br /&gt;
[https://invensense.tdk.com/products/motion-tracking/6-axis/ TDK InvenSense Website]. Accessed May 3, 2025.&amp;lt;/ref&amp;gt; IMUs are fundamental components in [[virtual reality|Virtual Reality (VR)]] and [[augmented reality|Augmented Reality (AR)]] systems for tracking the orientation of [[Head-Mounted Display|HMDs]] and [[Input Devices]] like controllers.&lt;br /&gt;
&lt;br /&gt;
==Components and Function==&lt;br /&gt;
A typical IMU integrates multiple sensor types onto a microchip:&lt;br /&gt;
&lt;br /&gt;
*   &#039;&#039;&#039;[[Accelerometer]]s&#039;&#039;&#039;: Measure proper acceleration (g-force), which includes both acceleration due to movement and the constant pull of [[gravity]].&amp;lt;ref name=&amp;quot;Woodman_IMU_Tutorial&amp;quot;&amp;gt; Woodman, O. J. (2007). An introduction to inertial navigation. University of Cambridge Computer Laboratory Technical Report, UCAM-CL-TR-696. [https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-696.pdf PDF Link]&amp;lt;/ref&amp;gt; When the IMU is relatively static, accelerometers can determine tilt (pitch and roll angles) relative to the direction of gravity. When moving, they measure linear acceleration.&lt;br /&gt;
*   &#039;&#039;&#039;[[Gyroscope]]s&#039;&#039;&#039;: Measure [[angular velocity]] (rate of rotation) around one or more axes.&amp;lt;ref name=&amp;quot;Woodman_IMU_Tutorial&amp;quot;/&amp;gt; In VR/AR, they detect rotational movements corresponding to [[pitch]] (nodding &#039;yes&#039;), [[yaw]] (shaking &#039;no&#039;), and [[roll]] (tilting head side-to-side). Gyroscopes provide fast and responsive rotational data but are prone to [[sensor drift]] over time.&lt;br /&gt;
*   &#039;&#039;&#039;[[Magnetometer]]s&#039;&#039;&#039; (Optional but common): Measure the strength and direction of the local [[magnetic field]], typically the Earth&#039;s magnetic field. They act like a compass to provide an absolute reference for the yaw orientation, helping to correct gyroscope drift around the vertical axis.&amp;lt;ref name=&amp;quot;Woodman_IMU_Tutorial&amp;quot;/&amp;gt; However, they are susceptible to interference from nearby magnetic materials or electronic devices.&lt;br /&gt;
&lt;br /&gt;
When an IMU includes all three sensors (accelerometer, gyroscope, and magnetometer), it is sometimes referred to as a 9-axis IMU or a [[MARG]] (Magnetic, Angular Rate, and Gravity) sensor.&amp;lt;ref&amp;gt;Madgwick, Sebastian OH, Andrew JL Harrison, and Ravi Vaidyanathan. &amp;quot;Estimation of IMU and MARG orientation using a gradient descent algorithm.&amp;quot; IEEE international conference on rehabilitation robotics. IEEE, 2011.&amp;lt;/ref&amp;gt;&lt;br /&gt;
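&lt;br /&gt;
As a simple illustration of the accelerometer&#039;s role, the sketch below (in Python, with made-up sample values) computes static tilt from a single gravity reading; note that yaw cannot be recovered from gravity alone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Static tilt from one accelerometer sample (m/s^2). At rest the&lt;br /&gt;
# sensor measures only gravity, whose direction gives pitch and roll.&lt;br /&gt;
ax, ay, az = 0.0, 4.905, 8.496         # about 30 degrees of roll&lt;br /&gt;
pitch = np.degrees(np.arctan2(-ax, np.sqrt(ay * ay + az * az)))&lt;br /&gt;
roll = np.degrees(np.arctan2(ay, az))&lt;br /&gt;
print(pitch, roll)                     # prints 0.0 and about 30.0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;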
&lt;br /&gt;
==Sensor Fusion==&lt;br /&gt;
Raw data from individual sensors can be noisy (for example accelerometers during fast movement) and inaccurate (for example gyroscopes drift). [[Sensor fusion]] algorithms, such as [[Kalman filter]]s or complementary filters, are essential.&amp;lt;ref name=&amp;quot;Mahony_Filter&amp;quot;&amp;gt;&lt;br /&gt;
Mahony, R.; Hamel, T.; Pflimlin, J‑M. “Nonlinear Complementary Filters on the Special Orthogonal Group.” &lt;br /&gt;
IEEE Transactions on Automatic Control, 53 (5) (2008): 1203‑1218. &lt;br /&gt;
[https://doi.org/10.1109/TAC.2008.923738 DOI link]&amp;lt;/ref&amp;gt; These algorithms intelligently combine the data from the accelerometers, gyroscopes (and magnetometers, if present) to produce a single, more accurate, stable, and low-latency estimate of the device&#039;s orientation in real-time.&lt;br /&gt;
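&lt;br /&gt;
A minimal, illustrative sketch of the complementary-filter idea follows (Python; the blending constant and variable names are assumptions for illustration, not taken from any particular product):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# One complementary-filter update for pitch and roll (radians).&lt;br /&gt;
# alpha close to 1 trusts the gyro short-term; the accelerometer&lt;br /&gt;
# slowly pulls the estimate back and cancels gyro drift.&lt;br /&gt;
def update(pitch, roll, gyro_xy, accel_xyz, dt, alpha=0.98):&lt;br /&gt;
    # 1. Integrate angular velocity: responsive, but drifts.&lt;br /&gt;
    pitch_g = pitch + gyro_xy[0] * dt&lt;br /&gt;
    roll_g = roll + gyro_xy[1] * dt&lt;br /&gt;
    # 2. Tilt from gravity: drift-free, but noisy while moving.&lt;br /&gt;
    ax, ay, az = accel_xyz&lt;br /&gt;
    pitch_a = np.arctan2(-ax, np.sqrt(ay * ay + az * az))&lt;br /&gt;
    roll_a = np.arctan2(ay, az)&lt;br /&gt;
    # 3. Blend the two estimates.&lt;br /&gt;
    return (alpha * pitch_g + (1 - alpha) * pitch_a,&lt;br /&gt;
            alpha * roll_g + (1 - alpha) * roll_a)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;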
&lt;br /&gt;
== Role in VR/AR ==&lt;br /&gt;
IMUs are crucial for providing low-latency [[rotational tracking]], which is essential for creating a sense of [[immersion]] and preventing [[motion sickness]].&amp;lt;ref name=&amp;quot;LaValle_VR_Book&amp;quot;&amp;gt; LaValle, S. M. (2016). Virtual Reality. Cambridge University Press. Chapter 9: Tracking. [http://lavalle.pl/vr/book.html Online Book Link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Importance in Head Tracking ===&lt;br /&gt;
IMUs provide the rapid [[orientation tracking]] needed to update the virtual view in sync with the user&#039;s head movements. This low latency is critical for user comfort. The typical update rate of modern IMUs used in VR headsets is between 500 Hz and 1000 Hz, much faster than most visual tracking systems can achieve alone.&amp;lt;ref&amp;gt;Niehorster, Diederick C., Li Li, and Markus Lappe. &amp;quot;The accuracy and precision of position and orientation tracking in the HTC Vive virtual reality system for scientific research.&amp;quot; i-Perception 8.3 (2017).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===3 Degrees of Freedom (DoF)===&lt;br /&gt;
An IMU inherently provides [[Degrees of Freedom|3 DoF tracking]], measuring orientation changes (pitch, yaw, roll). This is sufficient for basic VR experiences like 360-degree video viewing on mobile VR headsets where the user&#039;s physical position in the room is not tracked.&lt;br /&gt;
&lt;br /&gt;
===6DoF Tracking Systems===&lt;br /&gt;
For full [[6DoF]] tracking (which includes [[positional tracking]] translation along X, Y, and Z axes), IMU data is combined via sensor fusion with data from other tracking systems. These can include:&lt;br /&gt;
*   [[Inside-out tracking]]: Cameras on the HMD observe the external environment.&lt;br /&gt;
*   [[Outside-in tracking]]: External sensors (like cameras or [[lighthouse tracking|base stations]]) observe markers on the HMD and controllers.&lt;br /&gt;
*   [[Camera-based tracking]]: General term encompassing various visual tracking methods.&lt;br /&gt;
In these systems, the IMU provides the high-frequency orientation updates, while the positional tracking system provides absolute position data and periodically corrects for any accumulated IMU drift.&amp;lt;ref name=&amp;quot;LaValle_VR_Book&amp;quot;/&amp;gt;&amp;lt;ref&amp;gt;Hyvärinen, Timo, et al. &amp;quot;Sensor fusion for head tracking in augmented reality applications.&amp;quot; 2019 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). IEEE, 2019.&amp;lt;/ref&amp;gt;&lt;br /&gt;
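&lt;br /&gt;
This division of labor can be made concrete with a toy sketch (Python; the rates and blend factor are illustrative assumptions, not from any shipping headset): the IMU dead-reckons position at high rate, and each camera fix pulls the estimate back toward the absolute position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
class FusionSketch:&lt;br /&gt;
    # IMU dead reckoning at 1000 Hz plus absolute positional fixes&lt;br /&gt;
    # (e.g. from camera tracking) at a much lower rate.&lt;br /&gt;
    def __init__(self):&lt;br /&gt;
        self.pos = np.zeros(3)&lt;br /&gt;
        self.vel = np.zeros(3)&lt;br /&gt;
&lt;br /&gt;
    def imu_step(self, accel_world, dt=0.001):&lt;br /&gt;
        # Integrate gravity-compensated acceleration; the error&lt;br /&gt;
        # grows quadratically, so this cannot run alone for long.&lt;br /&gt;
        self.vel += accel_world * dt&lt;br /&gt;
        self.pos += self.vel * dt&lt;br /&gt;
&lt;br /&gt;
    def camera_fix(self, pos_abs, beta=0.2):&lt;br /&gt;
        # Pull the estimate toward the absolute fix, cancelling&lt;br /&gt;
        # the drift accumulated since the previous fix.&lt;br /&gt;
        self.pos = (1 - beta) * self.pos + beta * pos_abs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;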
&lt;br /&gt;
== Limitations and Correction ==&lt;br /&gt;
While essential, IMUs have inherent limitations:&lt;br /&gt;
&lt;br /&gt;
*   &#039;&#039;&#039;[[Sensor Drift]]&#039;&#039;&#039;: Gyroscopes accumulate small errors over time, leading to a gradual mismatch between the tracked orientation and the real-world orientation. This is particularly noticeable in yaw if uncorrected.&lt;br /&gt;
*   &#039;&#039;&#039;Magnetic Interference&#039;&#039;&#039;: Magnetometers can be disturbed by ferrous materials or strong magnetic fields in the environment, leading to inaccurate yaw readings. Advanced sensor fusion algorithms may attempt to detect and compensate for such interference.&lt;br /&gt;
*   &#039;&#039;&#039;No Positional Data&#039;&#039;&#039;: By themselves, IMUs cannot determine a device&#039;s position in space; they only measure rotation and linear acceleration, not absolute location or translational velocity relative to the world.&lt;br /&gt;
&lt;br /&gt;
VR/AR systems address these limitations, particularly drift, through:&lt;br /&gt;
*   Visual correction using cameras or external reference points (in 6DoF systems)&lt;br /&gt;
*   [[Complementary filtering]] combining accelerometer (gravity vector) and gyroscope data for tilt correction&lt;br /&gt;
*   [[Kalman filtering]] algorithms integrating multiple sensor inputs and predictive models&lt;br /&gt;
*   Magnetometer data (if available and reliable) for absolute yaw correction&lt;br /&gt;
*   [[Zero velocity updates]] (ZUPTs) during periods of detected stillness to reset velocity error accumulation&amp;lt;ref&amp;gt;Cadena, Cesar, et al. &amp;quot;Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age.&amp;quot; IEEE Transactions on robotics 32.6 (2016): 1309-1332.&amp;lt;/ref&amp;gt;&lt;br /&gt;
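&lt;br /&gt;
The last of these is simple to sketch (Python; the stillness thresholds are illustrative assumptions):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
GRAVITY = 9.81  # m/s^2&lt;br /&gt;
&lt;br /&gt;
# Zero-velocity update (ZUPT): when the sensors say the device is&lt;br /&gt;
# still, reset the integrated velocity so its error stops growing.&lt;br /&gt;
def is_still(accel, gyro, acc_tol=0.05, gyro_tol=0.01):&lt;br /&gt;
    still_acc = abs(np.linalg.norm(accel) - GRAVITY) &amp;lt; acc_tol&lt;br /&gt;
    still_gyro = np.linalg.norm(gyro) &amp;lt; gyro_tol&lt;br /&gt;
    return still_acc and still_gyro&lt;br /&gt;
&lt;br /&gt;
def apply_zupt(vel, accel, gyro):&lt;br /&gt;
    return np.zeros(3) if is_still(accel, gyro) else vel&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;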
&lt;br /&gt;
==IMU Specifications for VR/AR==&lt;br /&gt;
For optimal performance in VR/AR applications, IMUs typically require:&lt;br /&gt;
&lt;br /&gt;
*   Low latency (&amp;lt; 2ms sensor processing time desirable)&lt;br /&gt;
*   High update rate (500-1000Hz)&lt;br /&gt;
*   High precision gyroscopes (&amp;lt; 0.01 degrees/second drift)&lt;br /&gt;
*   Low noise accelerometers&lt;br /&gt;
*   Efficient power consumption&lt;br /&gt;
*   Small form factor&lt;br /&gt;
*   Integrated processing capabilities (sometimes including basic sensor fusion)&amp;lt;ref&amp;gt;Angelini, Lee, et al. &amp;quot;Understanding sensors: prioritizations for selecting sensors in mobile VR applications.&amp;quot; Internet Research (2022).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Future Developments==&lt;br /&gt;
Next-generation IMUs for VR/AR are focusing on:&lt;br /&gt;
&lt;br /&gt;
*   Reduced power consumption for longer device battery life&lt;br /&gt;
*   Smaller form factors for integration into lighter HMDs and glasses&lt;br /&gt;
*   Integrated [[machine learning|ML]] capabilities for improved motion prediction and pattern recognition&lt;br /&gt;
*   Enhanced sensor fusion algorithms, potentially running on the sensor itself&lt;br /&gt;
*   Further reduction in sensor noise and drift characteristics&amp;lt;ref&amp;gt;Adams, Michael D. &amp;quot;MEMS IMU Navigation with Model Based Dead-Reckoning and One-Way-Travel-Time Acoustic Measurements.&amp;quot; IEEE Journal of Oceanic Engineering (2023).&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Key IMU Manufacturers==&lt;br /&gt;
Several companies manufacture IMUs used in consumer electronics, including VR/AR devices:&lt;br /&gt;
*   [[TDK]] [[Invensense]]&amp;lt;ref name=&amp;quot;TDK_Homepage&amp;quot;&amp;gt; TDK InvenSense - Motion Sensors [https://invensense.tdk.com/products/motion-tracking/ TDK InvenSense Website]. Accessed October 26, 2023.&amp;lt;/ref&amp;gt; - Major provider for consumer electronics.&lt;br /&gt;
*   [[Bosch Sensortec]]&amp;lt;ref name=&amp;quot;Bosch_Homepage&amp;quot;&amp;gt; Bosch Sensortec - IMUs [https://www.bosch-sensortec.com/products/motion-sensors/imus/ Bosch Sensortec Website]. Accessed October 26, 2023.&amp;lt;/ref&amp;gt; - Produces high-performance MEMS sensors.&lt;br /&gt;
*   [[STMicroelectronics]]&amp;lt;ref name=&amp;quot;ST_Homepage&amp;quot;&amp;gt; STMicroelectronics - MEMS Motion Sensors [https://www.st.com/en/mems-and-sensors/mems-motion-sensors.html STMicroelectronics Website]. Accessed October 26, 2023.&amp;lt;/ref&amp;gt; - Manufacturer of various MEMS sensors.&lt;br /&gt;
*   [[Analog Devices]] - Often provides higher-grade IMUs.&lt;br /&gt;
*   [[Xsens]] - Specializes in high-precision motion tracking modules often incorporating IMUs.&amp;lt;ref&amp;gt;Yole Développement. &amp;quot;MEMS &amp;amp; Sensors for Wearables Report.&amp;quot; 2023.&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Notable IMU Models in VR/AR==&lt;br /&gt;
*   &#039;&#039;&#039;MPU-6050&#039;&#039;&#039;: A popular low-cost 6-axis IMU (accelerometer + gyroscope) from InvenSense, used in hobbyist projects and early devices like the [[Oculus Rift DK1]].&amp;lt;ref name=&amp;quot;MPU6050_Datasheet&amp;quot;&amp;gt; InvenSense Inc. MPU-6000 and MPU-6050 Product Specification Revision 3.4. [https://invensense.tdk.com/wp-content/uploads/2015/02/MPU-6000-Datasheet1.pdf Datasheet Link]&amp;lt;/ref&amp;gt;&lt;br /&gt;
*   &#039;&#039;&#039;MPU-9250&#039;&#039;&#039;: An InvenSense 9-axis IMU (adds a magnetometer to the MPU-6xxx series capabilities).&amp;lt;ref name=&amp;quot;MPU9250_Datasheet&amp;quot;&amp;gt; InvenSense Inc. MPU-9250 Product Specification Revision 1.1. [https://invensense.tdk.com/wp-content/uploads/2015/02/PS-MPU-9250A-01-v1.1.pdf Datasheet Link]&amp;lt;/ref&amp;gt; Used in some dev kits and controllers.&lt;br /&gt;
*   &#039;&#039;&#039;ICM-42688-P&#039;&#039;&#039;: A high-performance 6-axis IMU from TDK InvenSense, known for its low noise and stability, used in the Meta [[Quest 2]] headset.&amp;lt;ref name=&amp;quot;Quest2_Teardown_iFixit&amp;quot;&amp;gt;&lt;br /&gt;
iFixit. “Oculus Quest 2 Disassembly.” &lt;br /&gt;
[https://www.ifixit.com/Guide/Oculus+Quest+2+Disassembly/139759 iFixit Repair Guide]. Accessed May 3, 2025.&amp;lt;/ref&amp;gt;&lt;br /&gt;
*   &#039;&#039;&#039;BMI085/BMI270&#039;&#039;&#039;: Bosch IMUs optimized for VR/AR applications, found in devices like the [[Valve Index]] controllers.&amp;lt;ref&amp;gt;Nield, David. &amp;quot;How VR Headsets Are Getting Better Through Improved Tracking.&amp;quot; TechRadar, 2022.&amp;lt;/ref&amp;gt;&lt;br /&gt;
*   &#039;&#039;&#039;LSM6DSO/LSM6DSOX&#039;&#039;&#039;: STMicroelectronics 6-axis IMUs used in various HMDs and AR glasses, including the [[HoloLens 2]].&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]] [[Category:Tracking]] [[Category:Tracking Technology]] [[Category:Hardware]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Positional_tracking&amp;diff=36348</id>
		<title>Positional tracking</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Positional_tracking&amp;diff=36348"/>
		<updated>2025-07-22T11:32:24Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: /* Inertial Tracking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{TOCRIGHT}}&lt;br /&gt;
{{see also|Tracking}}&lt;br /&gt;
&#039;&#039;&#039;Positional tracking&#039;&#039;&#039; is a technology that allows a device to know its position relative to the environment around it. It uses a combination of hardware and software to achieve the detection of its absolute position. It is an essential technology for [[virtual reality]] (VR), making it possible to track movement with six [[degrees of freedom]] (6DOF).&amp;lt;ref name=”1”&amp;gt; StereoLabs. Positional Tracking. Retrieved from https://www.stereolabs.com/documentation/overview/positional-tracking/introduction.html&amp;lt;/ref&amp;gt;&amp;lt;ref name=”2”&amp;gt; Lang, B. (2013). An introduction to positional tracking and degrees of freedom (DOF). Retrieved from http://www.roadtovr.com/introduction-positional-tracking-degrees-freedom-dof/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Positional tracking is not the same as 3DOF head tracking. 3DOF head tracking only registers the rotation of the head ([[Rotational tracking]]), with movements such as pitch, yaw, and roll. Positional tracking registers the exact position and orientation of the headset in space, recognizing forward/backward, up/down and left/right movement &amp;lt;ref name=”3”&amp;gt; Rohr, F. (2015). Positional tracking in VR: what it is and how it works. Retrieved from http://data-reality.com/positional-tracking-in-vr-what-it-is-and-how-it-works&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Positional tracking VR technology brings various benefits to the VR experience. It can change the viewpoint of the user to reflect different actions like jumping, ducking, or leaning forward; allow for an exact representation of the user’s hands and other objects in the virtual environment; increase the connection between the physical and virtual world by, for example, using hand position to move virtual objects by touch; and detect gestures by analyzing position over time &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt; Boger, Y. (2014). Overview of positional tracking technologies for virtual reality. Retrieved from http://www.roadtovr.com/overview-of-positional-tracking-technologies-virtual-reality/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
It is also known that positional tracking improves the 3D perception of the virtual environment because of parallax (the way objects closer to the eyes move faster than objects farther away). Parallax helps inform the brain about the perception of distance along with stereoscopy &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. 6DOF tracking also drastically reduces motion sickness during the VR experience, which is caused by the disconnect between what is seen by the eyes and what is felt by the vestibular system &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
There are different methods of positional tracking. Choosing which one to apply depends on various factors such as the tracking accuracy and refresh rate required, the tracking area, whether the tracking is indoor or outdoor, cost, power consumption, available computational power, whether the tracked object is rigid or flexible, and whether the objects are well known or can change &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Positional tracking VR technology is a necessity for VR to work properly, since an accurate representation of objects like the head or the hands in the virtual world contributes towards achieving immersion and a greater sense of presence &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”5”&amp;gt; RealVision. The dilemma of positional tracking in cinematic vr films. Retrieved from http://realvision.ae/blog/2016/06/the-dilemma-of-positional-tracking-in-cinematic-vr-films/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Methods of positional tracking==&lt;br /&gt;
&lt;br /&gt;
[[File:HMD and markers.png|thumb|1. Markers on a Sensics HMD (Image: www.roadtovr.com)]]&lt;br /&gt;
[[File:Optical marker.png|thumb|2. Optical marker by Intersense (Image: www.roadtovr.com)]]&lt;br /&gt;
&lt;br /&gt;
There are various methods of positional tracking. The description of the methods provided below is based on Boger (2014) &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Acoustic Tracking===&lt;br /&gt;
&lt;br /&gt;
The measurement of the time it takes for a known acoustic signal to travel between an emitter and a receiver is known as acoustic tracking. Generally, several transmitters are placed in the tracked area and various receivers are placed on the tracked objects. The distance between receiver and transmitter is calculated from the amount of time the acoustic signal takes to reach the receiver. However, for this to work, the system must know when the acoustic signal was sent. The orientation of a rigid object can be determined if the object has multiple receivers placed in known positions. The difference between the arrival times of the acoustic signal at the multiple receivers provides data about the orientation of the object relative to the transmitters.&lt;br /&gt;
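&lt;br /&gt;
The core range computation is a one-liner, sketched below (Python; the speed of sound and the timing value are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C&lt;br /&gt;
&lt;br /&gt;
# Range from time of flight: the receiver knows when the signal&lt;br /&gt;
# was emitted, so the measured delay maps directly to distance.&lt;br /&gt;
def range_m(t_emit, t_arrive):&lt;br /&gt;
    return SPEED_OF_SOUND * (t_arrive - t_emit)&lt;br /&gt;
&lt;br /&gt;
print(range_m(0.0, 0.0029))  # a 2.9 ms delay is about 0.99 m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;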
&lt;br /&gt;
One of the downsides of acoustic tracking is that it requires time-consuming calibration to function properly. The acoustic trackers are also susceptible to measurement error due to ambient disturbances such as noise and do not provide high update rates. Due to these disadvantages, acoustic tracking systems are commonly used with other sensors (e.g. inertial sensors) to provide better accuracy.&lt;br /&gt;
&lt;br /&gt;
Intersense, an American technology company, has developed successful acoustic tracking systems.&lt;br /&gt;
&lt;br /&gt;
===Wireless tracking===&lt;br /&gt;
Wireless tracking uses a set of anchors that are placed around the perimeter of the tracking space and one or more tags that are tracked. This system is similar in concept to GPS, but works both indoors and outdoors, and is sometimes referred to as indoor GPS. The tags [[triangulation (computer vision)|triangulate]] their 3D position using the anchors placed around the perimeter. A wireless technology called Ultra Wideband has enabled position tracking to reach a precision of under 100 mm. By using sensor fusion and high-speed algorithms, the tracking precision can reach the 5 mm level with update speeds of 200 Hz or 5 ms [[Latency (engineering)|latency]].&lt;br /&gt;
&amp;lt;ref name=”6”&amp;gt; IndoTraq. Positional Tracking. Retrieved from http://indotraq.com/?page_id=122&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=”7”&amp;gt; Hands-On With Indotraq. Retrieved from https://www.vrfocus.com/2016/01/hands-on-with-indotraq/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;ref name=”8”&amp;gt; INDOTRAQ INDOOR TRACKING FOR VIRTUAL REALITY. Retrieved from https://blog.abt.com/2016/01/ces-2016-indotraq-indoor-tracking-for-virtual-reality/&amp;lt;/ref&amp;gt;&lt;br /&gt;
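&lt;br /&gt;
A minimal sketch of the underlying multilateration step follows (Python, assuming at least four anchors with known positions; this standard least-squares linearization is illustrative, not any vendor&#039;s actual algorithm):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Solve for a tag position from anchor positions and measured ranges.&lt;br /&gt;
# Subtracting the first range equation from the rest linearizes the&lt;br /&gt;
# system, which least squares then solves directly.&lt;br /&gt;
def trilaterate(anchors, dists):&lt;br /&gt;
    a0, d0 = anchors[0], dists[0]&lt;br /&gt;
    A = 2.0 * (anchors[1:] - a0)&lt;br /&gt;
    b = (d0 ** 2 - dists[1:] ** 2&lt;br /&gt;
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))&lt;br /&gt;
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)&lt;br /&gt;
    return pos&lt;br /&gt;
&lt;br /&gt;
anchors = np.array([[0.0, 0, 0], [4, 0, 0], [0, 4, 0], [0, 0, 3]])&lt;br /&gt;
tag = np.array([1.0, 2.0, 1.5])&lt;br /&gt;
dists = np.linalg.norm(anchors - tag, axis=1)&lt;br /&gt;
print(trilaterate(anchors, dists))  # recovers [1.0, 2.0, 1.5]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;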
&lt;br /&gt;
===Inertial tracking===&lt;br /&gt;
Inertial tracking is made possible by accelerometers and gyroscopes, commonly bundled together in chips called [[IMU]]s. Accelerometers measure linear acceleration; because acceleration is the time derivative of velocity, and velocity the time derivative of position, integrating the acceleration once yields velocity and integrating it twice yields the position of the object relative to an initial point &amp;lt;ref name=”4”&amp;gt;&amp;lt;/ref&amp;gt;. A gyroscope measures angular velocity. It is a solid-state component based on microelectromechanical systems (MEMS) technology and operates on the same principles as a mechanical gyro. From the angular velocity data provided by the gyroscope, the angular position relative to the initial point is calculated.&lt;br /&gt;
&lt;br /&gt;
This technology is inexpensive and can provide high update rates as well as low latency. On the other hand, the calculations (i.e. integration and double integration) that turn the accelerometer and gyroscope readings into a position accumulate error over time, resulting in significant drift and decreasing this method’s accuracy.&lt;br /&gt;
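&lt;br /&gt;
The drift can be demonstrated in a few lines of Python: the toy sketch below double-integrates readings from a stationary accelerometer that carries a small constant bias (an invented figure) and shows the position error growing quadratically with time.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
dt = 0.001                           # 1 kHz IMU sample rate, an assumed figure&lt;br /&gt;
t = np.arange(0.0, 10.0, dt)&lt;br /&gt;
true_accel = np.zeros_like(t)        # the device is actually standing still&lt;br /&gt;
bias = 0.02                          # constant accelerometer bias in m/s^2, invented&lt;br /&gt;
measured = true_accel + bias&lt;br /&gt;
&lt;br /&gt;
velocity = np.cumsum(measured) * dt          # first integration: velocity&lt;br /&gt;
position = np.cumsum(velocity) * dt          # second integration: position&lt;br /&gt;
&lt;br /&gt;
# After 10 s, a 0.02 m/s^2 bias has produced roughly&lt;br /&gt;
# 0.5 * bias * t**2 = 1 m of position drift.&lt;br /&gt;
print(position[-1])&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;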
&lt;br /&gt;
===Magnetic Tracking===&lt;br /&gt;
&lt;br /&gt;
This method measures the magnitude of the magnetic field along different directions. Normally, the system has a base station that generates a magnetic field whose strength diminishes as the distance between the measurement point and the base station increases. The magnetic field also allows the orientation to be determined: if the measured object is rotated, the distribution of the magnetic field along the various axes changes.&lt;br /&gt;
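&lt;br /&gt;
As a toy illustration of the distance measurement, the sketch below assumes a dipole-like model in which field magnitude falls off with the cube of distance (a common first-order approximation, not the algorithm of any specific product) and inverts it to estimate range; the emitter constant is invented.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
def field_magnitude(k, r):&lt;br /&gt;
    # Dipole-like model: field strength falls off with the cube of distance.&lt;br /&gt;
    return k / r ** 3&lt;br /&gt;
&lt;br /&gt;
def estimate_range(k, b_measured):&lt;br /&gt;
    # Invert the model to recover distance from a measured field magnitude.&lt;br /&gt;
    return (k / b_measured) ** (1.0 / 3.0)&lt;br /&gt;
&lt;br /&gt;
k = 4.0e-5                       # emitter constant, made up for the demo&lt;br /&gt;
b = field_magnitude(k, 1.5)      # simulate a reading taken 1.5 m away&lt;br /&gt;
print(estimate_range(k, b))      # recovers 1.5&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;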
&lt;br /&gt;
In a controlled environment, magnetic tracking’s accuracy is good. However, it can suffer interference from conductive materials near the emitter or sensors, from magnetic fields generated by other devices, and from ferromagnetic materials in the tracking area.&lt;br /&gt;
The [[Razer Hydra]] motion controllers are an example of this type of positional tracking implemented in a product.&lt;br /&gt;
&lt;br /&gt;
Most [[Head-mounted display|Head-mounted displays]] (HMDs) and smartphones contain [[IMUs]] or [[magnetometer|magnetometers]] that detect the magnetic field of Earth.&lt;br /&gt;
&lt;br /&gt;
Magnetic tracking can use AC or DC fields. Its position and orientation measurements are absolute rather than integrated, so it does not depend on a Kalman filter to suppress drift, and in a clean environment it can be very accurate. There are, however, constraints on its usage: it cannot be relied on in environments with a lot of metal, due to interference.&lt;br /&gt;
&lt;br /&gt;
===Optical Tracking===&lt;br /&gt;
&lt;br /&gt;
There are various optical tracking methods. What they all have in common is the use of cameras to gather positional information.&lt;br /&gt;
&lt;br /&gt;
====Tracking with markers====&lt;br /&gt;
&lt;br /&gt;
This optical tracking method places a specific pattern of markers on an object (Figure 1). One or more cameras look for the markers, and algorithms extract the position of the object from the markers that are visible. From the difference between what the camera detects and the known marker pattern, an algorithm calculates the position and orientation of the tracked object. The pattern of markers placed on the tracked object is not random: the number, location, and arrangement of the markers are carefully chosen to provide the system with as much information as possible, so the algorithms are not left with missing data.&lt;br /&gt;
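&lt;br /&gt;
Recovering a pose from detected markers is an instance of the perspective-n-point (PnP) problem. The sketch below shows the idea using OpenCV’s solvePnP; the marker coordinates, detected pixel positions, and camera intrinsics are all invented, and a real system would take the 2D points from its marker detector and the intrinsics from calibration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
import cv2&lt;br /&gt;
&lt;br /&gt;
# 3D marker positions on the tracked object, in the object frame (metres).&lt;br /&gt;
object_points = np.array([[0.0, 0.0, 0.0],&lt;br /&gt;
                          [0.1, 0.0, 0.0],&lt;br /&gt;
                          [0.1, 0.1, 0.0],&lt;br /&gt;
                          [0.0, 0.1, 0.0]], dtype=np.float64)&lt;br /&gt;
&lt;br /&gt;
# Pixel coordinates where the marker detector found those markers.&lt;br /&gt;
image_points = np.array([[320.0, 240.0],&lt;br /&gt;
                         [400.0, 242.0],&lt;br /&gt;
                         [398.0, 320.0],&lt;br /&gt;
                         [322.0, 318.0]], dtype=np.float64)&lt;br /&gt;
&lt;br /&gt;
# Pinhole camera intrinsics from calibration (invented here).&lt;br /&gt;
camera_matrix = np.array([[800.0, 0.0, 320.0],&lt;br /&gt;
                          [0.0, 800.0, 240.0],&lt;br /&gt;
                          [0.0, 0.0, 1.0]])&lt;br /&gt;
dist_coeffs = np.zeros(5)   # assume an undistorted image&lt;br /&gt;
&lt;br /&gt;
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)&lt;br /&gt;
# rvec and tvec give the rotation and translation of the object in the camera frame.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;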
&lt;br /&gt;
There are two types of markers: passive and active. Passive markers reflect infrared (IR) light back towards the light source; in this case, the camera provides the IR illumination that the markers reflect for detection. Active markers are IR lights that flash periodically and are detected by the cameras. Choosing between the two types depends on several variables, such as distance, type of surface, and the required viewing direction.&lt;br /&gt;
&lt;br /&gt;
====Tracking with visible markers====&lt;br /&gt;
&lt;br /&gt;
Visible markers (Figure 2) placed in a predetermined arrangement are also used in optical tracking. The camera detects the markers and their positions, from which the position and orientation of the object are determined. For example, visible markers can be placed in a specific pattern on the tracking area, and an HMD with cameras can then use them to calculate its position. The shape and size of these markers can vary; what matters is that the cameras can identify them easily.&lt;br /&gt;
&lt;br /&gt;
====Markerless tracking====&lt;br /&gt;
&lt;br /&gt;
Objects can be tracked without markers if their geometry is known. With markerless tracking, the system’s camera searches the received image for features such as edges or color transitions and compares them against the known 3D model.&lt;br /&gt;
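&lt;br /&gt;
Markerless methods rest on detecting natural image features. As a small, non-authoritative example, the sketch below uses OpenCV’s ORB detector to find such features in two frames and match them; the file names are placeholders, and a full tracker would feed the matches into a pose estimator such as the PnP step shown above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import cv2&lt;br /&gt;
&lt;br /&gt;
# Two consecutive camera frames; the file names are placeholders.&lt;br /&gt;
frame_a = cv2.imread(&#039;frame_a.png&#039;, cv2.IMREAD_GRAYSCALE)&lt;br /&gt;
frame_b = cv2.imread(&#039;frame_b.png&#039;, cv2.IMREAD_GRAYSCALE)&lt;br /&gt;
&lt;br /&gt;
orb = cv2.ORB_create(nfeatures=500)&lt;br /&gt;
kp_a, desc_a = orb.detectAndCompute(frame_a, None)&lt;br /&gt;
kp_b, desc_b = orb.detectAndCompute(frame_b, None)&lt;br /&gt;
&lt;br /&gt;
# Hamming distance suits binary ORB descriptors; crossCheck keeps&lt;br /&gt;
# only mutual best matches.&lt;br /&gt;
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)&lt;br /&gt;
matches = sorted(matcher.match(desc_a, desc_b), key=lambda m: m.distance)&lt;br /&gt;
print(len(matches))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;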
&lt;br /&gt;
====Depth map tracking====&lt;br /&gt;
&lt;br /&gt;
A depth camera uses various technologies to create a real-time map of the distances between the camera and the objects in the tracking area. Tracking is performed by extracting the object to be tracked (e.g. a hand) from the general depth map and analyzing it. An example of a depth camera is Microsoft’s Kinect.&lt;br /&gt;
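&lt;br /&gt;
Here is a minimal sketch of the extraction step, assuming a synthetic depth image in metres: keep only the pixels inside a depth band of interest and take their centroid as a crude object position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
# Synthetic 480x640 depth image in metres: background at 3 m,&lt;br /&gt;
# plus a hand-sized blob at about 0.8 m (all values invented).&lt;br /&gt;
depth = np.full((480, 640), 3.0)&lt;br /&gt;
depth[200:260, 300:360] = 0.8&lt;br /&gt;
&lt;br /&gt;
# Keep only the pixels inside the depth band of interest.&lt;br /&gt;
mask = np.logical_and(np.greater(depth, 0.5), np.less(depth, 1.5))&lt;br /&gt;
&lt;br /&gt;
# The centroid of the masked pixels gives a crude object position.&lt;br /&gt;
rows, cols = np.nonzero(mask)&lt;br /&gt;
print(rows.mean(), cols.mean())   # about (229.5, 329.5)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;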
&lt;br /&gt;
====Sensor Fusion====&lt;br /&gt;
&lt;br /&gt;
Sensor fusion combines more than one tracking technique to improve the estimation of the position and orientation of the tracked object; one method’s disadvantage can be compensated by another. An example is the combination of inertial tracking and optical tracking: the former drifts over time, and the latter is susceptible to markers being hidden (occlusion). By combining both, the position can be estimated from the inertial sensors while markers are occluded, and even when the optical markers are fully visible, the inertial sensors provide updates at a higher rate, improving the overall positional tracking.&lt;br /&gt;
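&lt;br /&gt;
One simple way to fuse a fast, drifting inertial estimate with slow, absolute optical fixes is a complementary filter. The sketch below is an illustrative toy with an arbitrarily chosen blend factor; production trackers typically use a Kalman-style filter instead, but the structure - predict with the fast sensor, correct with the absolute one - is the same.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
ALPHA = 0.98   # trust the fast inertial path, correct slowly with optics&lt;br /&gt;
&lt;br /&gt;
def fuse(prev_pos, velocity, dt, optical_pos=None):&lt;br /&gt;
    # Propagate with the inertial estimate at every step ...&lt;br /&gt;
    predicted = prev_pos + velocity * dt&lt;br /&gt;
    if optical_pos is None:&lt;br /&gt;
        return predicted   # optical markers occluded: dead-reckon on inertial data&lt;br /&gt;
    # ... and blend towards the absolute optical fix when one arrives.&lt;br /&gt;
    return ALPHA * predicted + (1.0 - ALPHA) * optical_pos&lt;br /&gt;
&lt;br /&gt;
pos = np.zeros(3)&lt;br /&gt;
velocity = np.array([0.1, 0.0, 0.0])   # from integrated accelerometer data&lt;br /&gt;
pos = fuse(pos, velocity, 0.001)                              # IMU-only step&lt;br /&gt;
pos = fuse(pos, velocity, 0.001, np.array([0.01, 0.0, 0.0]))  # fused step&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;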
&lt;br /&gt;
===Oculus Rift and HTC Vive’s positional tracking===&lt;br /&gt;
&lt;br /&gt;
The [[Oculus Rift]]’s positional tracking differs from the HTC Vive’s. The Oculus Rift uses [[Constellation]], an array of IR LEDs tracked by a camera, while the HTC Vive uses Valve’s [[Lighthouse]] technology, a laser-based system &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
With the Oculus Rift, movement is limited to the camera’s field of view - when not enough LEDs are visible to the camera, the software falls back on data from the headset’s IMU sensors. With Valve’s position tracking system, the tracking area is swept with non-visible light that the HTC Vive detects using photosensors &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”9”&amp;gt; Buckley, S. (2015). This is how Valve’s amazing lighthouse tracking technology works. Retrieved from http://gizmodo.com/this-is-how-valve-s-amazing-lighthouse-tracking-technol-1705356768&amp;lt;/ref&amp;gt;.&lt;br /&gt;
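&lt;br /&gt;
The core idea behind a laser-sweep system can be reduced to a few lines: a base station emits a synchronization flash and then a laser plane sweeps the room at a known angular rate, so the time at which a photosensor sees the laser encodes its bearing from the base station. The 60 Hz sweep rate below follows published descriptions of Lighthouse, but the code is only a conceptual sketch, not the actual algorithm.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import math&lt;br /&gt;
&lt;br /&gt;
SWEEP_HZ = 60.0   # the laser plane crosses the room 60 times per second&lt;br /&gt;
&lt;br /&gt;
def hit_time_to_angle(t_sync, t_hit):&lt;br /&gt;
    # The sweep covers a full rotation per cycle, so elapsed time since&lt;br /&gt;
    # the sync flash maps linearly onto the bearing of the sensor.&lt;br /&gt;
    return 2.0 * math.pi * SWEEP_HZ * (t_hit - t_sync)&lt;br /&gt;
&lt;br /&gt;
# A sensor that sees the laser 1/240 s after the sync flash sits at&lt;br /&gt;
# 90 degrees (pi/2 radians) from the sweep start.&lt;br /&gt;
print(hit_time_to_angle(0.0, 1.0 / 240.0))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;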
&lt;br /&gt;
===Positional tracking and smartphones===&lt;br /&gt;
&lt;br /&gt;
Positional tracking in mobile VR still struggles to achieve a good level of accuracy, mainly because of the processing power a positional tracking system requires and because using QR codes and external cameras for tracking would contradict the point of a simple, intuitive, and mobile VR experience &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt;. Currently, mobile devices are limited by their form factor and can only track the rotation of a user’s head. Nevertheless, companies are still investing in the development of an accurate positional tracking system for smartphones. Having such a system available to anyone with a VR-capable phone would facilitate the adoption of VR by the general public, possibly unlocking the potential of the VR market &amp;lt;ref name=”3”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”10”&amp;gt; Grubb, J. (2016). Why positional tracking for mobile virtual reality is so damn hard. Retrieved from https://venturebeat.com/2016/02/24/why-positional-tracking-for-mobile-virtual-reality-is-so-damn-hard&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Types of positional tracking==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Inside-out tracking]]&#039;&#039;&#039; - the tracking camera is placed on the device ([[HMD]]) being tracked.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Outside-in tracking]]&#039;&#039;&#039; - the tracking camera(s) are placed in the external environment, with the tracked device (HMD) within their view.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Markerless tracking]]&#039;&#039;&#039; - tracking system that does not use [[fiducial markers]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Markerless inside-out tracking]]&#039;&#039;&#039; - combines markerless tracking with inside-out tracking.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Markerless outside-in tracking]]&#039;&#039;&#039; - combines markerless tracking with outside-in tracking.&lt;br /&gt;
&lt;br /&gt;
==Comparison of tracking systems==&lt;br /&gt;
{{see also|Comparison of tracking systems}}&lt;br /&gt;
{{:Comparison of tracking systems}}&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]] [[Category:Technical Terms]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User:FloraLepage576&amp;diff=36345</id>
		<title>User:FloraLepage576</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User:FloraLepage576&amp;diff=36345"/>
		<updated>2025-07-22T11:25:15Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: This is spam&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User:IvaEre298376&amp;diff=36343</id>
		<title>User:IvaEre298376</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User:IvaEre298376&amp;diff=36343"/>
		<updated>2025-07-17T05:56:17Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User:OliverW15011636&amp;diff=36342</id>
		<title>User:OliverW15011636</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User:OliverW15011636&amp;diff=36342"/>
		<updated>2025-07-17T05:56:13Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=User:DinahClick54754&amp;diff=36341</id>
		<title>User:DinahClick54754</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=User:DinahClick54754&amp;diff=36341"/>
		<updated>2025-07-17T05:56:08Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Virtual_Reality&amp;diff=36337</id>
		<title>Virtual Reality</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Virtual_Reality&amp;diff=36337"/>
		<updated>2025-07-14T02:11:34Z</updated>

		<summary type="html">&lt;p&gt;RealEditor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{TOCRIGHT}}&lt;br /&gt;
&#039;&#039;&#039;Virtual Reality&#039;&#039;&#039; (&#039;&#039;&#039;VR&#039;&#039;&#039;) is a computer-simulated artificial multisensory 3D environment that can mimic the properties and imagery of the physical world, be completely based in fantasy, or be a mix of both. It uses computer-generated environments to simulate a physical presence in a virtual world. The system tracks the user’s position and responds to their inputs. In VR, the senses are temporarily fooled into believing that the artificial environment is real. The goal of a true VR experience is to create [[presence]] - the feeling of physically being somewhere else, of being in another reality &amp;lt;ref name=”0”&amp;gt; Bierbaum, A.D. (2000). VR Juggler: A Virtual Platform for Virtual Reality Application Development. Masters of Science Thesis, Iowa State University, Iowa&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Virtual Reality is an interactive and immersive medium that can be used to create unique experiences that are unattainable elsewhere. VR has the power to transform [[games]], [[films]] and other forms of media. Some enthusiasts call VR the &amp;quot;ultimate input/output device&amp;quot; or the &amp;quot;last medium&amp;quot; because any subsequent medium can be created within VR, using only software &amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
 &lt;br /&gt;
While [[Augmented Reality]] enhances the real world with digital content, Virtual Reality completely replaces the real world with a virtual one, creating a brand new digital environment.&amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt;&amp;lt;ref name=”6”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Main characteristics==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Interactive -&#039;&#039;&#039; The user’s input controls the system and guides the behavior of the VR experience, while also modifying the virtual environment. This type of interaction engages users, connecting them to the application in a more natural way, since the environment responds directly to their actions &amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Immersive -&#039;&#039;&#039; An immersive experience has to provide a sense of presence as well as a sense of engagement. Immersion can be divided into three different aspects:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.&#039;&#039;&#039; According to Bierbaum (2000), “For a VR application to be immersive, it must be perceptually immersive by providing ‘the presentation of sensory cues that convey perceptually to users that they’re surrounded by the computer-generated environment.’” Therefore, the VR must provide the user with an all-encompassing sensory input &amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.&#039;&#039;&#039; The second aspect of immersion is the sense of presence. This implies that the VR experience must give the user the sense they are “in” the virtual world &amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.&#039;&#039;&#039; The final aspect is engagement. It is the degree “to which the user has a sense they are deeply involved in the environment.” &amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Multisensory -&#039;&#039;&#039; Providing a virtual experience that engages multiple human sensory systems increases the level of immersion. While current VR systems cannot provide a full range of stimuli to all human senses, it is expected that this limitation will eventually be overcome, making the VR experience almost or completely indistinguishable from reality. The more senses are involved in the VR experience, the higher the degree of engagement and, consequently, the greater the sense of presence &amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Synthetic -&#039;&#039;&#039; The environment is artificial, created by a computer in real-time &amp;lt;ref name=”0”&amp;gt;&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Hardware Technologies==&lt;br /&gt;
===Head-mounted Display===&lt;br /&gt;
VR is created by [[head-mounted display]]s (HMDs) such as the [[Oculus Rift]]. HMDs utilize [[stereoscopic displays]] and specialized [[lenses]] along with [[#Motion Tracking|motion tracking hardware]] to give the illusion that the user is physically inside the virtual world. &lt;br /&gt;
&lt;br /&gt;
To create the illusion of depth, a display is placed very close to the users&#039; eyes, covering their entire field of view. Two very similar images, rendered from slightly different perspectives, are channeled one to each eye to create [[parallax]], the visual phenomenon whereby our brains perceive depth based on the difference in the apparent position of objects.&lt;br /&gt;
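&lt;br /&gt;
A minimal sketch of this per-eye rendering, assuming a simple 4x4 view matrix and a made-up interpupillary distance (IPD): each eye gets a view matrix shifted sideways by half the IPD, and the resulting horizontal disparity between the two rendered images is what the brain reads as depth.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
IPD = 0.064   # interpupillary distance in metres, a typical adult value&lt;br /&gt;
&lt;br /&gt;
def eye_view(view, eye):&lt;br /&gt;
    # eye is -1 for the left eye and +1 for the right eye.&lt;br /&gt;
    # Shift the camera sideways by half the IPD in view space.&lt;br /&gt;
    offset = np.eye(4)&lt;br /&gt;
    offset[0, 3] = -eye * IPD / 2.0&lt;br /&gt;
    return offset @ view&lt;br /&gt;
&lt;br /&gt;
head_view = np.eye(4)                # head pose from the tracker&lt;br /&gt;
left_view = eye_view(head_view, -1)&lt;br /&gt;
right_view = eye_view(head_view, +1)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;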
&lt;br /&gt;
Specialized lenses are placed between the display and our eyes. The lenses allow our eyes to focus on the images on the display, even though the display is only a few inches in front of our faces. Without lenses, our entire VR world would become blurry because human eyes have trouble focusing on things that are very close.&amp;lt;ref&amp;gt;http://doc-ok.org/?p=1360&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The headset tracks the movement of your head and updates the images shown on the display accordingly, creating the sensation that you are located within the virtual environment. Users of these devices can not only experience computer-simulated environments but also interact with them. Various input methods, from traditional game controllers and keyboards to hand gestures and voice commands, are available or under development.&lt;br /&gt;
&lt;br /&gt;
===Motion Tracking===&lt;br /&gt;
An HMD [[tracking|tracks]] the movement of your head and updates the rendered scene based on its orientation and location, similar to how we look around in real life. There are two types of tracking: [[rotational|rotational tracking]] and [[positional tracking|positional tracking]]. &lt;br /&gt;
&lt;br /&gt;
[[Rotational tracking]] tracks the three rotational movements: pitch, yaw, and roll. It is performed by [[IMUs]], which combine sensors such as [[accelerometer]]s, [[gyroscope]]s, and [[magnetometer]]s. &lt;br /&gt;
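&lt;br /&gt;
As a toy example of the gyroscope part, the sketch below integrates angular-velocity readings into an orientation quaternion; the sample rate and readings are invented, and real trackers additionally use the accelerometer and magnetometer to correct gyro drift.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
def quat_mul(a, b):&lt;br /&gt;
    # Hamilton product of two quaternions stored as [w, x, y, z].&lt;br /&gt;
    w1, x1, y1, z1 = a&lt;br /&gt;
    w2, x2, y2, z2 = b&lt;br /&gt;
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,&lt;br /&gt;
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,&lt;br /&gt;
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,&lt;br /&gt;
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])&lt;br /&gt;
&lt;br /&gt;
def integrate_gyro(q, omega, dt):&lt;br /&gt;
    # First-order integration of body angular velocity in rad/s.&lt;br /&gt;
    q_dot = 0.5 * quat_mul(q, np.array([0.0, omega[0], omega[1], omega[2]]))&lt;br /&gt;
    q = q + q_dot * dt&lt;br /&gt;
    return q / np.linalg.norm(q)   # renormalize to counter numeric drift&lt;br /&gt;
&lt;br /&gt;
q = np.array([1.0, 0.0, 0.0, 0.0])     # identity orientation&lt;br /&gt;
omega = np.array([0.0, 1.0, 0.0])      # 1 rad/s of yaw, an invented reading&lt;br /&gt;
for _ in range(1000):                  # 1 s of samples at 1 kHz&lt;br /&gt;
    q = integrate_gyro(q, omega, 0.001)&lt;br /&gt;
print(q)   # about a 1 radian rotation around the y axis&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;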
&lt;br /&gt;
[[Positional tracking]] tracks the 3 translational movements: forward/back, up/down and left/right. Positional tracking is usually more difficult than rotational tracking and is accomplished through different [[Positional tracking#Types|Types]] and [[Positional tracking#Systems|Systems]].&lt;br /&gt;
&lt;br /&gt;
Motion tracking is not only used to track your head in HMDs but also to track your hands and the rest of your body through various [[Input Devices|input devices]].&lt;br /&gt;
&lt;br /&gt;
===Input Devices===&lt;br /&gt;
[[Input Devices]] allow users to influence and manipulate the virtual realm they are in. These devices range from traditional input methods such as gamepads, mice, and keyboards to novel devices that track the position and orientation of your [[:Category:Hands/Fingers Tracking|hands]], [[:Category:Hands/Fingers Tracking|fingers]], [[:Category:Feet Tracking|feet]] and other [[:Category:Body Tracking|body parts]].&lt;br /&gt;
&lt;br /&gt;
==Platforms==&lt;br /&gt;
&#039;&#039;&#039;[[visionOS]]&#039;&#039;&#039; - &#039;&#039;&#039;[[Apple Vision Pro]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Oculus Rift (Platform)]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Oculus Quest (Platform)]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[SteamVR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[PlayStation VR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[OpenVR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Daydream]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[OSVR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[WebVR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Windows 10 VR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[HP VR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Pico VR]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[[Vive]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
===Additional Information===&lt;br /&gt;
[[VR Headset Demo Locations]]&lt;br /&gt;
&lt;br /&gt;
==VR Headsets==&lt;br /&gt;
{{see also|VR Headsets}}&lt;br /&gt;
{{:VR Headsets}}&lt;br /&gt;
&lt;br /&gt;
==Apps==&lt;br /&gt;
&#039;&#039;&#039;[[VR Apps]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Developer Resources==&lt;br /&gt;
===Game Engines===&lt;br /&gt;
[[Unity]]&lt;br /&gt;
&lt;br /&gt;
[[Unreal Engine]]&lt;br /&gt;
&lt;br /&gt;
===WebVR===&lt;br /&gt;
&lt;br /&gt;
==Virtual Reality History timeline==&lt;br /&gt;
&lt;br /&gt;
[[File:Stereoscopic images.png|thumb|Figure 1. Stereoscopic images (Image: www.vrs.org.uk)]]&lt;br /&gt;
[[File:Link trainer.png|thumb|Figure 2. Link Trainer (Image: www.vrs.org.uk)]]&lt;br /&gt;
[[File:Sensorama.png|thumb|Figure 3. Sensorama (Image: www.vrs.org.uk)]]&lt;br /&gt;
[[File:VR Nasa.png|thumb|Figure 4. Virtual Environment Reality workstation technology (Image: www.sciencefocus.com)]]&lt;br /&gt;
[[File:VR arcade.png|thumb|Figure 5. VR Arcade Machines (Image: www.vrs.org.uk)]]&lt;br /&gt;
&lt;br /&gt;
Virtual reality has a long history of development. While the main advancements happened after the introduction of electronics and computer technology, precursors to the ideas and implementation of VR date as far back as the 1800s. For example, if VR is viewed solely as a means of creating the illusion of being someplace else, the earliest attempts at virtual reality could be considered to be the panoramic murals (or 360-degree murals). These filled the viewer’s field of vision, with the intention of making viewers feel a sense of presence at a certain historical event or scene &amp;lt;ref name=”1”&amp;gt; Virtual Reality Society. History of Virtual Reality. Retrieved from https://www.vrs.org.uk/virtual-reality/history.html&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt; The Franklin Institute. History of Virtual Reality. Retrieved from https://www.fi.edu/virtual-reality/history-of-virtual-reality&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
What follows is a timeline of the main historical dates and events in the development of VR.&lt;br /&gt;
&lt;br /&gt;
===1838 - Stereoscopic viewers and photos===&lt;br /&gt;
&lt;br /&gt;
Charles Wheatstone demonstrated that the brain combines the different two-dimensional images seen by each eye into a single three-dimensional object (Figure 1). The stereoscope, invented in the same year, used twin mirrors to project a single image. Viewing two side-by-side stereoscopic images through a stereoscope gave a sense of depth and immersion &amp;lt;ref name=”1”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”2”&amp;gt;&amp;lt;/ref&amp;gt; &amp;lt;ref name=”3”&amp;gt; Gemsense. Virtual Reality: History, projections and developments. Retrieved from http://gemsense.cool/virtual-reality-developments/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In 1939, William Gruber patented the View-Master stereoscope, which was used for “virtual tourism” and is still produced today. The design principles of the stereoscope can still be found in Google Cardboard and other low-budget VR headsets for smartphones.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;3&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It could be argued that, since the creation of stereoscopic images, people have been interested in making images more three-dimensional in order to enrich the viewing experience.&amp;lt;ref name=&amp;quot;3&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
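Wheatstone’s principle carries straight through to modern VR rendering: present each eye its own image, generated from a slightly different viewpoint. The short Python sketch below illustrates the idea; the IPD value and the stand-in renderer are illustrative assumptions, not taken from any particular SDK.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Minimal sketch of the principle Wheatstone demonstrated, which modern&lt;br /&gt;
# VR rendering still relies on: produce one image per eye from two&lt;br /&gt;
# viewpoints separated horizontally by the interpupillary distance.&lt;br /&gt;
&lt;br /&gt;
IPD = 0.063  # average interpupillary distance in metres (assumed value)&lt;br /&gt;
&lt;br /&gt;
def eye_positions(head_position):&lt;br /&gt;
    # Offset each eye by half the IPD along the horizontal (x) axis.&lt;br /&gt;
    x, y, z = head_position&lt;br /&gt;
    return (x - IPD / 2, y, z), (x + IPD / 2, y, z)&lt;br /&gt;
&lt;br /&gt;
def render(scene, viewpoint):&lt;br /&gt;
    # Stand-in renderer: a real one would rasterise the scene from this&lt;br /&gt;
    # viewpoint; here we simply pair the scene with the camera position.&lt;br /&gt;
    return (scene, viewpoint)&lt;br /&gt;
&lt;br /&gt;
def render_stereo_pair(scene, head_position):&lt;br /&gt;
    # One render per eye yields the side-by-side pair that a stereoscope&lt;br /&gt;
    # (or a VR headset) presents separately to each eye.&lt;br /&gt;
    left, right = eye_positions(head_position)&lt;br /&gt;
    return render(scene, left), render(scene, right)&lt;br /&gt;
&lt;br /&gt;
left_image, right_image = render_stereo_pair(None, (0.0, 1.7, 0.0))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;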
===1929 - Link Trainer===&lt;br /&gt;
&lt;br /&gt;
Edward Link created the first commercial flight simulator, the Link Trainer (Figure 2). It was entirely electromechanical, “controlled by motors that linked to the rudder and steering column to modify the pitch and roll,” with a small motor-driven device that simulated turbulence and other disturbances. Over 500,000 pilots used these flight simulators during World War II for initial training and to improve their skills.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;3&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1936 - Pygmalion’s Spectacles===&lt;br /&gt;
&lt;br /&gt;
Science fiction writer Stanley G. Weinbaum published a short story, Pygmalion’s Spectacles, built around the idea of a pair of goggles that let the wearer experience a different world through holographic recordings, smell, taste, and touch. The concept maps readily onto the VR devices that are currently available or under development.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;3&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot;&amp;gt;Evenden, I. (2016). The history of virtual reality. Retrieved from http://www.sciencefocus.com/article/history-of-virtual-reality&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1956 - The Sensorama===&lt;br /&gt;
&lt;br /&gt;
Cinematographer Morton Heilig developed the Sensorama (patented in 1962), which might be considered the first true VR system. It was an arcade-style cabinet that stimulated several senses: it had a stereoscopic 3D display, stereo speakers, a vibrating seat, fans, and a scent producer, all intended to fully immerse the viewer in a film. Heilig created six short films for his invention: Motorcycle, Belly Dancer, Dune Buggy, Helicopter, A Date with Sabina, and I’m a Coca-Cola Bottle! He intended the Sensorama to be the first in a line of products for the “cinema of the future,” but he was unable to secure financial backing, and his vision never became reality.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;2&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;5&amp;quot;&amp;gt;Robertson, A. and Zelenko, M. Voices from a virtual past. Retrieved from https://www.theverge.com/a/virtual-reality/oral_history&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;6&amp;quot;&amp;gt;Mazuryk, T. and Gervautz, M. (1996). Virtual Reality - History, Applications, Technology and Future (Technical Report). Retrieved from https://www.cg.tuwien.ac.at/research/publications/1996/mazuryk-1996-VRH/TR-186-2-96-06Paper.pdf&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1960 - First VR Head-Mounted Display===&lt;br /&gt;
&lt;br /&gt;
After the Sensorama, Morton Heilig invented the first virtual reality headset, the Telesphere Mask. It worked only with non-interactive films and had no motion tracking, but it provided stereoscopic 3D, a wide field of view, and stereo sound.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;2&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1961 - First motion tracking HMD===&lt;br /&gt;
&lt;br /&gt;
The true precursor of today’s HMDs was developed by two Philco Corporation engineers, Comeau and Bryan. Called Headsight, it incorporated a video screen for each eye and a magnetic motion-tracking system linked to a closed-circuit camera. The device was not developed for virtual reality applications; its goal was to allow the military to view dangerous situations remotely and immersively. The user’s head movements were replicated by a remote camera, allowing them to look around the environment. While Headsight was a step in the evolution of the virtual reality headset, it lacked computer integration and image generation.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1965 - The Ultimate Display===&lt;br /&gt;
&lt;br /&gt;
Ivan Sutherland described the concept of the “Ultimate Display”: a device that could simulate the natural world so realistically that a user could not tell the difference between actual reality and virtual reality. The concept comprised a virtual world viewed through an HMD, augmented with 3D sound and tactile feedback; computer hardware that created the virtual environment and maintained it in real time; and the ability of users to interact with objects in the virtual world in a realistic way. Sutherland suggested that the device would serve as a “window into a virtual world,” and his idea became a core blueprint for the concepts that encompass VR today.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;2&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;6&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1968 - Sword of Damocles===&lt;br /&gt;
&lt;br /&gt;
Ivan Sutherland and Bob Sproull created the Sword of Damocles, an HMD suspended from a mechanical arm mounted on the ceiling. The device was connected to a computer and displayed simple wireframe graphics. The arm tracked the user’s head movements, but the contraption was too heavy and bulky for comfortable use.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;6&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1969 - Artificial Reality===&lt;br /&gt;
&lt;br /&gt;
Myron Krueger developed a series of experiences he called “artificial reality”: computer-generated environments that responded to the people in them. He created several projects, such as Glowflow, Metaplay, and Psychic Space, which led to the development of the Videoplace technology. Videoplace enabled communication between people at a distance within a responsive computer-generated environment.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1975 - Videoplace===&lt;br /&gt;
&lt;br /&gt;
Myron Krueger created Videoplace, the first interactive VR platform. The virtual environment surrounded the user and responded to their movements and actions without the use of goggles or gloves. Videoplace combined several of the artificial reality systems he had previously developed.&amp;lt;ref name=&amp;quot;6&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;7&amp;quot;&amp;gt;Freefly VR. Time travel through virtual reality. Retrieved from https://freeflyvr.com/time-travel-through-virtual-reality/&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1982 - Sayre glove===&lt;br /&gt;
&lt;br /&gt;
The Sayre glove was the first wired glove. It was invented by Daniel J. Sandin and Thomas DeFanti, both of the Electronic Visualization Laboratory at the University of Illinois at Chicago, from an idea by Richard Sayre. The glove used light emitters and photocells in the fingers: when a finger flexed, the amount of light reaching its photocell changed, translating finger movements into electrical signals.&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
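The glove’s operating principle is simple enough to sketch in a few lines of Python: the less light reaches a photocell, the more the finger is bent. The calibration constants and the example reading below are invented for illustration and are not the original hardware’s values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Sketch of the Sayre-glove idea: map the amount of light reaching a&lt;br /&gt;
# photocell to a finger-flex estimate. Calibration values are invented&lt;br /&gt;
# for illustration; a real driver would obtain them from the sensor.&lt;br /&gt;
&lt;br /&gt;
LIGHT_STRAIGHT = 1000  # sensor reading with the finger straight (assumed)&lt;br /&gt;
LIGHT_BENT = 200       # sensor reading with the finger fully bent (assumed)&lt;br /&gt;
&lt;br /&gt;
def flex_fraction(reading):&lt;br /&gt;
    # Linearly interpolate: 0.0 = straight, 1.0 = fully bent.&lt;br /&gt;
    span = LIGHT_STRAIGHT - LIGHT_BENT&lt;br /&gt;
    fraction = (LIGHT_STRAIGHT - reading) / span&lt;br /&gt;
    return min(1.0, max(0.0, fraction))&lt;br /&gt;
&lt;br /&gt;
print(flex_fraction(600))  # a mid-range reading maps to 0.5 (half flexed)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;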
===1985 - NASA project===&lt;br /&gt;
&lt;br /&gt;
The Virtual Environment Workstation Project at NASA’s Ames Research Center in Mountain View, California, was established to produce a VR system that would allow astronauts to control robots outside a space station (Figure 4). The HMD developed for the project had super-wide optics, with almost a 180-degree field of view.&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1987 - The “Virtual Reality” name is coined===&lt;br /&gt;
&lt;br /&gt;
Even though the field had seen decades of development, there was still no term to describe it. In 1987, Jaron Lanier, founder of VPL Research, coined the term “virtual reality”. Through his company, Lanier developed a range of VR gear, such as the DataGlove and the EyePhone headset. The company also made the first surgical simulator, the first vehicle-prototyping simulator, and the first architecture simulators.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;2&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1991 - Virtuality Group===&lt;br /&gt;
&lt;br /&gt;
By this time, VR devices had started to become available to the public, although owning cutting-edge VR equipment was still out of reach for most consumers. The Virtuality Group launched a range of arcade games and machines in which players wore a set of VR goggles (Figure 5). The machines offered immersive stereoscopic 3D visuals and handheld joysticks, and some units were networked together for multiplayer gaming. There were discussions about bringing Virtuality to Atari’s Jaguar console, but the idea was abandoned.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1993 - Sega’s virtual reality headset===&lt;br /&gt;
&lt;br /&gt;
At the Consumer Electronics Show in 1993, Sega announced the Sega VR headset for the Sega Genesis console. The prototype had head tracking, stereo sound, and LCD screens in the visor. The company intended a general release, but technical difficulties prevented it, and the headset never progressed beyond the prototype phase.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===1995 - Nintendo Virtual Boy===&lt;br /&gt;
&lt;br /&gt;
The Virtual Boy was a 3D gaming console, marketed as the first portable console that could display 3D graphics. It was released in Japan and North America but was a commercial failure for Nintendo. Among the reasons were its monochrome red-and-black graphics, the lack of software support, and the difficulty of using the console in a comfortable position. Production was halted in 1996.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Virtual reality in the 21st century===&lt;br /&gt;
&lt;br /&gt;
After 1997, public interest in VR declined in what is known as the [[first VR winter]]. Nevertheless, the first fifteen years of the 21st century brought several advances in the field: computer technology, including small and powerful mobile devices, kept increasing in power while prices became more affordable.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;4&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Interest in VR regained momentum after Palmer Luckey created the first prototype of the Oculus Rift in 2011 and launched a Kickstarter campaign for its development in 2012. The campaign was successful, raising about $2.4 million. In March 2014, Facebook bought Oculus VR for $2 billion. After this, virtual reality development accelerated, with multiple companies investing in their own VR systems. The rise of smartphones with high-density displays and 3D capabilities has also enabled the development of lightweight and practical VR devices.&amp;lt;ref name=&amp;quot;1&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;5&amp;quot; /&amp;gt;&amp;lt;ref name=&amp;quot;7&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Terms]]&lt;br /&gt;
[[Category:Virtual reality]]&lt;/div&gt;</summary>
		<author><name>RealEditor</name></author>
	</entry>
</feed>