<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://vrarwiki.com/index.php?action=history&amp;feed=atom&amp;title=Spatial_mapping</id>
	<title>Spatial mapping - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://vrarwiki.com/index.php?action=history&amp;feed=atom&amp;title=Spatial_mapping"/>
	<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;action=history"/>
	<updated>2026-04-14T05:04:21Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36700&amp;oldid=prev</id>
		<title>Xinreality at 00:46, 28 October 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36700&amp;oldid=prev"/>
		<updated>2025-10-28T00:46:06Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;https://vrarwiki.com/index.php?title=Spatial_mapping&amp;amp;diff=36700&amp;amp;oldid=36681&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36681&amp;oldid=prev</id>
		<title>Xinreality: Text replacement - &quot;e.g.,&quot; to &quot;for example&quot;</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36681&amp;oldid=prev"/>
		<updated>2025-10-28T00:29:36Z</updated>

		<summary type="html">&lt;p&gt;Text replacement - &amp;quot;e.g.,&amp;quot; to &amp;quot;for example&amp;quot;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:29, 28 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l52&quot;&gt;Line 52:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 52:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This was followed by mobile AR frameworks: Apple&amp;#039;s [[ARKit]] in June 2017 integrated visual-inertial odometry (VIO) for iOS devices, revolutionizing mobile AR by solving monocular Visual-Inertial Odometry without requiring depth sensors, instantly enabling 380 million devices.&amp;lt;ref name=&amp;quot;AndreasJakl&amp;quot;&amp;gt;{{cite web |url=https://www.andreasjakl.com/basics-of-ar-slam-simultaneous-localization-and-mapping/ |title=Basics of AR: SLAM – Simultaneous Localization and Mapping |publisher=Andreas Jakl |date=2018-08-14 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt; Google&amp;#039;s [[ARCore]] in 2017 brought SLAM to Android, using similar depth-from-motion algorithms that compare images from different angles combined with IMU measurements to generate depth maps on standard hardware.&amp;lt;ref name=&amp;quot;AndreasJakl&amp;quot;/&amp;gt; Meta&amp;#039;s Oculus Quest (2019) incorporated inside-out tracking with SLAM for standalone VR/AR, eliminating external sensors.&amp;lt;ref name=&amp;quot;MetaAnchorsDev&amp;quot;&amp;gt;{{cite web |url=https://developers.meta.com/horizon/documentation/unity/unity-spatial-anchors-overview/ |title=Spatial Anchors Overview |publisher=Meta for Developers |date=2024-05-15 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This was followed by mobile AR frameworks: Apple&amp;#039;s [[ARKit]] in June 2017 integrated visual-inertial odometry (VIO) for iOS devices, revolutionizing mobile AR by solving monocular Visual-Inertial Odometry without requiring depth sensors, instantly enabling 380 million devices.&amp;lt;ref name=&amp;quot;AndreasJakl&amp;quot;&amp;gt;{{cite web |url=https://www.andreasjakl.com/basics-of-ar-slam-simultaneous-localization-and-mapping/ |title=Basics of AR: SLAM – Simultaneous Localization and Mapping |publisher=Andreas Jakl |date=2018-08-14 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt; Google&amp;#039;s [[ARCore]] in 2017 brought SLAM to Android, using similar depth-from-motion algorithms that compare images from different angles combined with IMU measurements to generate depth maps on standard hardware.&amp;lt;ref name=&amp;quot;AndreasJakl&amp;quot;/&amp;gt; Meta&amp;#039;s Oculus Quest (2019) incorporated inside-out tracking with SLAM for standalone VR/AR, eliminating external sensors.&amp;lt;ref name=&amp;quot;MetaAnchorsDev&amp;quot;&amp;gt;{{cite web |url=https://developers.meta.com/horizon/documentation/unity/unity-spatial-anchors-overview/ |title=Spatial Anchors Overview |publisher=Meta for Developers |date=2024-05-15 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The introduction of LiDAR to consumer devices began with iPad Pro in March 2020 and iPhone 12 Pro in October 2020, using Vertical Cavity Surface Emitting Laser technology with direct Time-of-Flight measurement. This enabled ARKit 3.5&#039;s Scene Geometry API for instant AR with triangle mesh classification into semantic categories.&amp;lt;ref name=&quot;AppleDeveloper&quot;&amp;gt;{{cite web |url=https://developer.apple.com/documentation/arkit/arkit_scene_reconstruction |title=ARKit Scene Reconstruction |publisher=Apple Developer Documentation |date=2020 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt; The 2020s have seen refinements, such as HoloLens 2&#039;s Scene Understanding SDK (2019), which builds on spatial mapping for semantic environmental analysis.&amp;lt;ref name=&quot;MicrosoftDoc&quot;/&amp;gt; Advancements in LiDAR (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;iPhone 12 Pro, 2020) and AI-driven feature detection have further democratized high-fidelity mapping.&amp;lt;ref name=&quot;AndreasJakl&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The introduction of LiDAR to consumer devices began with iPad Pro in March 2020 and iPhone 12 Pro in October 2020, using Vertical Cavity Surface Emitting Laser technology with direct Time-of-Flight measurement. This enabled ARKit 3.5&#039;s Scene Geometry API for instant AR with triangle mesh classification into semantic categories.&amp;lt;ref name=&quot;AppleDeveloper&quot;&amp;gt;{{cite web |url=https://developer.apple.com/documentation/arkit/arkit_scene_reconstruction |title=ARKit Scene Reconstruction |publisher=Apple Developer Documentation |date=2020 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt; The 2020s have seen refinements, such as HoloLens 2&#039;s Scene Understanding SDK (2019), which builds on spatial mapping for semantic environmental analysis.&amp;lt;ref name=&quot;MicrosoftDoc&quot;/&amp;gt; Advancements in LiDAR (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;iPhone 12 Pro, 2020) and AI-driven feature detection have further democratized high-fidelity mapping.&amp;lt;ref name=&quot;AndreasJakl&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Microsoft launched HoloLens 2 in 2019 with improved Azure Kinect sensors, and Meta Quest 3 arrived in 2023 with full-color passthrough, depth sensing via IR patterned light projector, and sophisticated Scene API with semantic labeling. Apple Vision Pro launched in 2024, representing the current state-of-the-art in spatial computing with advanced eye tracking and hand tracking. Today, spatial mapping is integral to spatial computing, with ongoing research in collaborative SLAM for multi-user experiences.&amp;lt;ref name=&amp;quot;WikipediaSLAM&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Microsoft launched HoloLens 2 in 2019 with improved Azure Kinect sensors, and Meta Quest 3 arrived in 2023 with full-color passthrough, depth sensing via IR patterned light projector, and sophisticated Scene API with semantic labeling. Apple Vision Pro launched in 2024, representing the current state-of-the-art in spatial computing with advanced eye tracking and hand tracking. Today, spatial mapping is integral to spatial computing, with ongoing research in collaborative SLAM for multi-user experiences.&amp;lt;ref name=&amp;quot;WikipediaSLAM&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l98&quot;&gt;Line 98:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 98:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| &amp;#039;&amp;#039;&amp;#039;[[Mapping Range]]&amp;#039;&amp;#039;&amp;#039; || Controls the maximum distance from the sensor at which depth data is incorporated into the map. || 2 m – 20 m &amp;lt;ref name=&amp;quot;StereolabsDocsS2&amp;quot;/&amp;gt; || High (longer range = more data to process = higher resource usage) || Moderate (longer range can map large areas faster but may reduce accuracy at the farthest points)&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| &amp;#039;&amp;#039;&amp;#039;[[Mapping Range]]&amp;#039;&amp;#039;&amp;#039; || Controls the maximum distance from the sensor at which depth data is incorporated into the map. || 2 m – 20 m &amp;lt;ref name=&amp;quot;StereolabsDocsS2&amp;quot;/&amp;gt; || High (longer range = more data to process = higher resource usage) || Moderate (longer range can map large areas faster but may reduce accuracy at the farthest points)&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| &#039;&#039;&#039;[[Mesh Filtering]]&#039;&#039;&#039; || Post-processing to reduce polygon count (decimation) and clean mesh artifacts (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;fill holes). || Presets (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;Low, Medium, High) &amp;lt;ref name=&quot;StereolabsDocsS2&quot;/&amp;gt; || Low (reduces polygon count, leading to significant performance improvement in rendering) || Moderate (aggressive filtering can lead to loss of fine geometric detail)&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| &#039;&#039;&#039;[[Mesh Filtering]]&#039;&#039;&#039; || Post-processing to reduce polygon count (decimation) and clean mesh artifacts (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;fill holes). || Presets (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;Low, Medium, High) &amp;lt;ref name=&quot;StereolabsDocsS2&quot;/&amp;gt; || Low (reduces polygon count, leading to significant performance improvement in rendering) || Moderate (aggressive filtering can lead to loss of fine geometric detail)&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;|-&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| &amp;#039;&amp;#039;&amp;#039;[[Mesh Texturing]]&amp;#039;&amp;#039;&amp;#039; || The process of applying camera images to the mesh surface to create a photorealistic model. || On / Off &amp;lt;ref name=&amp;quot;StereolabsDocsS2&amp;quot;/&amp;gt; || High (requires storing and processing images, creating a texture map, and using more complex shaders for rendering) || High (dramatically increases visual realism)&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;| &amp;#039;&amp;#039;&amp;#039;[[Mesh Texturing]]&amp;#039;&amp;#039;&amp;#039; || The process of applying camera images to the mesh surface to create a photorealistic model. || On / Off &amp;lt;ref name=&amp;quot;StereolabsDocsS2&amp;quot;/&amp;gt; || High (requires storing and processing images, creating a texture map, and using more complex shaders for rendering) || High (dramatically increases visual realism)&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l111&quot;&gt;Line 111:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 111:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Essential Sensor Technologies ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Essential Sensor Technologies ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Modern XR devices rely on [[sensor fusion]], the process of combining data from multiple sensors to achieve a result that is more accurate and robust than could be achieved by any single sensor alone.&amp;lt;ref name=&quot;SLAMSystems&quot;&amp;gt;{{cite web |url=https://www.sbg-systems.com/glossary/slam-simultaneous-localization-and-mapping/ |title=SLAM - Simultaneous localization and mapping |publisher=SBG Systems |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;MilvusSensors&quot;&amp;gt;{{cite web |url=https://milvus.io/ai-quick-reference/what-sensors-eg-accelerometer-gyroscope-are-essential-in-ar-devices |title=What sensors (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;accelerometer, gyroscope) are essential in AR devices? |publisher=Milvus |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; The essential sensor suite includes:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Modern XR devices rely on [[sensor fusion]], the process of combining data from multiple sensors to achieve a result that is more accurate and robust than could be achieved by any single sensor alone.&amp;lt;ref name=&quot;SLAMSystems&quot;&amp;gt;{{cite web |url=https://www.sbg-systems.com/glossary/slam-simultaneous-localization-and-mapping/ |title=SLAM - Simultaneous localization and mapping |publisher=SBG Systems |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;MilvusSensors&quot;&amp;gt;{{cite web |url=https://milvus.io/ai-quick-reference/what-sensors-eg-accelerometer-gyroscope-are-essential-in-ar-devices |title=What sensors (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;accelerometer, gyroscope) are essential in AR devices? |publisher=Milvus |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; The essential sensor suite includes:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==== Depth Cameras ====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==== Depth Cameras ====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l153&quot;&gt;Line 153:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 153:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[Visual SLAM]] (vSLAM)&amp;#039;&amp;#039;&amp;#039;: Uses one or more cameras to track visual features.&amp;lt;ref name=&amp;quot;MathWorksSLAM&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[Visual SLAM]] (vSLAM)&amp;#039;&amp;#039;&amp;#039;: Uses one or more cameras to track visual features.&amp;lt;ref name=&amp;quot;MathWorksSLAM&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[LiDAR SLAM]]&amp;#039;&amp;#039;&amp;#039;: Uses a LiDAR sensor to build a precise geometric map.&amp;lt;ref name=&amp;quot;MathWorksSLAM&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;[[LiDAR SLAM]]&amp;#039;&amp;#039;&amp;#039;: Uses a LiDAR sensor to build a precise geometric map.&amp;lt;ref name=&amp;quot;MathWorksSLAM&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[Multi-Sensor SLAM]]&#039;&#039;&#039;: Fuses data from various sources (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;cameras, IMU, LiDAR) for enhanced robustness and accuracy.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;[[Multi-Sensor SLAM]]&#039;&#039;&#039;: Fuses data from various sources (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;cameras, IMU, LiDAR) for enhanced robustness and accuracy.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Spatial mapping is typically accomplished via SLAM algorithms, which build a map of the environment in real time while tracking the device&amp;#039;s position within it.&amp;lt;ref name=&amp;quot;Adeia&amp;quot;&amp;gt;{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |date=2022-03-02 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Spatial mapping is typically accomplished via SLAM algorithms, which build a map of the environment in real time while tracking the device&amp;#039;s position within it.&amp;lt;ref name=&amp;quot;Adeia&amp;quot;&amp;gt;{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |date=2022-03-02 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l237&quot;&gt;Line 237:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 237:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The principles of spatial mapping extend to a planetary scale through [[geospatial mapping]]. Instead of headset sensors, this field uses data from satellites, aircraft, drones, and ground-based sensors to create comprehensive 3D maps of the Earth.&amp;lt;ref name=&amp;quot;Matrack&amp;quot;&amp;gt;{{cite web |url=https://matrackinc.com/geospatial-mapping/ |title=What is Geospatial Mapping and How does it Work? |publisher=Matrack Inc. |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Spyrosoft&amp;quot;&amp;gt;{{cite web |url=https://spyro-soft.com/blog/geospatial/what-is-geospatial-mapping-and-how-does-it-work |title=What is Geospatial Mapping and How Does It Work? |publisher=Spyrosoft |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The principles of spatial mapping extend to a planetary scale through [[geospatial mapping]]. Instead of headset sensors, this field uses data from satellites, aircraft, drones, and ground-based sensors to create comprehensive 3D maps of the Earth.&amp;lt;ref name=&amp;quot;Matrack&amp;quot;&amp;gt;{{cite web |url=https://matrackinc.com/geospatial-mapping/ |title=What is Geospatial Mapping and How does it Work? |publisher=Matrack Inc. |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;Spyrosoft&amp;quot;&amp;gt;{{cite web |url=https://spyro-soft.com/blog/geospatial/what-is-geospatial-mapping-and-how-does-it-work |title=What is Geospatial Mapping and How Does It Work? |publisher=Spyrosoft |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* This large-scale mapping is critical for urban planning, precision agriculture, environmental monitoring (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;tracking deforestation or glacial retreat), and disaster management.&amp;lt;ref name=&quot;Matrack&quot;/&amp;gt;&amp;lt;ref name=&quot;Faro&quot;&amp;gt;{{cite web |url=https://www.faro.com/en/Resource-Library/Article/Past-Present-and-Future-of-Geospatial-Mapping |title=The Past, Present and Future of Geospatial Mapping |publisher=FARO |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;SurveyTransfer&quot;&amp;gt;{{cite web |url=https://surveytransfer.net/geospatial-applications/ |title=10 Key Industries Using Geospatial Applications |publisher=SurveyTransfer |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* This large-scale mapping is critical for urban planning, precision agriculture, environmental monitoring (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;tracking deforestation or glacial retreat), and disaster management.&amp;lt;ref name=&quot;Matrack&quot;/&amp;gt;&amp;lt;ref name=&quot;Faro&quot;&amp;gt;{{cite web |url=https://www.faro.com/en/Resource-Library/Article/Past-Present-and-Future-of-Geospatial-Mapping |title=The Past, Present and Future of Geospatial Mapping |publisher=FARO |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;SurveyTransfer&quot;&amp;gt;{{cite web |url=https://surveytransfer.net/geospatial-applications/ |title=10 Key Industries Using Geospatial Applications |publisher=SurveyTransfer |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Projects like Google&amp;#039;s AlphaEarth Foundations fuse vast quantities of satellite imagery, radar, and 3D laser mapping data into a unified digital representation of the planet, allowing scientists to track global changes with remarkable precision.&amp;lt;ref name=&amp;quot;AlphaEarth&amp;quot;&amp;gt;{{cite web |url=https://deepmind.google/discover/blog/alphaearth-foundations-helps-map-our-planet-in-unprecedented-detail/ |title=AlphaEarth Foundations helps map our planet in unprecedented detail |publisher=Google DeepMind |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Projects like Google&amp;#039;s AlphaEarth Foundations fuse vast quantities of satellite imagery, radar, and 3D laser mapping data into a unified digital representation of the planet, allowing scientists to track global changes with remarkable precision.&amp;lt;ref name=&amp;quot;AlphaEarth&amp;quot;&amp;gt;{{cite web |url=https://deepmind.google/discover/blog/alphaearth-foundations-helps-map-our-planet-in-unprecedented-detail/ |title=AlphaEarth Foundations helps map our planet in unprecedented detail |publisher=Google DeepMind |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Pokemon Go achieved unprecedented scale with 800+ million downloads and 600+ million active users, using Visual Positioning System with centimeter-level accuracy. Niantic built a Large Geospatial Model with over 50 million neural networks trained on location data comprising 150+ trillion parameters for planet-scale 3D mapping from pedestrian perspective.&amp;lt;ref name=&amp;quot;niantic&amp;quot;&amp;gt;{{cite web |url=https://nianticlabs.com/news/largegeospatialmodel |title=Large Geospatial Model |publisher=Niantic Labs |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* Pokemon Go achieved unprecedented scale with 800+ million downloads and 600+ million active users, using Visual Positioning System with centimeter-level accuracy. Niantic built a Large Geospatial Model with over 50 million neural networks trained on location data comprising 150+ trillion parameters for planet-scale 3D mapping from pedestrian perspective.&amp;lt;ref name=&amp;quot;niantic&amp;quot;&amp;gt;{{cite web |url=https://nianticlabs.com/news/largegeospatialmodel |title=Large Geospatial Model |publisher=Niantic Labs |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l363&quot;&gt;Line 363:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 363:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The next major frontier for spatial mapping is the shift from purely geometric understanding (knowing &amp;#039;&amp;#039;where&amp;#039;&amp;#039; a surface is) to &amp;#039;&amp;#039;&amp;#039;[[semantic understanding]]&amp;#039;&amp;#039;&amp;#039; (knowing &amp;#039;&amp;#039;what&amp;#039;&amp;#039; a surface is).&amp;lt;ref name=&amp;quot;SpatialAI&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;FutureDirections1&amp;quot;&amp;gt;{{cite web |url=https://arxiv.org/html/2508.20477v1 |title=What is Spatial Computing? A Survey on the Foundations and State-of-the-Art |publisher=arXiv |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; This involves leveraging [[AI]] and [[machine learning]] algorithms to analyze the map data and automatically identify, classify, and label objects and architectural elements in real-time—for example, recognizing a surface as a &amp;quot;couch,&amp;quot; an opening as a &amp;quot;door,&amp;quot; or an object as a &amp;quot;chair.&amp;quot;&amp;lt;ref name=&amp;quot;MetaHelp&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpatialAI&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The next major frontier for spatial mapping is the shift from purely geometric understanding (knowing &amp;#039;&amp;#039;where&amp;#039;&amp;#039; a surface is) to &amp;#039;&amp;#039;&amp;#039;[[semantic understanding]]&amp;#039;&amp;#039;&amp;#039; (knowing &amp;#039;&amp;#039;what&amp;#039;&amp;#039; a surface is).&amp;lt;ref name=&amp;quot;SpatialAI&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;FutureDirections1&amp;quot;&amp;gt;{{cite web |url=https://arxiv.org/html/2508.20477v1 |title=What is Spatial Computing? A Survey on the Foundations and State-of-the-Art |publisher=arXiv |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; This involves leveraging [[AI]] and [[machine learning]] algorithms to analyze the map data and automatically identify, classify, and label objects and architectural elements in real-time—for example, recognizing a surface as a &amp;quot;couch,&amp;quot; an opening as a &amp;quot;door,&amp;quot; or an object as a &amp;quot;chair.&amp;quot;&amp;lt;ref name=&amp;quot;MetaHelp&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;SpatialAI&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This capability, already emerging in platforms like Meta Quest&#039;s Scene API, will enable a new generation of intelligent and context-aware XR experiences. Virtual characters could realistically interact with the environment (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;sitting on a recognized couch), applications could automatically adapt their UI to the user&#039;s specific room layout, and digital assistants could understand commands related to physical objects (&quot;place the virtual screen on that wall&quot;).&amp;lt;ref name=&quot;FutureDirections1&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This capability, already emerging in platforms like Meta Quest&#039;s Scene API, will enable a new generation of intelligent and context-aware XR experiences. Virtual characters could realistically interact with the environment (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;sitting on a recognized couch), applications could automatically adapt their UI to the user&#039;s specific room layout, and digital assistants could understand commands related to physical objects (&quot;place the virtual screen on that wall&quot;).&amp;lt;ref name=&quot;FutureDirections1&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Neural Rendering and AI-Powered Mapping ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Neural Rendering and AI-Powered Mapping ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l435&quot;&gt;Line 435:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 435:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;HoloLensYouTube&amp;quot;&amp;gt;{{cite web |url=https://www.youtube.com/watch?v=zff2aQ1RaVo |title=HoloLens - What is Spatial Mapping? |publisher=Microsoft |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;HoloLensYouTube&amp;quot;&amp;gt;{{cite web |url=https://www.youtube.com/watch?v=zff2aQ1RaVo |title=HoloLens - What is Spatial Mapping? |publisher=Microsoft |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;SLAMSystems&amp;quot;&amp;gt;{{cite web |url=https://www.sbg-systems.com/glossary/slam-simultaneous-localization-and-mapping/ |title=SLAM - Simultaneous localization and mapping |publisher=SBG Systems |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;SLAMSystems&amp;quot;&amp;gt;{{cite web |url=https://www.sbg-systems.com/glossary/slam-simultaneous-localization-and-mapping/ |title=SLAM - Simultaneous localization and mapping |publisher=SBG Systems |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&quot;MilvusSensors&quot;&amp;gt;{{cite web |url=https://milvus.io/ai-quick-reference/what-sensors-eg-accelerometer-gyroscope-are-essential-in-ar-devices |title=What sensors (&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;e.g., &lt;/del&gt;accelerometer, gyroscope) are essential in AR devices? |publisher=Milvus |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&quot;MilvusSensors&quot;&amp;gt;{{cite web |url=https://milvus.io/ai-quick-reference/what-sensors-eg-accelerometer-gyroscope-are-essential-in-ar-devices |title=What sensors (&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;for example &lt;/ins&gt;accelerometer, gyroscope) are essential in AR devices? |publisher=Milvus |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;MathWorksSLAM&amp;quot;&amp;gt;{{cite web |url=https://www.mathworks.com/discovery/slam.html |title=What Is SLAM (Simultaneous Localization and Mapping)? |publisher=MathWorks |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;MathWorksSLAM&amp;quot;&amp;gt;{{cite web |url=https://www.mathworks.com/discovery/slam.html |title=What Is SLAM (Simultaneous Localization and Mapping)? |publisher=MathWorks |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;PressbooksSensors&amp;quot;&amp;gt;{{cite web |url=https://pressbooks.pub/augmentedrealitymarketing/chapter/sensors-for-arvr/ |title=Sensors for AR/VR |publisher=Pressbooks |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;ref name=&amp;quot;PressbooksSensors&amp;quot;&amp;gt;{{cite web |url=https://pressbooks.pub/augmentedrealitymarketing/chapter/sensors-for-arvr/ |title=Sensors for AR/VR |publisher=Pressbooks |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36669&amp;oldid=prev</id>
		<title>Xinreality at 22:48, 27 October 2025</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36669&amp;oldid=prev"/>
		<updated>2025-10-27T22:48:31Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:48, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l2&quot;&gt;Line 2:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 2:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:spatial mapping2.jpg|300px|right]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:spatial mapping2.jpg|300px|right]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&#039;&#039;&#039;Spatial mapping&#039;&#039;&#039;, also known as &#039;&#039;&#039;3D reconstruction&#039;&#039;&#039; in some contexts, is a core technology that enables a device to create a three-dimensional (3D) digital model of its physical environment in real-time.&amp;lt;ref name=&quot;StereolabsDocsS2&quot;&amp;gt;{{cite web |url=https://www.stereolabs.com/docs/spatial-mapping |title=Spatial Mapping Overview |publisher=Stereolabs |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;ZaubarLexicon&quot;&amp;gt;{{cite web |url=https://about.zaubar.com/en/xr-ai-lexicon/spatial-mapping |title=Spatial Mapping |publisher=Zaubar |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; It is a fundamental component of [[augmented reality]] (AR), [[virtual reality]] (VR), [[mixed reality]] (MR), and [[robotics]], allowing systems to perceive, understand, and interact with the physical world.&amp;lt;ref name=&quot;StereolabsDocsS1&quot;&amp;gt;{{cite web |url=https://www.stereolabs.com/docs/spatial-mapping |title=Spatial Mapping Overview |publisher=Stereolabs |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;EducativeIO&quot;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;gt;{{cite web |url=https:&lt;/del&gt;/&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;/www.educative.io/answers/spatial-mapping-and-3d-reconstruction-in-augmented-reality |title=Spatial mapping and 3D reconstruction in augmented reality |publisher=Educative |access-date=2023}}&amp;lt;/ref&lt;/del&gt;&amp;gt; By creating a detailed digital map of surfaces, objects, and their spatial relationships, spatial mapping serves as the technological bridge between the digital and physical realms, allowing for the realistic blending of virtual and real worlds.&amp;lt;ref name=&quot;StereolabsDocsS1&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&#039;&#039;&#039;Spatial mapping&#039;&#039;&#039;, also known as &#039;&#039;&#039;3D reconstruction&#039;&#039;&#039; in some contexts, is a core technology that enables a device to create a three-dimensional (3D) digital model of its physical environment in real-time.&amp;lt;ref name=&quot;StereolabsDocsS2&quot;&amp;gt;{{cite web |url=https://www.stereolabs.com/docs/spatial-mapping |title=Spatial Mapping Overview |publisher=Stereolabs |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;ZaubarLexicon&quot;&amp;gt;{{cite web |url=https://about.zaubar.com/en/xr-ai-lexicon/spatial-mapping |title=Spatial Mapping |publisher=Zaubar |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; It is a fundamental component of [[augmented reality]] (AR), [[virtual reality]] (VR), [[mixed reality]] (MR), and [[robotics]], allowing systems to perceive, understand, and interact with the physical world.&amp;lt;ref name=&quot;StereolabsDocsS1&quot;&amp;gt;{{cite web |url=https://www.stereolabs.com/docs/spatial-mapping |title=Spatial Mapping Overview |publisher=Stereolabs |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&quot;EducativeIO&quot;/&amp;gt; By creating a detailed digital map of surfaces, objects, and their spatial relationships, spatial mapping serves as the technological bridge between the digital and physical realms, allowing for the realistic blending of virtual and real worlds.&amp;lt;ref name=&quot;StereolabsDocsS1&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The process is dynamic and continuous; a device equipped for spatial mapping constantly scans its surroundings with a suite of sensors, building and refining its 3D map over time by incorporating new depth and positional data as it moves through an environment.&amp;lt;ref name=&amp;quot;StereolabsDocsS2&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;UnityDocs&amp;quot;&amp;gt;{{cite web |url=https://docs.unity3d.com/2019.1/Documentation/Manual/SpatialMapping.html |title=Spatial Mapping concepts |publisher=Unity |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; This capability is foundational to the field of [[extended reality]] (XR), enabling applications to place digital content accurately, facilitate realistic physical interactions like [[occlusion]] and collision, and provide environmental context for immersive experiences.&amp;lt;ref name=&amp;quot;ZaubarLexicon&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;EducativeIO&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The process is dynamic and continuous; a device equipped for spatial mapping constantly scans its surroundings with a suite of sensors, building and refining its 3D map over time by incorporating new depth and positional data as it moves through an environment.&amp;lt;ref name=&amp;quot;StereolabsDocsS2&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;UnityDocs&amp;quot;&amp;gt;{{cite web |url=https://docs.unity3d.com/2019.1/Documentation/Manual/SpatialMapping.html |title=Spatial Mapping concepts |publisher=Unity |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; This capability is foundational to the field of [[extended reality]] (XR), enabling applications to place digital content accurately, facilitate realistic physical interactions like [[occlusion]] and collision, and provide environmental context for immersive experiences.&amp;lt;ref name=&amp;quot;ZaubarLexicon&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;EducativeIO&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
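The diff above describes spatial mapping as a continuous loop: the device keeps acquiring depth and pose data and folds each new measurement into an evolving 3D map. A minimal sketch of that integration step follows; every name in it (backproject, integrate, the voxel-set map) is a hypothetical illustration, not any vendor's API.

```python
# Hedged sketch: fold successive posed depth frames into one evolving map.
import numpy as np

VOXEL_SIZE = 0.05  # 5 cm grid; an arbitrary choice for this sketch

def backproject(depth, fx, fy, cx, cy):
    """Turn a depth image (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth return

def integrate(voxel_set, depth, pose, intrinsics):
    """Fold one posed depth frame into the running map (a set of voxel keys)."""
    pts = backproject(depth, *intrinsics)
    world = pts @ pose[:3, :3].T + pose[:3, 3]       # camera -> world frame
    keys = np.floor(world / VOXEL_SIZE).astype(int)  # quantize to the grid
    voxel_set.update(map(tuple, keys))               # the map refines over time
    return voxel_set
```

A production pipeline would fuse measurements (for example with a truncated signed distance function) rather than simply accumulating occupied voxels, but the loop structure is the same: back-project depth, transform by the current pose, merge into the map.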
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36667&amp;oldid=prev</id>
		<title>Xinreality: /* Future Directions */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36667&amp;oldid=prev"/>
		<updated>2025-10-27T22:46:00Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Future Directions&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:46, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l361&quot;&gt;Line 361:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 361:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Semantic Spatial Understanding ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Semantic Spatial Understanding ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The next major frontier for spatial mapping is the shift from purely geometric understanding (knowing &#039;&#039;where&#039;&#039; a surface is) to &#039;&#039;&#039;semantic understanding&#039;&#039;&#039; (knowing &#039;&#039;what&#039;&#039; a surface is).&amp;lt;ref name=&quot;SpatialAI&quot;/&amp;gt;&amp;lt;ref name=&quot;FutureDirections1&quot;&amp;gt;{{cite web |url=https://arxiv.org/html/2508.20477v1 |title=What is Spatial Computing? A Survey on the Foundations and State-of-the-Art |publisher=arXiv |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; This involves leveraging [[AI]] and [[machine learning]] algorithms to analyze the map data and automatically identify, classify, and label objects and architectural elements in real-time—for example, recognizing a surface as a &quot;couch,&quot; an opening as a &quot;door,&quot; or an object as a &quot;chair.&quot;&amp;lt;ref name=&quot;MetaHelp&quot;/&amp;gt;&amp;lt;ref name=&quot;SpatialAI&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The next major frontier for spatial mapping is the shift from purely geometric understanding (knowing &#039;&#039;where&#039;&#039; a surface is) to &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;semantic understanding&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]]&lt;/ins&gt;&#039;&#039;&#039; (knowing &#039;&#039;what&#039;&#039; a surface is).&amp;lt;ref name=&quot;SpatialAI&quot;/&amp;gt;&amp;lt;ref name=&quot;FutureDirections1&quot;&amp;gt;{{cite web |url=https://arxiv.org/html/2508.20477v1 |title=What is Spatial Computing? A Survey on the Foundations and State-of-the-Art |publisher=arXiv |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; This involves leveraging [[AI]] and [[machine learning]] algorithms to analyze the map data and automatically identify, classify, and label objects and architectural elements in real-time—for example, recognizing a surface as a &quot;couch,&quot; an opening as a &quot;door,&quot; or an object as a &quot;chair.&quot;&amp;lt;ref name=&quot;MetaHelp&quot;/&amp;gt;&amp;lt;ref name=&quot;SpatialAI&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This capability, already emerging in platforms like Meta Quest&amp;#039;s Scene API, will enable a new generation of intelligent and context-aware XR experiences. Virtual characters could realistically interact with the environment (e.g., sitting on a recognized couch), applications could automatically adapt their UI to the user&amp;#039;s specific room layout, and digital assistants could understand commands related to physical objects (&amp;quot;place the virtual screen on that wall&amp;quot;).&amp;lt;ref name=&amp;quot;FutureDirections1&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This capability, already emerging in platforms like Meta Quest&amp;#039;s Scene API, will enable a new generation of intelligent and context-aware XR experiences. Virtual characters could realistically interact with the environment (e.g., sitting on a recognized couch), applications could automatically adapt their UI to the user&amp;#039;s specific room layout, and digital assistants could understand commands related to physical objects (&amp;quot;place the virtual screen on that wall&amp;quot;).&amp;lt;ref name=&amp;quot;FutureDirections1&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l367&quot;&gt;Line 367:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 367:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Neural Rendering and AI-Powered Mapping ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Neural Rendering and AI-Powered Mapping ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Neural Radiance Fields (NeRF) revolutionized 3D scene representation when introduced by UC Berkeley researchers in March 2020, representing a scene as a continuous volumetric function that produces photorealistic novel views through a neural network. Key variants address limitations: Instant-NGP (2022) reduces training from hours to seconds through multi-resolution hash encoding, while Mip-NeRF (2021) adds anti-aliasing for better rendering at multiple scales.&amp;lt;ref name=&quot;nerf&quot;&amp;gt;{{cite web |url=https://www.matthewtancik.com/nerf |title=NeRF: Neural Radiance Fields |publisher=UC Berkeley |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Neural Radiance Fields&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;(NeRF) revolutionized 3D scene representation when introduced by UC Berkeley researchers in March 2020, representing a scene as a continuous volumetric function that produces photorealistic novel views through a neural network. Key variants address limitations: Instant-NGP (2022) reduces training from hours to seconds through multi-resolution hash encoding, while Mip-NeRF (2021) adds anti-aliasing for better rendering at multiple scales.&amp;lt;ref name=&quot;nerf&quot;&amp;gt;{{cite web |url=https://www.matthewtancik.com/nerf |title=NeRF: Neural Radiance Fields |publisher=UC Berkeley |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;3D Gaussian Splatting emerged in August 2023 as a breakthrough, achieving real-time performance at 30+ fps for 1080p rendering—100 to 1000 times faster than NeRF. The technique represents scenes using millions of 3D Gaussians in an explicit representation versus NeRF&amp;#039;s implicit neural encoding, enabling real-time rendering crucial for interactive AR/VR applications.&amp;lt;ref name=&amp;quot;gaussian&amp;quot;&amp;gt;{{cite web |url=https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/ |title=3D Gaussian Splatting for Real-Time Radiance Field Rendering |publisher=INRIA |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;3D Gaussian Splatting emerged in August 2023 as a breakthrough, achieving real-time performance at 30+ fps for 1080p rendering—100 to 1000 times faster than NeRF. The technique represents scenes using millions of 3D Gaussians in an explicit representation versus NeRF&amp;#039;s implicit neural encoding, enabling real-time rendering crucial for interactive AR/VR applications.&amp;lt;ref name=&amp;quot;gaussian&amp;quot;&amp;gt;{{cite web |url=https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/ |title=3D Gaussian Splatting for Real-Time Radiance Field Rendering |publisher=INRIA |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l373&quot;&gt;Line 373:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 373:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== The Role of Edge Computing and the Cloud ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== The Role of Edge Computing and the Cloud ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To overcome the processing and power limitations of mobile XR devices, computationally intensive spatial mapping tasks will increasingly be offloaded to the network edge or the cloud.&amp;lt;ref name=&quot;AdeiaBlog&quot;&amp;gt;{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; In this &#039;&#039;&#039;split-compute&#039;&#039;&#039; model, a lightweight headset would be responsible for capturing raw sensor data and sending it to a powerful nearby edge server. The server would then perform the heavy lifting—running SLAM algorithms, generating the mesh, and performing semantic analysis—and stream the resulting map data back to the device with extremely low latency.&amp;lt;ref name=&quot;AdeiaBlog&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;To overcome the processing and power limitations of mobile XR devices, computationally intensive spatial mapping tasks will increasingly be offloaded to the network edge or the cloud.&amp;lt;ref name=&quot;AdeiaBlog&quot;&amp;gt;{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |access-date=2025-10-23}}&amp;lt;/ref&amp;gt; In this &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;split-compute&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]]&lt;/ins&gt;&#039;&#039;&#039; model, a lightweight headset would be responsible for capturing raw sensor data and sending it to a powerful nearby edge server. The server would then perform the heavy lifting—running SLAM algorithms, generating the mesh, and performing semantic analysis—and stream the resulting map data back to the device with extremely low latency.&amp;lt;ref name=&quot;AdeiaBlog&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Furthermore, the cloud will play a crucial role in creating and hosting large-scale, persistent spatial maps, often referred to as &#039;&#039;&#039;[[digital twin]]s&#039;&#039;&#039; or the &#039;&#039;&#039;AR Cloud&#039;&#039;&#039;. By aggregating and merging map data from many users, it will be possible to build and maintain a shared, persistent digital replica of real-world locations, enabling multi-user experiences at an unprecedented scale.&amp;lt;ref name=&quot;MagicLeapLegal&quot;/&amp;gt;&amp;lt;ref name=&quot;AdeiaBlog&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Furthermore, the cloud will play a crucial role in creating and hosting large-scale, persistent spatial maps, often referred to as &#039;&#039;&#039;[[digital twin]]s&#039;&#039;&#039; or the &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;AR Cloud&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]]&lt;/ins&gt;&#039;&#039;&#039;. By aggregating and merging map data from many users, it will be possible to build and maintain a shared, persistent digital replica of real-world locations, enabling multi-user experiences at an unprecedented scale.&amp;lt;ref name=&quot;MagicLeapLegal&quot;/&amp;gt;&amp;lt;ref name=&quot;AdeiaBlog&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Standardization and Interoperability ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Standardization and Interoperability ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
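The NeRF paragraph in the diff above is dense; the core mechanism is volume rendering along camera rays. Below is a hedged sketch of that single step, where `field` stands in for the trained network mapping sample positions and a view direction to density and color (all names are illustrative, not from any NeRF codebase).

```python
# Minimal sketch of NeRF-style volume rendering along one camera ray.
import numpy as np

def render_ray(field, origin, direction, near=0.1, far=5.0, n_samples=64):
    """Alpha-composite color along one ray, the core of NeRF rendering."""
    t = np.linspace(near, far, n_samples)          # sample depths along the ray
    pts = origin + t[:, None] * direction          # (n_samples, 3) positions
    sigma, rgb = field(pts, direction)             # density (n,) and color (n, 3)
    delta = (far - near) / n_samples               # uniform sample spacing
    alpha = 1.0 - np.exp(-sigma * delta)           # per-segment opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
    weights = trans * alpha                        # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)    # composited pixel color

# A stand-in "field": uniform fog with one color, just to exercise the sketch.
fog = lambda pts, d: (np.full(len(pts), 0.5),
                      np.tile([0.9, 0.4, 0.2], (len(pts), 1)))
pixel = render_ray(fog, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

3D Gaussian Splatting keeps the same alpha-compositing image formation but replaces per-ray network queries with rasterization of explicitly stored Gaussians, which is where its large speedup over NeRF comes from.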
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36666&amp;oldid=prev</id>
		<title>Xinreality: /* Sensor and Algorithmic Constraints */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36666&amp;oldid=prev"/>
		<updated>2025-10-27T22:45:08Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Sensor and Algorithmic Constraints&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:45, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l337&quot;&gt;Line 337:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 337:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Problematic Surfaces&amp;#039;&amp;#039;&amp;#039;: Onboard sensors often struggle with certain types of materials. Transparent surfaces like glass, highly reflective surfaces like mirrors, and textureless or dark, light-absorbing surfaces can fail to return usable data to depth sensors, resulting in gaps or inaccuracies in the map.&amp;lt;ref name=&amp;quot;UnityDocs&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;HoloLensSpaces&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;MagicLeapMappingDocs&amp;quot;&amp;gt;{{cite web |url=https://developer-docs.magicleap.cloud/docs/guides/features/spatial-mapping/ |title=Real-time World Sensing |publisher=Magic Leap |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Problematic Surfaces&amp;#039;&amp;#039;&amp;#039;: Onboard sensors often struggle with certain types of materials. Transparent surfaces like glass, highly reflective surfaces like mirrors, and textureless or dark, light-absorbing surfaces can fail to return usable data to depth sensors, resulting in gaps or inaccuracies in the map.&amp;lt;ref name=&amp;quot;UnityDocs&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;HoloLensSpaces&amp;quot;/&amp;gt;&amp;lt;ref name=&amp;quot;MagicLeapMappingDocs&amp;quot;&amp;gt;{{cite web |url=https://developer-docs.magicleap.cloud/docs/guides/features/spatial-mapping/ |title=Real-time World Sensing |publisher=Magic Leap |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Drift&#039;&#039;&#039;: Tracking systems that rely on [[odometry]] (estimating motion from sensor data) are susceptible to small, accumulating errors over time. This phenomenon, known as &#039;&#039;&#039;drift&#039;&#039;&#039;, can cause the digital map to become misaligned with the real world. While algorithms use techniques like [[loop closure]] to correct for drift, it can still be a significant problem in large, feature-poor environments (like a long, white hallway).&amp;lt;ref name=&quot;MilvusSLAM&quot;/&amp;gt;&amp;lt;ref name=&quot;SLAMSystems&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Drift&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]]&lt;/ins&gt;&#039;&#039;&#039;: Tracking systems that rely on [[odometry]] (estimating motion from sensor data) are susceptible to small, accumulating errors over time. This phenomenon, known as &#039;&#039;&#039;drift&#039;&#039;&#039;, can cause the digital map to become misaligned with the real world. While algorithms use techniques like [[loop closure]] to correct for drift, it can still be a significant problem in large, feature-poor environments (like a long, white hallway).&amp;lt;ref name=&quot;MilvusSLAM&quot;/&amp;gt;&amp;lt;ref name=&quot;SLAMSystems&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Scale and Boundaries&amp;#039;&amp;#039;&amp;#039;: The way spatial data is aggregated and defined can influence analytical results, a concept known in geography as the [[Modifiable Areal Unit Problem]] (MAUP). This problem highlights that statistical outcomes can change based on the shape and scale of the zones used for analysis, which has parallels in how room-scale maps are chunked and interpreted.&amp;lt;ref name=&amp;quot;MAUP1&amp;quot;&amp;gt;{{cite web |url=https://pmc.ncbi.nlm.nih.gov/articles/PMC7254930/ |title=The modifiable areal unit problem in ecological community data |publisher=PLOS ONE |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;MAUP2&amp;quot;&amp;gt;{{cite web |url=https://zenn-wong.medium.com/the-challenges-of-using-maps-in-policy-making-510e3fcb8eb3 |title=The Challenges of Using Maps in Policy-Making |publisher=Medium |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &amp;#039;&amp;#039;&amp;#039;Scale and Boundaries&amp;#039;&amp;#039;&amp;#039;: The way spatial data is aggregated and defined can influence analytical results, a concept known in geography as the [[Modifiable Areal Unit Problem]] (MAUP). This problem highlights that statistical outcomes can change based on the shape and scale of the zones used for analysis, which has parallels in how room-scale maps are chunked and interpreted.&amp;lt;ref name=&amp;quot;MAUP1&amp;quot;&amp;gt;{{cite web |url=https://pmc.ncbi.nlm.nih.gov/articles/PMC7254930/ |title=The modifiable areal unit problem in ecological community data |publisher=PLOS ONE |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&amp;lt;ref name=&amp;quot;MAUP2&amp;quot;&amp;gt;{{cite web |url=https://zenn-wong.medium.com/the-challenges-of-using-maps-in-policy-making-510e3fcb8eb3 |title=The Challenges of Using Maps in Policy-Making |publisher=Medium |access-date=2025-10-23}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
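The drift bullet in the diff above can be made concrete with a toy example: per-step odometry noise compounds into a growing offset, and a loop-closure event lets the system redistribute the detected error. Real SLAM systems solve a pose-graph optimization; this sketch (all values invented) only spreads the error linearly along the trajectory.

```python
# Toy illustration of odometric drift and a crude loop-closure correction.
import numpy as np

rng = np.random.default_rng(0)
step = np.array([1.0, 0.0])                      # intended 1 m step per move
poses = [np.zeros(2)]
for _ in range(100):
    measured = step + rng.normal(0.0, 0.02, 2)   # small per-step odometry error
    poses.append(poses[-1] + measured)           # errors accumulate: drift
poses = np.array(poses)

# Loop closure: the device re-recognizes a known anchor whose true position
# is (100, 0), revealing how far the estimate has drifted.
error = poses[-1] - np.array([100.0, 0.0])
blend = np.linspace(0.0, 1.0, len(poses))[:, None]
corrected = poses - blend * error                # spread error along the path
print("drift before:", np.linalg.norm(error),
      "after:", np.linalg.norm(corrected[-1] - np.array([100.0, 0.0])))
```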
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36665&amp;oldid=prev</id>
		<title>Xinreality: /* Meta Quest */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36665&amp;oldid=prev"/>
		<updated>2025-10-27T22:44:25Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Meta Quest&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:44, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l301&quot;&gt;Line 301:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 301:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Meta Quest ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Meta Quest ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Meta Quest evolved from pure virtual reality to sophisticated mixed reality through progressive spatial mapping capabilities. Quest 3 launched in 2023 with revolutionary spatial capabilities, featuring Snapdragon XR2 Gen 2 providing 2× GPU performance versus Quest 2, dual LCD displays at 2064×2208 per eye, and a sophisticated sensor array including two 4MP RGB color cameras for full-color passthrough, four hybrid monochrome/IR cameras for tracking, and one IR patterned light emitter serving as a depth sensor.&amp;lt;ref name=&quot;MetaHelp&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Meta Quest&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;evolved from pure virtual reality to sophisticated mixed reality through progressive spatial mapping capabilities. Quest 3 launched in 2023 with revolutionary spatial capabilities, featuring Snapdragon XR2 Gen 2 providing 2× GPU performance versus Quest 2, dual LCD displays at 2064×2208 per eye, and a sophisticated sensor array including two 4MP RGB color cameras for full-color passthrough, four hybrid monochrome/IR cameras for tracking, and one IR patterned light emitter serving as a depth sensor.&amp;lt;ref name=&quot;MetaHelp&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The Scene API enables semantic understanding of physical environments through system-generated scene models with semantic labels including floor, ceiling, walls, desk, couch, table, window, and lamp. The API provides bounded 2D entities defining surfaces like walls and floors with 2D boundaries and bounding boxes, bounded 3D entities for objects like furniture with 3D bounding boxes, and room layout with automatic room structure detection and classification.&amp;lt;ref name=&amp;quot;MetaHelp&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The Scene API enables semantic understanding of physical environments through system-generated scene models with semantic labels including floor, ceiling, walls, desk, couch, table, window, and lamp. The API provides bounded 2D entities defining surfaces like walls and floors with 2D boundaries and bounding boxes, bounded 3D entities for objects like furniture with 3D bounding boxes, and room layout with automatic room structure detection and classification.&amp;lt;ref name=&amp;quot;MetaHelp&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
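The Scene API description above maps naturally onto a small data model: planar entities carrying 2D boundaries, volumetric entities carrying 3D boxes, and semantic labels on both. The types below are hypothetical illustrations of that structure, not Meta's actual classes.

```python
# Hedged sketch of a Scene-API-like scene model (illustrative types only).
from dataclasses import dataclass
from enum import Enum, auto

class Label(Enum):
    FLOOR = auto(); CEILING = auto(); WALL = auto(); DESK = auto()
    COUCH = auto(); TABLE = auto(); WINDOW = auto(); LAMP = auto()

@dataclass
class Bounded2D:
    """Planar entity (wall, floor): a boundary polygon in the entity's plane."""
    label: Label
    boundary: list            # [(x, y), ...] polygon vertices

@dataclass
class Bounded3D:
    """Volumetric entity (furniture): an axis-aligned 3D bounding box."""
    label: Label
    box_min: tuple            # (x, y, z) minimum corner
    box_max: tuple            # (x, y, z) maximum corner

def surfaces_for_placement(scene):
    """The kind of query an app might run against the scene model."""
    return [e for e in scene if e.label in (Label.DESK, Label.TABLE)]
```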
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36664&amp;oldid=prev</id>
		<title>Xinreality: /* Google ARCore */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36664&amp;oldid=prev"/>
		<updated>2025-10-27T22:44:09Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Google ARCore&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:44, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l291&quot;&gt;Line 291:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 291:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Google ARCore ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Google ARCore ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Google ARCore launched in 2017 as the company&#039;s platform-agnostic augmented reality SDK, providing cross-platform APIs for Android, iOS, Unity, and Web after discontinuing the hardware-dependent Project Tango. ARCore achieves spatial understanding without specialized sensors through depth-from-motion algorithms that compare multiple device images from different angles, combining visual information with IMU measurements running at 1000 Hz. The system performs motion tracking at 60 fps using Simultaneous Localization and Mapping with visual and inertial data fusion.&amp;lt;ref name=&quot;arcore&quot;&amp;gt;{{cite web |url=https://developers.google.com/ar |title=ARCore Overview |publisher=Google Developers |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Google ARCore&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;launched in 2017 as the company&#039;s platform-agnostic augmented reality SDK, providing cross-platform APIs for Android, iOS, Unity, and Web after discontinuing the hardware-dependent Project Tango. ARCore achieves spatial understanding without specialized sensors through depth-from-motion algorithms that compare multiple device images from different angles, combining visual information with IMU measurements running at 1000 Hz. The system performs motion tracking at 60 fps using Simultaneous Localization and Mapping with visual and inertial data fusion.&amp;lt;ref name=&quot;arcore&quot;&amp;gt;{{cite web |url=https://developers.google.com/ar |title=ARCore Overview |publisher=Google Developers |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The Depth API public launch in ARCore 1.18 (June 2020) brought occlusion capabilities to hundreds of millions of compatible Android devices. The depth-from-motion algorithm creates depth images using the RGB camera and device movement, selectively using machine learning to increase depth processing even with minimal motion. Depth images store 16-bit unsigned integers per pixel representing the distance from the camera to the environment, with a depth range of 0 to 65 meters and the most accurate results from 0.5 to 5 meters from real-world scenes.&amp;lt;ref name=&amp;quot;RoadtoVR&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The Depth API public launch in ARCore 1.18 (June 2020) brought occlusion capabilities to hundreds of millions of compatible Android devices. The depth-from-motion algorithm creates depth images using the RGB camera and device movement, selectively using machine learning to increase depth processing even with minimal motion. Depth images store 16-bit unsigned integers per pixel representing the distance from the camera to the environment, with a depth range of 0 to 65 meters and the most accurate results from 0.5 to 5 meters from real-world scenes.&amp;lt;ref name=&amp;quot;RoadtoVR&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
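The Depth API paragraph above specifies the raw format: one 16-bit unsigned integer per pixel encoding distance, with a 0 to 65 meter range. A small sketch of consuming such an image follows; the millimeter unit is an assumption inferred from that range (65535 mm is roughly 65.5 m), not a quoted spec, and the function names are illustrative.

```python
# Hedged sketch of reading a uint16 depth image as described in the text.
import numpy as np

def depth_meters(depth_image: np.ndarray, u: int, v: int) -> float:
    """Distance at pixel (u, v) of a uint16 depth image, in meters."""
    raw = int(depth_image[v, u])       # one 16-bit unsigned depth sample
    return raw / 1000.0                # assumed millimeter unit -> meters

def confident_mask(depth_image: np.ndarray) -> np.ndarray:
    """Keep only the 0.5 to 5 m band the text describes as most accurate."""
    meters = depth_image.astype(np.float32) / 1000.0
    return (meters >= 0.5) & (meters <= 5.0)
```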
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36663&amp;oldid=prev</id>
		<title>Xinreality: /* Apple ARKit */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36663&amp;oldid=prev"/>
		<updated>2025-10-27T22:43:17Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Apple ARKit&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:43, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l279&quot;&gt;Line 279:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 279:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Apple ARKit ===&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;=== Apple ARKit ===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Apple ARKit democratized spatial mapping by bringing sophisticated AR capabilities to hundreds of millions of iOS devices without requiring specialized hardware. ARKit 1 launched in 2017 with iOS 11, providing basic horizontal plane detection, Visual Inertial Odometry for tracking, and scene understanding on devices with A9 processors or later.&amp;lt;ref name=&quot;AndreasJakl&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Apple ARKit&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;democratized spatial mapping by bringing sophisticated AR capabilities to hundreds of millions of iOS devices without requiring specialized hardware. ARKit 1 launched in 2017 with iOS 11, providing basic horizontal plane detection, Visual Inertial Odometry for tracking, and scene understanding on devices with A9 processors or later.&amp;lt;ref name=&quot;AndreasJakl&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;ARKit 3.5 in iOS 13.4 (March 2020) marked a revolutionary leap with the Scene Geometry API, powered by LiDAR on iPad Pro 4th generation. This first LiDAR-powered spatial mapping provided instant plane detection without scanning, triangle mesh reconstruction with classification into semantic categories (wall, floor, ceiling, table, seat, window, door), enhanced raycasting with scene geometry, and per-pixel depth information through the Depth API. The system could exclude people from reconstructed meshes and provided an effective range of up to 5 meters.&amp;lt;ref name=&amp;quot;AppleDeveloper&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;ARKit 3.5 in iOS 13.4 (March 2020) marked a revolutionary leap with the Scene Geometry API, powered by LiDAR on iPad Pro 4th generation. This first LiDAR-powered spatial mapping provided instant plane detection without scanning, triangle mesh reconstruction with classification into semantic categories (wall, floor, ceiling, table, seat, window, door), enhanced raycasting with scene geometry, and per-pixel depth information through the Depth API. The system could exclude people from reconstructed meshes and provided an effective range of up to 5 meters.&amp;lt;ref name=&amp;quot;AppleDeveloper&amp;quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
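The Scene Geometry paragraph above centers on a classified triangle mesh. Below is a language-neutral sketch of the kind of per-class filtering such a mesh enables, for example keeping only floor faces for physics colliders; plain arrays stand in for ARKit's Swift mesh anchors, and all names are illustrative.

```python
# Hedged sketch: filter a classified triangle mesh down to one semantic class.
import numpy as np

CLASSES = ("wall", "floor", "ceiling", "table", "seat", "window", "door")

def submesh_by_class(vertices, faces, face_class, wanted="floor"):
    """vertices: (N, 3) floats; faces: (M, 3) vertex indices;
    face_class: (M,) ints indexing into CLASSES. Returns one class's faces."""
    keep = face_class == CLASSES.index(wanted)
    return vertices, faces[keep]

# For example, an app could hand only the floor sub-mesh to its physics engine:
# floor_vtx, floor_faces = submesh_by_class(vtx, faces, classes, "floor")
```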
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36662&amp;oldid=prev</id>
		<title>Xinreality: /* Foundational Algorithms */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36662&amp;oldid=prev"/>
		<updated>2025-10-27T22:42:12Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Foundational Algorithms&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:42, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l151&quot;&gt;Line 151:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 151:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Depending on the primary sensors used, SLAM can be categorized into several types:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Depending on the primary sensors used, SLAM can be categorized into several types:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Visual SLAM (vSLAM)&#039;&#039;&#039;: Uses one or more cameras to track visual features.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Visual SLAM&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;(vSLAM)&#039;&#039;&#039;: Uses one or more cameras to track visual features.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;LiDAR SLAM&#039;&#039;&#039;: Uses a LiDAR sensor to build a precise geometric map.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;LiDAR SLAM&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]]&lt;/ins&gt;&#039;&#039;&#039;: Uses a LiDAR sensor to build a precise geometric map.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;Multi-Sensor SLAM&#039;&#039;&#039;: Fuses data from various sources (e.g., cameras, IMU, LiDAR) for enhanced robustness and accuracy.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Multi-Sensor SLAM&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]]&lt;/ins&gt;&#039;&#039;&#039;: Fuses data from various sources (e.g., cameras, IMU, LiDAR) for enhanced robustness and accuracy.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Spatial mapping is typically accomplished via SLAM algorithms, which build a map of the environment in real time while tracking the device&amp;#039;s position within it.&amp;lt;ref name=&amp;quot;Adeia&amp;quot;&amp;gt;{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |date=2022-03-02 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Spatial mapping is typically accomplished via SLAM algorithms, which build a map of the environment in real time while tracking the device&amp;#039;s position within it.&amp;lt;ref name=&amp;quot;Adeia&amp;quot;&amp;gt;{{cite web |url=https://adeia.com/blog/spatial-mapping-empowering-the-future-of-ar |title=Spatial Mapping: Empowering the Future of AR |publisher=Adeia |date=2022-03-02 |access-date=2025-10-27}}&amp;lt;/ref&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
	<entry>
		<id>https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36660&amp;oldid=prev</id>
		<title>Xinreality: /* RGB Cameras */</title>
		<link rel="alternate" type="text/html" href="https://vrarwiki.com/index.php?title=Spatial_mapping&amp;diff=36660&amp;oldid=prev"/>
		<updated>2025-10-27T22:40:51Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;RGB Cameras&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 22:40, 27 October 2025&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l127&quot;&gt;Line 127:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 127:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==== RGB Cameras ====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==== RGB Cameras ====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Standard color cameras capture visual information such as color and texture. This data is crucial for texturing the 3D mesh to create a photorealistic model.&amp;lt;ref name=&quot;ZaubarLexicon&quot;/&amp;gt; Additionally, the images from RGB cameras are used by [[computer vision]] algorithms to track visual features in the environment, which is the basis of &#039;&#039;&#039;Visual SLAM (vSLAM)&#039;&#039;&#039;.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Standard color cameras capture visual information such as color and texture. This data is crucial for texturing the 3D mesh to create a photorealistic model.&amp;lt;ref name=&quot;ZaubarLexicon&quot;/&amp;gt; Additionally, the images from RGB cameras are used by [[computer vision]] algorithms to track visual features in the environment, which is the basis of &#039;&#039;&#039;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[&lt;/ins&gt;Visual SLAM&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;]] &lt;/ins&gt;(vSLAM)&#039;&#039;&#039;.&amp;lt;ref name=&quot;MathWorksSLAM&quot;/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==== Inertial Measurement Units (IMUs) ====&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;==== Inertial Measurement Units (IMUs) ====&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Xinreality</name></author>
	</entry>
</feed>