#SpatialMapping
Explore tagged Tumblr posts
Text
🎯 Ultimate ArcGIS/ArcMap Course – Master GIS Fast!
Want to master ArcGIS/ArcMap? This Udemy course provides step-by-step tutorials to help you build GIS skills fast!
💰 Discount: ONLY $12.99 (Save 48%, from $19.99)!
#ArcGISArcMapOnline #LearnMapping #GISVisualization #RemoteSensingGIS #OpenSourceMapping #GISSoftware #GISExpert #CartographySkills #SpatialMapping #GISMastery
0 notes
Photo

Charlie's Freewheels | Co-Design with Youth Cyan Station worked with youth cycling non-profit, Charlie's Freewheels, to design the interior of a new shop space at the ground level of a 40-storey condo in Regent Park. Through co-design & UX research with an engaged Youth Advisory Group (YAG), we built an architectural brief and a series of spatial concepts. The collaborative process revealed that the new space was out of line with the organization's real needs. The decision was made to remain in and modify the existing space affordably. Sometimes building is not the best solution 😲!!! Image Description: 1. Photo of a bike with a child's hands touching the gears 2. Diagram with circles visualizing the network of users in a cycling non-profit organization (internal and external) 3. Photo of an adult and child looking at a bike together 4. Diagram of the organization's history with a bike going up and down hills, text at different points 5. Drawings of three architectural plans with text labelling different spatial zones and needs #codesign #charliesfreewheels #youthdesign #empoweryouth #codesignwithyouth #nonprofit #cyclingnonprofit #cyanstation #carmartin #participartorydesign #architecture #youthnonprofit #youthadvisorygroup #youthadvisorycouncil #youthadvisoryboard #collaborativedesign #collaborativearchitecture #spatialmapping #spatial #datavisualization #architecturevisualization #architectureplans #youthcycling #communitydesign #bikerepairshop #cyclinglessons #bicycleschangelives #bicyclerepairshop #nonprofitorg #timeline @charliesfreewheels https://www.instagram.com/p/CgwwdenASY9/?igshid=NGJjMDIxMWI=
0 notes
Photo

Queensbury Street Carlton • • • • • #language #cuisine #misspelled #fujifilm #fujipro400h #pointandshoot #pointandshootcamera #120film #120filmcamera #fujiga645 #urbanexploration #psychogeography #spatialmapping #urbanmapping
3 notes
Photo

Learn Data Visualization with Tableau 10. Tableau 10 for Business Intelligence, Data Analytics and Data Science. Tableau Worksheets and Create Professional Dashboards. Get Tableau Certification. Understand Heatmap, Geographic Mapping, Impressive Barchart, Bullet Graph, Gantt Chart, Data Calendar, Circle View, General Operation. Today @ 8:30pm Check our Info : www.incegna.com Reg Link for Programs : http://www.incegna.com/contact-us Follow us on Facebook : www.facebook.com/INCEGNA/? Follow us on Instagram : https://www.instagram.com/_incegna/ For Queries : [email protected] #tableau #datavisualization #businessintelligence #Analytics #heatmaps #overlappingdata #datasource #dataprocessing #dualaxis #wafflecharts #bargraphs #dimensionfilter #GeographicalData #SpatialMapping #worksheets #Linechart #datascience #datatype #operation #datadriven #businessanalyst #tablecalculation #WMS #dualaxis #tableaufilter #patterns #bulletgraph https://www.instagram.com/p/B62oWglgjKb/?igshid=1qc6lxs73aukq
0 notes
Video
youtube
Artificial Intelligence has a huge role to play in tackling the climate crisis - @mehardahiya https://youtu.be/cUQNjf0OgiU #AI #GIS #ClimateCrisis #spatialmapping #science #podcastclip #podcast #climatechange #earth
0 notes
Text
Holiday Holograms: #2 Initial project setup tutorial
This post is all about the necessary tools and project setup you have to do in order to get ready to develop HoloLens apps and games. Depending on how good I am at explaining stuff, this will either be super handy or really useless.
What you’ll need to get started
To follow along with this you’ll need a few things to get started! They are:
A controller compatible with Unity (I’m using an Xbox One Controller)
Unity 2017.1.0f3 (you may be able to use older versions, but this is the one I'll be using)
Basic understanding of how Unity works and how to navigate it
1. HoloToolKit
The first step to start developing HoloLens stuff in Unity is to get the HoloToolKit from GitHub; it contains all the necessary scripts and prefabs you'll need to get started. The GitHub repo also has a HoloToolKit Examples package you can download and use to work out how specific things work. Download both of them as we'll use them later on.
Link to HoloToolKit GitHub: https://github.com/Microsoft/MixedRealityToolkit-Unity/releases/tag/v1.2017.1.1
2. Project setup
There are a few things you'll need to do to your Unity project in order to get started with HoloLens development:
Import both the HoloToolKit and HoloToolKitExample packages from the GitHub into your project
In the build settings change the target platform to Universal Windows Platform, set your target device to HoloLens, build type to D3D (short for Direct3D) and tick Unity C# Project.
In the player settings tick Virtual Reality supported and make sure your Virtual Reality SDK is set to Windows Holographic
Make sure that the following options are ticked in the Capabilities section:
MusicLibrary
PicturesLibrary
VideosLibrary
WebCam
Microphone
SpatialPerception
You will also need to open up a Holographic Emulation window, set it to simulate in editor and select a room to emulate.
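If you'd rather script these settings than click through the menus, Unity's editor API can apply roughly the same configuration. This is a hedged sketch using the Unity 2017-era editor API; the menu path is hypothetical, and property names changed in later Unity versions, so treat it as illustrative rather than definitive:

using UnityEditor;

// Editor-only sketch: applies roughly the build/player settings listed above.
// Assumes a Unity 2017-era editor API (names differ in later versions).
public static class HoloLensProjectSetup
{
    [MenuItem("Tools/Configure For HoloLens")] // hypothetical menu path
    public static void Configure()
    {
        // Target platform: Universal Windows Platform, D3D build type
        EditorUserBuildSettings.SwitchActiveBuildTarget(
            BuildTargetGroup.WSA, BuildTarget.WSAPlayer);
        EditorUserBuildSettings.wsaUWPBuildType = WSAUWPBuildType.D3D;

        // Player settings: enable Virtual Reality support
        PlayerSettings.virtualRealitySupported = true;

        // Capabilities from the checklist above
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.MusicLibrary, true);
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.PicturesLibrary, true);
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.VideosLibrary, true);
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.WebCam, true);
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.Microphone, true);
        PlayerSettings.WSA.SetCapability(PlayerSettings.WSACapability.SpatialPerception, true);
    }
}

Note this script is editor configuration only and must live in an Editor folder; it doesn't replace the Holographic Emulation window step.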
3. HoloToolKit Examples
The next step is to make sure all this stuff actually worked! We can do this by checking some of the examples from the HoloToolKit Examples package we downloaded earlier. Simply navigate to the HoloToolKit Examples folder, select Prototyping, then Scenes, and open the scene called “PositionAnObject”. If you run this scene you should see a floating object; press A (or the equivalent button) on your controller and the object should start moving around with you!
If you want to see the room you are supposed to be emulating you’ll need to drop in a prefab called “SpatialMapping” from HoloToolKit under the prefabs folder in the Spatial Mapping folder. It’ll take a while to load in the geometry of the room when you run the scene using the SpatialMapping prefab. If I were you I’d play around with all the different scenes and try to work out how they work.
4. That's all, folks!
That should be enough to get you started! It’s a wild world out there and I’ve still got a lot to learn but that is the absolute basics of how to get started developing for the HoloLens! If you found this post at all helpful please share it around as it took me longer than usual to write it! Also if I missed anything please let me know so I can fix it.
#Unity3d #Hololens #microsoft #unity tutorial #gamedev #tutorial #Holograms #VR #AR #MR #virtual reality #augmented reality #mixed reality
0 notes
Text
Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens
The platform game genre has undergone constant evolution, from its earliest incarnations in Donkey Kong and Pitfall to recent variations like Flappy Bird. Shigeru Miyamoto’s Super Mario Bros. is recognized as the best platform game of all time, setting a high bar for everyone who came after. The Lara Croft series built on Shigeru’s innovations by taking the standard side-scrolling platformer and expanding it into a 3D world. With mixed reality and HoloLens, we all have the opportunity to expand the world of the platform game yet again.
Standard video game conventions undergo a profound change when you put a platformer in a mixed reality environment. First of all, instead of sitting in a chair and moving your character inside your display screen, you physically follow your character as he moves around the real world. Second, the obstacles your protagonist encounters aren’t just digital ones but also physical objects in the real world, like tables and chairs and stacks of books. Third, because every room you play in effectively becomes a new level, the mixed reality platform game never runs out of levels and every level presents unique challenges. Instead of comparing scores for a certain game stage, you will need to compare how well you did in the living room—or in Jane’s kitchen or in Shigeru’s basement.
In this post, you will learn how to get started building a platform game for HoloLens using all free assets. In doing so, you will learn the basics of using Spatial Mapping to scan a room so your player character can interact with it. You will also use the slightly more advanced features of Spatial Understanding to determine characteristics of the game environment. Finally, all of this will be done in the Unity IDE (currently 5.5.0f3) with the open source HoloToolkit.
Creating your game world with Spatial Mapping
How does HoloLens make it possible for virtual objects and physical objects to interact? The HoloLens is equipped with a depth camera, similar to the Kinect v2’s depth camera, that progressively scans a room in order to create a spatial map through a technique known as spatial mapping. It uses this data about the real world to create 3D surfaces in the virtual world. Then, using its four environment-aware cameras, it positions and orients the 3D reconstruction of the room in correct relation to the player. This map is often visualized at the start of HoloLens applications as a web of lines blanketing the room the player is in. You can also sometimes trigger this visualization by simply tapping in the air in front of you while wearing the HoloLens.
To play with spatial mapping, create a new 3D project in Unity. You can call the project “3D Platform Game.” Create a new scene for this game called “main.”
Next, add the HoloToolkit Unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloToolkit to import it into the scene.
The HoloToolkit provides lots of useful helpers and shortcuts for developing a HoloLens app. Under the HoloToolkit menu, there is a Configure submenu with options to correctly rig your game for HoloLens. After being sure to save your scene and project, click on each of these options to configure your scene, your project and your capability settings. Under capabilities, you must make sure to check off SpatialPerception—otherwise spatial mapping will not work. Also, be sure to save your project after each change. If for some reason you would prefer to do this step manually, there is documentation available to walk you through it.
To add spatial mapping functionality to your game, all you need to do is drag the SpatialMapping prefab into your scene from HoloToolkit -> SpatialMapping -> Prefabs. If you build and deploy the game to your HoloLens or HoloLens Emulator now, you will be able to see the web mesh of surface reconstruction occurring.
Congratulations! You’ve created your first level.
Adding a protagonist and an Xbox Controller
The next step is to create your protagonist. If you are lucky enough to have a Mario or a Luigi rigged model, you should definitely use that. In keeping with the earlier promise to use only free assets, however, this guide will use the complimentary Ethan asset.
Go to the Unity menu and select Assets -> Import Package -> Characters. Copy the whole package into your game by clicking Import. Finally, drag the ThirdPersonController prefab from Assets -> Standard Assets -> Characters -> ThirdPersonCharacter -> Prefabs into your scene.
Next, you’ll want a Bluetooth controller to steer your character. Newer Xbox One controllers support Bluetooth. To get one to work with HoloLens, you’ll need to closely follow these directions in order to update the firmware on your controller. Then pair the controller to your HoloLens through the Settings -> Devices menu.
To support the Xbox One controller in your game, you should add another free asset. Open the Asset Store by clicking on Window -> Asset Store and search for Xbox Controller Input for HoloLens. Import this package into your project.
You can hook this up to your character with a bit of custom scripting. In your scene, select the ThirdPersonController prefab. Find the Third Person User Control script in the Inspector window and delete it. You’re going to write your own custom Control that depends on the Xbox Controller package you just imported.
In the Inspector window again, go to the bottom and click on Add Component -> New Script. Name your script ThirdPersonHoloLensControl and copy/paste the following code into it:
using UnityEngine;
using HoloLensXboxController;
using UnityStandardAssets.Characters.ThirdPerson;

public class ThirdPersonHoloLensControl : MonoBehaviour
{
    private ControllerInput controllerInput;
    private ThirdPersonCharacter m_Character;
    private Transform m_Cam;
    private Vector3 m_CamForward;
    private Vector3 m_Move;
    private bool m_Jump;

    public float RotateAroundYSpeed = 2.0f;
    public float RotateAroundXSpeed = 2.0f;
    public float RotateAroundZSpeed = 2.0f;
    public float MoveHorizontalSpeed = 1f;
    public float MoveVerticalSpeed = 1f;
    public float ScaleSpeed = 1f;

    void Start()
    {
        controllerInput = new ControllerInput(0, 0.19f);
        // get the transform of the main camera
        if (Camera.main != null)
        {
            m_Cam = Camera.main.transform;
        }
        m_Character = GetComponent<ThirdPersonCharacter>();
    }

    // Update is called once per frame
    void Update()
    {
        controllerInput.Update();
        if (!m_Jump)
        {
            m_Jump = controllerInput.GetButton(ControllerButton.A);
        }
    }

    private void FixedUpdate()
    {
        // read inputs
        float h = MoveHorizontalSpeed * controllerInput.GetAxisLeftThumbstickX();
        float v = MoveVerticalSpeed * controllerInput.GetAxisLeftThumbstickY();
        bool crouch = controllerInput.GetButton(ControllerButton.B);

        // calculate move direction to pass to character
        if (m_Cam != null)
        {
            // calculate camera relative direction to move:
            m_CamForward = Vector3.Scale(m_Cam.forward, new Vector3(1, 0, 1)).normalized;
            m_Move = v * m_CamForward + h * m_Cam.right;
        }

        // pass all parameters to the character control script
        m_Character.Move(m_Move, crouch, m_Jump);
        m_Jump = false;
    }
}
This code is a variation on the standard controller code. Now that it is attached, it will let you use a Bluetooth enabled Xbox One controller to move your character. Use the A button to jump. Use the B button to crouch.
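The interesting part of the control script is the camera-relative move calculation in FixedUpdate: the camera's forward vector is flattened onto the horizontal plane, normalized, and blended with the thumbstick axes so "forward" on the stick always means "away from the camera" regardless of head tilt. The same math can be illustrated without Unity; this framework-free sketch uses hypothetical names and plain tuples in place of Vector3:

using System;

// Framework-free sketch of the camera-relative move math above.
// Names are illustrative; not part of the Unity or HoloToolkit APIs.
public static class CameraRelativeMove
{
    // Returns the world-space move direction as (X, Y, Z).
    public static (double X, double Y, double Z) Compute(
        double camFwdX, double camFwdY, double camFwdZ,       // camera forward
        double camRightX, double camRightY, double camRightZ, // camera right
        double h, double v)                                   // thumbstick axes
    {
        // Project camera forward onto the horizontal plane (drop Y)...
        double fx = camFwdX, fz = camFwdZ;
        double len = Math.Sqrt(fx * fx + fz * fz);
        if (len > 1e-9) { fx /= len; fz /= len; } // ...and normalize.

        // m_Move = v * m_CamForward + h * m_Cam.right
        return (v * fx + h * camRightX,
                h * camRightY,
                v * fz + h * camRightZ);
    }
}

For example, with the camera pitched downward but facing +Z, pushing the stick fully forward still yields a purely horizontal +Z move.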
You now have a first level and a player character you can move with a controller: pretty much all the necessary components for a platform game. If you deploy the project as is, however, you will find that there is a small problem. Your character falls through the floor.
This happens because, while the character appears as soon as the scene starts, it actually takes a bit of time to scan the room and create meshes for the floor. If the character shows up before those meshes are placed in the scene, he will simply fall through the floor and keep falling indefinitely because there are no meshes to catch him.
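For illustration, one blunt workaround (not the approach this article takes) is to keep the character's Rigidbody kinematic until spatial mapping has produced at least one mesh, so there is a floor to land on. A sketch assuming HoloToolkit 1.x's SpatialMappingManager singleton (the exact namespace varies between toolkit versions):

using System.Collections;
using HoloToolkit.Unity;
using UnityEngine;

// Illustrative quick fix: freeze physics until the room scan has
// produced at least one mesh, then let the character drop normally.
public class WaitForFloor : MonoBehaviour
{
    private IEnumerator Start()
    {
        var body = GetComponent<Rigidbody>();
        body.isKinematic = true; // gravity off while we wait

        // GetMeshFilters() returns the mesh filters built from the scan.
        while (SpatialMappingManager.Instance.GetMeshFilters().Count == 0)
        {
            yield return null; // check again next frame
        }

        body.isKinematic = false; // a floor exists; physics takes over
    }
}

The drawback is that the character may still appear floating in mid-air before the scan completes, which is why the orchestration approach described next in the article is preferable.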
How ‘bout some spatial understanding
In order to avoid this, the app needs a bit of spatial smarts. It needs to wait until the spatial meshes are mostly completed before adding the character to the scene. It should also scan the room and find the floor so the character can be added gently rather than dropped into the room. The Spatial Understanding prefab will help you accomplish both of these requirements.
Add the Spatial Understanding prefab to your scene. It can be found in Assets -> HoloToolkit -> SpatialUnderstanding -> Prefabs.
Because the SpatialUnderstanding game object also draws a wireframe during scanning, you should disable the visual mesh used by the SpatialMapping game object by deselecting Draw Visual Mesh in its Spatial Mapping Manager script. To do this, select the SpatialMapping game object, find the Spatial Mapping Manager in the Inspector window and uncheck Draw Visual Mesh.
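If you prefer to flip this toggle from code rather than the Inspector, the same flag is exposed on the SpatialMappingManager singleton. A minimal sketch, assuming HoloToolkit 1.x where the property is named DrawVisualMeshes:

using HoloToolkit.Unity;
using UnityEngine;

// Sketch: turn off the spatial mapping wireframe at runtime instead of
// unchecking Draw Visual Mesh in the Inspector (HoloToolkit 1.x API).
public class HideMappingMesh : MonoBehaviour
{
    private void Start()
    {
        SpatialMappingManager.Instance.DrawVisualMeshes = false;
    }
}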
You now need to add some orchestration to the game to prevent the third person character from being added too soon. Select ThirdPersonController in your scene. Then go to the Inspector panel and click on Add Component -> New Script. Call your script OrchestrateGame. While this script could really be placed anywhere, attaching it to the ThirdPersonController will make it easier to manipulate your character’s properties.
Start by adding HideCharacter and ShowCharacter methods to the OrchestrateGame class. This allows you to make the character invisible until you are ready to add him to the game level (the room).
private void ShowCharacter(Vector3 placement)
{
    var ethanBody = GameObject.Find("EthanBody");
    ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = true;
    m_Character.transform.position = placement;
    var rigidBody = GetComponent<Rigidbody>();
    rigidBody.angularVelocity = Vector3.zero;
    rigidBody.velocity = Vector3.zero;
}

private void HideCharacter()
{
    var ethanBody = GameObject.Find("EthanBody");
    ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = false;
}
When the game starts, you will initially hide the character from view. More importantly, you will hook into the SpatialUnderstanding singleton and handle its ScanStateChanged event. Once the scan is done, you will use spatial understanding to correctly place the character.
private ThirdPersonCharacter m_Character;

void Start()
{
    m_Character = GetComponent<ThirdPersonCharacter>();
    SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;
    HideCharacter();
}

private void Instance_ScanStateChanged()
{
    if ((SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done)
        && SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
    {
        PlaceCharacterInGame();
    }
}
How do you decide when the scan is completed? You could set up a timer and wait for a predetermined length of time to pass. But this might provide inconsistent results. A better way is to take advantage of the spatial understanding functionality in the HoloToolkit.
Spatial understanding is constantly evaluating surfaces picked up by the spatial mapping component. You will set a threshold to decide when you have retrieved enough spatial information. Every time the Update method is called, you will evaluate whether the threshold has been met, as determined by the spatial understanding module. If it is, you call the RequestFinishScan method on SpatialUnderstanding to get it to finish scanning and set its ScanState to Done.
private bool m_isInitialized;

public float kMinAreaForComplete = 50.0f;
public float kMinHorizAreaForComplete = 25.0f;
public float kMinWallAreaForComplete = 10.0f;

// Update is called once per frame
void Update()
{
    // check if enough of the room is scanned
    if (!m_isInitialized && DoesScanMeetMinBarForCompletion)
    {
        // let service know we're done scanning
        SpatialUnderstanding.Instance.RequestFinishScan();
        m_isInitialized = true;
    }
}

public bool DoesScanMeetMinBarForCompletion
{
    get
    {
        // Only allow this when we are actually scanning
        if ((SpatialUnderstanding.Instance.ScanState != SpatialUnderstanding.ScanStates.Scanning)
            || (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding))
        {
            return false;
        }

        // Query the current playspace stats
        IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
        if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
        {
            return false;
        }
        SpatialUnderstandingDll.Imports.PlayspaceStats stats =
            SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();

        // Check our preset requirements
        if ((stats.TotalSurfaceArea > kMinAreaForComplete)
            || (stats.HorizSurfaceArea > kMinHorizAreaForComplete)
            || (stats.WallSurfaceArea > kMinWallAreaForComplete))
        {
            return true;
        }
        return false;
    }
}
Once spatial understanding has determined that enough of the room has been scanned to start the level, you can use spatial understanding one more time to determine where to place your protagonist. First, the PlaceCharacterInGame method, shown below, tries to determine the Y coordinate of the room floor. Next, the main camera object is used to determine the direction the HoloLens is facing in order to find a coordinate position two meters in front of the HoloLens. This position is combined with the Y coordinate of the floor in order to place the character gently on the ground in front of the player.
private void PlaceCharacterInGame()
{
    // use spatial understanding to find floor
    SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(
        SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignmentPtr());
    SpatialUnderstandingDll.Imports.PlayspaceAlignment alignment =
        SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignment();

    // find 2 meters in front of camera position
    var inFrontOfCamera = Camera.main.transform.position + Camera.main.transform.forward * 2.0f;

    // place character on the floor (X and Z from the point ahead, Y from the floor)
    ShowCharacter(new Vector3(inFrontOfCamera.x, alignment.FloorYValue, inFrontOfCamera.z));

    // hide mesh
    var customMesh = SpatialUnderstanding.Instance.GetComponent<SpatialUnderstandingCustomMesh>();
    customMesh.DrawProcessedMesh = false;
}
You complete the PlaceCharacterInGame method by making the meshes invisible to the player. This reinforces the illusion that your protagonist is running into and jumping over objects in the real world. The last thing needed to finish this game, level design, is unfortunately too complex to cover in this post.
Because this platform game has been developed in mixed reality, you have an interesting choice to make, however, as you design your level. You can do level design the traditional way using 3D models. Alternatively, you can also do it using real world objects which the character must run between and jump over. Finally, the best approach may involve even mixing the two.
Conclusion
To paraphrase Shakespeare, all the world’s a stage and every room in it is a level. Mixed reality has the power to create new worlds for us—but it also has the power to make us look at the cultural artifacts and conventions we already have, like the traditional platform game, in entirely new ways. Where virtual reality is largely about escapism, the secret of mixed reality may simply be that it makes us appreciate what we already have by giving us fresh eyes with which to look at it.
from DIYS http://ift.tt/2lNsLlw
0 notes
Text
Getting Started with a Mixed Reality Platformer Using Microsoft HoloLens
The platform game genre has undergone constant evolution, from its earliest incarnations in Donkey Kong and Pitfall to recent variations like Flappy Bird. Shigeru Miyamoto’s Super Mario Bros. is recognized as the best platform game of all time, setting a high bar for everyone who came after. The Lara Croft series built on Shigeru’s innovations by taking the standard side-scrolling platformer and expanding it into a 3D world. With mixed reality and HoloLens, we all have the opportunity to expand the world of the platform game yet again.
Standard video game conventions undergo a profound change when you put a platformer in a mixed reality environment. First of all, instead of sitting in a chair and moving your character inside your display screen, you physically follow your character as he moves around the real world. Second, the obstacles your protagonist encounters aren’t just digital ones but also physical objects in the real world, like tables and chairs and stacks of books. Third, because every room you play in effectively becomes a new level, the mixed reality platform game never runs out of levels and every level presents unique challenges. Instead of comparing scores for a certain game stage, you will need to compare how well you did in the living room—or in Jane’s kitchen or in Shigeru’s basement.
In this post, you will learn how to get started building a platform game for HoloLens using all free assets. In doing so, you will learn the basics of using Spatial Mapping to scan a room so your player character can interact with it. You will also use the slightly more advanced features of Spatial Understanding to determine characteristics of the game environment. Finally, all of this will be done in the Unity IDE (currently 5.5.0f3) with the open source HoloToolkit.
Creating your game world with Spatial Mapping
How does HoloLens make it possible for virtual objects and physical objects to interact? The HoloLens is equipped with a depth camera, similar to the Kinect v2’s depth camera, that progressively scans a room in order to create a spatial map through a technique known as spatial mapping. It uses this data about the real world to create 3D surfaces in the virtual world. Then, using its four environment-aware cameras, it positions and orients the 3D reconstruction of the room in correct relation to the player. This map is often visualized at the start of HoloLens applications as a web of lines blanketing the room the player is in. You can also sometimes trigger this visualization by simply tapping in the air in front of you while wearing the HoloLens.
To play with spatial mapping, create a new 3D project in Unity. You can call the project “3D Platform Game.” Create a new scene for this game called “main.”
Next, add the HoloToolkit unity package to your app. You can download the package from the HoloToolkit project’s GitHub repository. This guide uses HoloToolkit-Unity-v1.5.5.0.unitypackage. In the Unity IDE, select the Assets tab. Then click on Import Package -> Custom Package and find the download location of the HoloTookit to import it into the scene.
The HoloToolkit provides lots of useful helpers and shortcuts for developing a HoloLens app. Under the HoloToolkit menu, there is a Configure option that lets you correctly rig your game for HoloLens. After being sure to save your scene and project, click on each of these options to configure your scene, your project and your capability settings. Under capabilities, you must make sure to check off SpatialPerception—otherwise spatial mapping will not work. Also, be sure to save your project after each change. If for some reason you would prefer to do this step manually, there is documentation available to walk you through it.
To add spatial mapping functionality to your game, all you need to do is drag the SpatialMapping prefab into your scene from HoloToolkit -> SpatialMapping -> Prefabs. If you build and deploy the game to your HoloLens or HoloLens Emulator now, you will be able to see the web mesh of surface reconstruction occurring.
Congratulations! You’ve created your first level.
Adding a protagonist and an Xbox Controller
The next step is to create your protagonist. If you are lucky enough to have a Mario or a Luigi rigged model, you should definitely use that. In keeping with the earlier promise to use only free assets, however, this guide will use the complimentary Ethan asset.
Go to the Unity menu and select Assets -> Import Package -> Characters. Copy the whole package into your game by clicking Import. Finally, drag the ThirdPersonController prefab from Assets -> Standard Assets -> Characters -> ThirdPersonCharacter -> Prefabs into your scene.
Next, you’ll want a Bluetooth controller to steer your character. Newer Xbox One controllers support Bluetooth. To get one to work with HoloLens, you’ll need to closely follow these directions in order to update the firmware on your controller. Then pair the controller to your HoloLens through the Settings -> Devices menu.
To support the Xbox One controller in your game, you should add another free asset. Open the Asset Store by clicking on Window -> Asset Store and search for Xbox Controller Input for HoloLens. Import this package into your project.
You can this up to your character with a bit of custom script. In your scene, select the ThirdPersonController prefab. Find the Third Person User Control script in the Inspector window and delete it. You’re going to write your own custom Control that depends on the Xbox Controller package you just imported.
In the Inspector window again, go to the bottom and click on Add Component -> New Script. Name your script ThirdPersonHoloLensControl and copy/paste the following code into it:
using UnityEngine; using HoloLensXboxController; using UnityStandardAssets.Characters.ThirdPerson; public class ThirdPersonHoloLensControl : MonoBehaviour { private ControllerInput controllerInput; private ThirdPersonCharacter m_Character; private Transform m_Cam; private Vector3 m_CamForward; private Vector3 m_Move; private bool m_Jump; public float RotateAroundYSpeed = 2.0f; public float RotateAroundXSpeed = 2.0f; public float RotateAroundZSpeed = 2.0f; public float MoveHorizontalSpeed = 1f; public float MoveVerticalSpeed = 1f; public float ScaleSpeed = 1f; void Start() { controllerInput = new ControllerInput(0, 0.19f); // get the transform of the main camera if (Camera.main != null) { m_Cam = Camera.main.transform; } m_Character = GetComponent<ThirdPersonCharacter>(); } // Update is called once per frame void Update() { controllerInput.Update(); if (!m_Jump) { m_Jump = controllerInput.GetButton(ControllerButton.A); } } private void FixedUpdate() { // read inputs float h = MoveHorizontalSpeed * controllerInput.GetAxisLeftThumbstickX(); float v = MoveVerticalSpeed * controllerInput.GetAxisLeftThumbstickY(); bool crouch = controllerInput.GetButton(ControllerButton.B); // calculate move direction to pass to character if (m_Cam != null) { // calculate camera relative direction to move: m_CamForward = Vector3.Scale(m_Cam.forward, new Vector3(1, 0, 1)).normalized; m_Move = v * m_CamForward + h * m_Cam.right; } // pass all parameters to the character control script m_Character.Move(m_Move, crouch, m_Jump); m_Jump = false; } }
This code is a variation on the standard controller code. Now that it is attached, it will let you use a Bluetooth enabled Xbox One controller to move your character. Use the A button to jump. Use the B button to crouch.
You now have a first level and a player character you can move with a controller: pretty much all the necessary components for a platform game. If you deploy the project as is, however, you will find that there is a small problem. Your character falls through the floor.
This happens because, while the character appears as soon as the scene starts, it actually takes a bit of time to scan the room and create meshes for the floor. If the character shows up before those meshes are placed in the scene, he will simply fall through the floor and keep falling indefinitely because there are no meshes to catch him.
How ‘bout some spatial understanding
In order to avoid this, the app needs a bit of spatial smarts. It needs to wait until the spatial meshes are mostly completed before adding the character to the scene. It should also scan the room and find the floor so the character can be added gently rather than dropped into the room. The spatial understand prefab will help you to accomplish both of these requirements.
Add the Spatial Understanding prefab to your scene. It can be found in Assets -> HoloToolkit -> SpatialUnderstanding -> Prefabs.
Because the SpatialUnderstanding game object draws its own wireframe during scanning, you should disable the visual mesh drawn by the SpatialMapping game object. Select the SpatialMapping game object, find the Spatial Mapping Manager script in the Inspector window, and uncheck Draw Visual Mesh.
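If you would rather do this from code (for example, to toggle the wireframe at runtime), HoloToolkit's SpatialMappingManager singleton exposes a DrawVisualMeshes property. A minimal sketch, assuming the manager is present in your scene (note that the namespace varies between HoloToolkit versions, `HoloToolkit.Unity` in older releases and `HoloToolkit.Unity.SpatialMapping` in later ones):

```csharp
using HoloToolkit.Unity.SpatialMapping; // older HoloToolkit: HoloToolkit.Unity
using UnityEngine;

// Sketch: disable the spatial mapping wireframe from a script instead of
// unchecking Draw Visual Mesh in the Inspector.
public class HideMappingMesh : MonoBehaviour
{
    void Start()
    {
        if (SpatialMappingManager.Instance != null)
        {
            SpatialMappingManager.Instance.DrawVisualMeshes = false;
        }
    }
}
```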
You now need to add some orchestration to the game to prevent the third person character from being added too soon. Select ThirdPersonController in your scene. Then go to the Inspector panel and click on Add Component -> New Script. Call your script OrchestrateGame. While this script could really be placed anywhere, attaching it to the ThirdPersonController will make it easier to manipulate your character’s properties.
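The snippets in the next few sections all live in this one class. As a roadmap, the overall shape of OrchestrateGame looks roughly like this skeleton (method bodies are elided here and filled in as the tutorial proceeds):

```csharp
using System;
using HoloToolkit.Unity;
using UnityEngine;
using UnityStandardAssets.Characters.ThirdPerson;

// Skeleton of the OrchestrateGame script built up over the next sections.
public class OrchestrateGame : MonoBehaviour
{
    private ThirdPersonCharacter m_Character;
    private bool m_isInitialized;

    void Start() { /* hide the character, subscribe to scan-state events */ }
    void Update() { /* decide when enough of the room has been scanned */ }

    private void HideCharacter() { /* disable the character's renderer */ }
    private void ShowCharacter(Vector3 placement) { /* enable renderer, position character */ }
    private void PlaceCharacterInGame() { /* use spatial understanding to find the floor */ }
}
```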
Start by adding HideCharacter and ShowCharacter methods to the OrchestrateGame class. This allows you to make the character invisible until you are ready to add him to the game level (the room).
```csharp
private void ShowCharacter(Vector3 placement)
{
    var ethanBody = GameObject.Find("EthanBody");
    ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = true;
    m_Character.transform.position = placement;

    var rigidBody = GetComponent<Rigidbody>();
    rigidBody.angularVelocity = Vector3.zero;
    rigidBody.velocity = Vector3.zero;
}

private void HideCharacter()
{
    var ethanBody = GameObject.Find("EthanBody");
    ethanBody.GetComponent<SkinnedMeshRenderer>().enabled = false;
}
```
When the game starts, you will initially hide the character from view. More importantly, you will hook into the SpatialUnderstanding singleton and handle its ScanStateChanged event. Once the scan is done, you will use spatial understanding to correctly place the character.
```csharp
private ThirdPersonCharacter m_Character;

void Start()
{
    m_Character = GetComponent<ThirdPersonCharacter>();
    SpatialUnderstanding.Instance.ScanStateChanged += Instance_ScanStateChanged;
    HideCharacter();
}

private void Instance_ScanStateChanged()
{
    if ((SpatialUnderstanding.Instance.ScanState == SpatialUnderstanding.ScanStates.Done) &&
        SpatialUnderstanding.Instance.AllowSpatialUnderstanding)
    {
        PlaceCharacterInGame();
    }
}
```
How do you decide when the scan is completed? You could set up a timer and wait for a predetermined length of time to pass. But this might provide inconsistent results. A better way is to take advantage of the spatial understanding functionality in the HoloToolkit.
Spatial understanding constantly evaluates surfaces picked up by the spatial mapping component. You will set a threshold to decide when enough spatial information has been gathered. Every time the Update method is called, you will check whether that threshold has been met, as determined by the spatial understanding module. Once it has, you call the RequestFinishScan method on SpatialUnderstanding to finish scanning and set its ScanState to Done.
```csharp
// Note: IntPtr requires "using System;" at the top of the file.
private bool m_isInitialized;

public float kMinAreaForComplete = 50.0f;
public float kMinHorizAreaForComplete = 25.0f;
public float kMinWallAreaForComplete = 10.0f;

// Update is called once per frame
void Update()
{
    // check if enough of the room is scanned
    if (!m_isInitialized && DoesScanMeetMinBarForCompletion)
    {
        // let the service know we're done scanning
        SpatialUnderstanding.Instance.RequestFinishScan();
        m_isInitialized = true;
    }
}

public bool DoesScanMeetMinBarForCompletion
{
    get
    {
        // Only allow this when we are actually scanning
        if ((SpatialUnderstanding.Instance.ScanState != SpatialUnderstanding.ScanStates.Scanning) ||
            (!SpatialUnderstanding.Instance.AllowSpatialUnderstanding))
        {
            return false;
        }

        // Query the current playspace stats
        IntPtr statsPtr = SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStatsPtr();
        if (SpatialUnderstandingDll.Imports.QueryPlayspaceStats(statsPtr) == 0)
        {
            return false;
        }
        SpatialUnderstandingDll.Imports.PlayspaceStats stats =
            SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceStats();

        // Check our preset requirements
        if ((stats.TotalSurfaceArea > kMinAreaForComplete) ||
            (stats.HorizSurfaceArea > kMinHorizAreaForComplete) ||
            (stats.WallSurfaceArea > kMinWallAreaForComplete))
        {
            return true;
        }
        return false;
    }
}
```
Once spatial understanding has determined that enough of the room has been scanned to start the level, you can use spatial understanding one more time to decide where to place your protagonist. First, the PlaceCharacterInGame method, shown below, determines the Y coordinate of the room's floor. Next, the main camera is used to determine the direction the HoloLens is facing and to find a position two meters in front of it. That position is combined with the Y coordinate of the floor to place the character gently on the ground in front of the player.
```csharp
private void PlaceCharacterInGame()
{
    // use spatial understanding to find the floor
    SpatialUnderstandingDll.Imports.QueryPlayspaceAlignment(
        SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignmentPtr());
    SpatialUnderstandingDll.Imports.PlayspaceAlignment alignment =
        SpatialUnderstanding.Instance.UnderstandingDLL.GetStaticPlayspaceAlignment();

    // find the position 2 meters in front of the camera
    var inFrontOfCamera = Camera.main.transform.position + Camera.main.transform.forward * 2.0f;

    // place the character on the floor, 2 meters ahead: combine the
    // camera-relative x/z with the floor's y coordinate
    ShowCharacter(new Vector3(inFrontOfCamera.x, alignment.FloorYValue, inFrontOfCamera.z));

    // hide the spatial understanding mesh
    var customMesh = SpatialUnderstanding.Instance.GetComponent<SpatialUnderstandingCustomMesh>();
    customMesh.DrawProcessedMesh = false;
}
```
You complete the PlaceCharacterInGame method by making the meshes invisible to the player. This reinforces the illusion that your protagonist is running into and jumping over objects in the real world. The last thing needed to finish this game, level design, is unfortunately too complex to cover in this post.
Because this platform game has been developed in mixed reality, however, you have an interesting choice to make as you design your levels. You can do level design the traditional way, using 3D models. Alternatively, you can do it with real-world objects the character must run between and jump over. The best approach may be to mix the two.
Conclusion
To paraphrase Shakespeare, all the world’s a stage and every room in it is a level. Mixed reality has the power to create new worlds for us—but it also has the power to make us look at the cultural artifacts and conventions we already have, like the traditional platform game, in entirely new ways. Where virtual reality is largely about escapism, the secret of mixed reality may simply be that it makes us appreciate what we already have by giving us fresh eyes to look at it.
from DIYS http://ift.tt/2lNsLlw
Ultimate QGIS 3 Course – Master GIS Fast!
Want to master QGIS 3? This Udemy course provides step-by-step tutorials to help you build GIS skills fast!
💰 Discount: ONLY $12.99 (Save 48%)! 👉 Join Now!
#QGISOnline #LearnMapping #GISVisualization #RemoteSensingGIS #OpenSourceMapping #GISSoftware #GISExpert #CartographySkills #SpatialMapping #GISMastery
Spatial Consequences - The Hidden Blueprint - DD
Alvin Toffler’s Third Wave chapter, The Hidden Blueprint articulates the extensive influence of the industrial revolution on the political systems of representation and the duality of implementing principles explicit to the second wave structured on systems of the first wave. The systems of representation consist of segmented, centralized organization that gave priority to an overarching singular intention as opposed to the process leading up to consequence. This in turn established an obscure relationship between the sum of its individual parts and its final verdict.
https://spatial-consequences.squarespace.com
(at MADA Monash University Art Design & Architecture)
#spatialconsequence #alvintoffler #thethirdwave #mapping #architecturestudent #diagram #makanbadawi #spatialmapping #architecture #drawing
Spatial Consequences - Vitra Fire Station [Zaha Hadid] - DD • • “Zaha Hadid’s first realized project: the Vitra Fire Station, was a manifestation of the deconstructivist language explored through her paintings, establishing a relationship between form and space. The structure consists of sequential formal manipulations that conceive a cinematic experience of space through the contraction and expansion of physical and visual spatial boundaries. This in turn creates distinct spatial characteristics respective to an individual's position within space.” • https://spatial-consequences.squarespace.com/ • #makanbadawi #zahahadid #architecture #mapping #drawing #diagram #spatialconsequence #spatialmapping #performative #configuration @monasharchitecture @monashada (at MADA Monash University Art Design & Architecture)
#architecture #zahahadid #drawing #diagram #spatialmapping #performative #makanbadawi #configuration #spatialconsequence #mapping
Spatial Consequences - John Cage 4’33” - DD
“The nature of the performance investigates the presumption that every performance would bear a unique consequence in relation to time and space.”
https://spatial-consequences.squarespace.com/
(at MADA Monash University Art Design & Architecture)
#diagram #performative #configuration #spatialmapping #mapping #spatialconsequence #architecture #drawing #makanbadawi #composer
Spatial Consequences - Dance Steps - SN
Brass shoe prints with allocated dance steps are inlaid into the footpath, inviting the public to engage in dance, which in turn creates a flux of interactions throughout the boulevard. This mapping shows the individual's degree of engagement through opacity and sensitivity to various stimuli.
https://spatial-consequences.squarespace.com
(at MADA Monash University Art Design & Architecture)
#dance #jackmackie #spatialmapping #diagrams #spatialconsequence #makanbadawi #drawing #social #architecture