This research develops the science needed to enhance mobile augmented reality applications with (a) Spatial Analysis at UNT, (b) Navigation, (c) Geospatial Analysis, (d) Situational Awareness, (e) Intelligent Signs, (f) Evacuation, and (g) Emergency Response. The projects advance visualization techniques that let mobile applications augment the user's view of the physical world while promoting contextualized 3D visualization, spatial knowledge acquisition, and cognitive mapping, thereby enhancing situational awareness. The mobile AR application provides information to support effective decision-making during emergencies for both building occupants and emergency responders. A range of use cases is tested, including data visualization and immersive data spaces, in-situ visualization of 3D models, and full-scale architectural form visualization. The objective of the effort is to use AR displays on mobile devices and headsets to guide users with virtual overlays, paths, and waypoints. It also involves developing algorithms for layering, situational awareness, and location sensing.
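As an illustration of the overlay-and-waypoint objective, the following Unity C# sketch (all names hypothetical, not the project's actual code) renders a guidance path as floating markers plus a connecting line, given an ordered list of world-space points supplied by the navigation layer:

    using UnityEngine;

    // Renders an AR guidance path: a marker at each waypoint and a line
    // connecting them. The path points come from the navigation layer.
    public class WaypointOverlay : MonoBehaviour
    {
        public GameObject waypointMarkerPrefab; // e.g., a floating arrow or ring
        public LineRenderer pathLine;           // draws the route between waypoints

        public void ShowPath(Vector3[] pathPoints)
        {
            // Place a marker at each waypoint so the user can see the route in AR.
            foreach (Vector3 point in pathPoints)
                Instantiate(waypointMarkerPrefab, point, Quaternion.identity, transform);

            // Connect the waypoints with a single overlaid line.
            pathLine.positionCount = pathPoints.Length;
            pathLine.SetPositions(pathPoints);
        }
    }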
The goal of this NSF-funded project is to develop and evaluate a collaborative immersive VR environment for active shooter response on the UNT and BSU campuses. The contribution lies in our approach of combining computer-simulated agents (AI agents) and user-controlled avatars in a collaborative virtual environment for conducting emergency response training for civilians and security personnel. This immersive collaborative VR environment offers a unique method for training for emergencies and campus safety.
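One way to realize the mix of AI agents and user-controlled avatars is to let both drive the same avatar type through a shared controller interface. A minimal Unity C# sketch (names hypothetical, not the project's actual architecture):

    using UnityEngine;

    // Both AI agents and human participants steer the same Avatar type.
    public interface IAvatarController
    {
        Vector3 GetMoveDirection(Avatar self);
    }

    public class Avatar : MonoBehaviour
    {
        public float speed = 1.5f;
        public IAvatarController controller; // AI or human, assigned at spawn

        void Update()
        {
            transform.position += controller.GetMoveDirection(this) * speed * Time.deltaTime;
        }
    }

    // AI-driven: flee directly away from the reported threat position.
    public class FleeController : IAvatarController
    {
        public Vector3 threatPosition;
        public Vector3 GetMoveDirection(Avatar self) =>
            (self.transform.position - threatPosition).normalized;
    }

    // Human-driven: read the participant's input device.
    public class PlayerController : IAvatarController
    {
        public Vector3 GetMoveDirection(Avatar self) =>
            new Vector3(Input.GetAxis("Horizontal"), 0f, Input.GetAxis("Vertical"));
    }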
The goal of this project is to develop intelligent, data-driven techniques; accurate labeled data is needed for training and evaluating deep learning models. To better engage scientists in labeling and to facilitate the process, we use AR/VR tools. This work builds upon previous efforts by creating an integrated tool suite that enhances the value, situational awareness, accessibility, and understanding of science data connected with Arctic science using virtual reality (VR) and augmented reality (AR) technologies. It designs, develops, and evaluates AR/VR tools for exploring and annotating data using tablets, mobile devices, and the HoloLens, supported by algorithms for layering, situational awareness, and location sensing.
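A minimal Unity C# sketch of the in-situ annotation step (names and the tap-to-raycast input are assumptions; HoloLens input would use gaze or air-tap rather than a screen tap):

    using UnityEngine;

    // Drops a labeled marker where the scientist points: a tap raycasts from
    // the camera into the scene and the hit position is paired with the label.
    public class DataAnnotator : MonoBehaviour
    {
        public GameObject annotationMarkerPrefab;

        public void Annotate(string label)
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                var marker = Instantiate(annotationMarkerPrefab, hit.point, Quaternion.identity);
                marker.name = label; // persist label + hit.point to the dataset here
            }
        }
    }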
The goal of this project is to develop and evaluate an AI virtual tutor using generative AI. We are creating a realistic VR environment that provides offline tutoring assistance to students in courses such as Introduction to Data Science and Introduction to Computation with Python. The AI tutor integrates ChatGPT and supports interactive lip synchronization and hand movement; the user can ask the AI virtual tutor any question. Our research will allow artificial intelligence-powered tutors to become more versatile and accessible through voice interaction. Our aim is to create a multi-user VR environment with a tutorial mode and an interactive mode. We are also extending the VRI module to other hardware devices such as the HTC Vive, HoloLens 2, Magic Leap 2, Apple Vision Pro, and Meta Quest 3.
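A minimal sketch of the tutor's question-answer round trip, assuming the public OpenAI Chat Completions REST endpoint (the project's actual integration, error handling, JSON escaping, and lip-sync triggering are omitted):

    using System.Collections;
    using System.Text;
    using UnityEngine;
    using UnityEngine.Networking;

    // Sends a student's question to the chat endpoint and hands the raw
    // JSON reply to a callback for parsing and speech/lip-sync playback.
    public class TutorChatClient : MonoBehaviour
    {
        const string Endpoint = "https://api.openai.com/v1/chat/completions";
        [SerializeField] string apiKey; // supplied at runtime, never hard-coded

        public IEnumerator Ask(string question, System.Action<string> onReply)
        {
            // NOTE: question must be JSON-escaped in a real implementation.
            string body = "{\"model\":\"gpt-3.5-turbo\",\"messages\":"
                        + "[{\"role\":\"user\",\"content\":\"" + question + "\"}]}";
            using (UnityWebRequest req = new UnityWebRequest(Endpoint, "POST"))
            {
                req.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(body));
                req.downloadHandler = new DownloadHandlerBuffer();
                req.SetRequestHeader("Content-Type", "application/json");
                req.SetRequestHeader("Authorization", "Bearer " + apiKey);
                yield return req.SendWebRequest();
                if (req.result == UnityWebRequest.Result.Success)
                    onReply(req.downloadHandler.text); // JSON containing the tutor's answer
            }
        }
    }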
The goal of this project is to develop and evaluate a geospatial mobile application for navigation and emergency response using Google Photorealistic 3D Tiles and Cesium for Unity. The mobile AR application can be used for navigation and emergency response on the UNT campus. Cesium for Unity combines the 3D geospatial capability of Cesium and 3D Tiles with the Unity ecosystem. The following objectives are being explored: 1) location-based augmented reality (AR) applications that use Google Photorealistic 3D Tiles and Cesium for Unity to provide more immersive navigation experiences and emergency information to users on campus; and 2) geospatial AR navigation: a mobile AR phone application that makes navigation more intuitive by displaying location-based content on a 3D model of the user's surroundings on the UNT campus, blending Google Photorealistic 3D Tiles seamlessly with the real world. We are also exploring integration with hardware devices such as the HoloLens 2, Magic Leap 2, Apple Vision Pro, and Meta Quest 3.
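A minimal configuration sketch, assuming the Cesium for Unity component API and Google's published 3D Tiles endpoint (the campus coordinates are approximate placeholders):

    using UnityEngine;
    using CesiumForUnity;

    // Georeferences the scene near the UNT campus and streams Google
    // Photorealistic 3D Tiles into Unity as a Cesium tileset.
    public class CampusTileLoader : MonoBehaviour
    {
        void Start()
        {
            // Anchor the Unity origin at a real-world longitude/latitude/height.
            var georeference = gameObject.AddComponent<CesiumGeoreference>();
            georeference.SetOriginLongitudeLatitudeHeight(-97.15, 33.21, 200.0); // approximate

            // Stream the photorealistic tiles as a child tileset (API key required).
            var tilesetObject = new GameObject("GooglePhotorealisticTiles");
            tilesetObject.transform.SetParent(georeference.transform);
            var tileset = tilesetObject.AddComponent<Cesium3DTileset>();
            tileset.tilesetSource = CesiumDataSource.FromUrl;
            tileset.url = "https://tile.googleapis.com/v1/3dtiles/root.json?key=YOUR_API_KEY";
        }
    }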
We implemented a mobile AR application (MARA) for indoor navigation that uses augmented reality for localization and wayfinding. The position and orientation of the device are determined from the camera view alone. For localization, we used multiple point clouds and a NavMesh with the Vuforia SDK. Navigation can be started from any position in the supported area, which spans two floors of the Discovery Park building at UNT, and the MARA can be extended to much larger areas by adding more Area Targets. The application provides information to support effective decision-making during emergencies for both building occupants and emergency responders. We are also exploring integration with hardware devices such as the HoloLens 2, Magic Leap 2, Apple Vision Pro, and Meta Quest 3.
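Once Vuforia localizes the device against an Area Target, the camera pose serves as the user's position and Unity's built-in navigation can supply the route. A minimal sketch (names hypothetical):

    using UnityEngine;
    using UnityEngine.AI;

    // Computes a walkable corridor path from the localized camera pose to a
    // destination on the NavMesh baked over the scanned floors.
    public class IndoorRouteFinder : MonoBehaviour
    {
        public Transform arCamera;    // pose comes from Vuforia localization
        public Transform destination; // e.g., a room or an emergency exit

        public Vector3[] FindRoute()
        {
            var path = new NavMeshPath();
            if (NavMesh.CalculatePath(arCamera.position, destination.position,
                                      NavMesh.AllAreas, path))
                return path.corners; // waypoints to hand to the AR overlay
            return new Vector3[0];   // no route found on the baked mesh
        }
    }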
The goal of this project is to make human biomechanics more accessible to a wider audience through the use of Unity for data visualization. Numerous sensors are still used; however, these sensors do not require cameras, which radically improves their utility for populations that cannot dedicate the space and funding to video-based motion capture. The objective of this study is to assess the validity and reliability of machine learning models in correctly recognizing squat exercise movements.
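For intuition only, a toy heuristic that learned models can be compared against: counting a squat repetition from a streamed knee-flexion angle (the thresholds and sensor source are assumptions, not the study's method):

    // Counts a repetition when the knee angle dips below a bottom threshold
    // and then returns above a top threshold.
    public class SquatCounter
    {
        const float BottomThreshold = 100f; // degrees of knee flexion, illustrative
        const float TopThreshold = 160f;

        bool inSquat;
        public int Reps { get; private set; }

        public void OnKneeAngle(float degrees)
        {
            if (!inSquat && degrees < BottomThreshold)
                inSquat = true;                       // descended into the squat
            else if (inSquat && degrees > TopThreshold)
            {
                inSquat = false;                      // stood back up
                Reps++;
            }
        }
    }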
Data analysis and crime data visualization offer a powerful approach to unraveling the complex dynamics of criminal behavior. Analyzing crime data involves a multifaceted study of different dimensions of crime, including the types of crimes committed, their frequency, and their distribution across geographic areas. By analyzing these factors, we can find patterns and hot spots that reveal concentrations of criminal activity (a minimal hot-spot sketch follows below). The projects include: 1) Analysis of Crime, 2) Common Links between COVID-19 Data and Crime Data in Baltimore, 3) COVID-19 Data Visualization, 4) Baltimore Crime Data Visualization, 5) Scientific Data Visualization, and 6) Data Analytics: Improving the Quality of Life in Urban Areas.
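The hot-spot sketch, in C#: bin incident coordinates into a latitude/longitude grid and count incidents per cell, so densely populated cells flag candidate hot spots (cell size and record shape are assumptions):

    using System.Collections.Generic;

    public static class HotspotGrid
    {
        public record Incident(double Latitude, double Longitude);

        // Returns incident counts keyed by grid cell; high counts mark hot spots.
        public static Dictionary<(int, int), int> CountByCell(
            IEnumerable<Incident> incidents, double cellSizeDegrees = 0.005)
        {
            var counts = new Dictionary<(int, int), int>();
            foreach (var i in incidents)
            {
                // Snap each incident to its grid cell.
                var cell = ((int)(i.Latitude / cellSizeDegrees),
                            (int)(i.Longitude / cellSizeDegrees));
                counts[cell] = counts.TryGetValue(cell, out int c) ? c + 1 : 1;
            }
            return counts;
        }
    }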
Early hands-on experiences with the Microsoft HoloLens augmented/mixed reality device have yielded promising results for building evacuation and crime analysis applications. A range of use cases is tested, including data visualization and immersive data spaces, in-situ visualization of 3D models, and full-scale architectural form visualization. We present how mixed reality technology can provide spatially contextualized 3D visualization that promotes knowledge acquisition and supports cognitive mapping.
Multi-user virtual reality (MUVR) environments for emergency evacuation drills are developed, including subway evacuation, airplane evacuation, school bus evacuation, a VR city, night club disaster evacuation, building evacuation, and university campus evacuation. Our applications provide an immersive collaborative virtual reality environment for performing virtual evacuation drills using head-mounted displays. The immersive collaborative VR environment offers a unique way to train for emergency situations: a participant can enter the collaborative environment hosted in the cloud and take part in an evacuation drill or a tour, which yields considerable cost advantages over large-scale real-life exercises.
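The document does not name the networking layer; assuming a package such as Photon PUN 2 for the cloud-hosted sessions, joining a drill could look like this minimal sketch (room and prefab names hypothetical):

    using UnityEngine;
    using Photon.Pun;
    using Photon.Realtime;

    // Connects to the cloud, joins (or creates) a shared drill room, and
    // spawns a networked avatar for the participant.
    public class DrillLobby : MonoBehaviourPunCallbacks
    {
        void Start() => PhotonNetwork.ConnectUsingSettings();

        public override void OnConnectedToMaster()
        {
            PhotonNetwork.JoinOrCreateRoom("SubwayEvacuationDrill",
                new RoomOptions { MaxPlayers = 20 }, TypedLobby.Default);
        }

        public override void OnJoinedRoom()
        {
            // Every participant sees everyone else's avatar in the drill.
            PhotonNetwork.Instantiate("ParticipantAvatar", Vector3.zero, Quaternion.identity);
        }
    }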
The goal of this research is to develop virtual reality instructional (VRI) modules for teaching, health care, training, and manufacturing. The projects include: 1) creating instructional curriculum modules with inquiry-based problem-solving activities and hands-on experiences grounded in gaming and virtual reality for teaching complex topics; 2) training integrated care team members to engage patients from vulnerable populations safely and efficiently; and 3) developing training modules geared toward COVID-19 testing.
Two multi-agent simulation (MAS) models are developed and evaluated, namely AvatarSim and AvatarSim2. AvatarSim was developed in Java and AvatarSim2 in C#. The AvatarSim model comprises three sub-models: a) a geometrical model, b) a social force model, and c) a fuzzy behavioral model. AvatarSim2 further combines a genetic algorithm (GA) with neural networks (NNs) and fuzzy logic (FL) to explore how intelligent agents can learn and adapt their behavior during an evacuation. Adaptive behavior focuses on individual agents changing their behavior in the environment, while shared behavior emphasizes crowd modeling and emergency behavior in the multi-agent system. The simulation results were very promising: we observed the agents use the GA and NNs to learn how to find the various exits.
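The social force component follows a standard form: each agent accelerates toward its desired exit velocity while being pushed away from nearby agents. A minimal C# sketch with illustrative constants (not AvatarSim's tuned values):

    using UnityEngine;

    // One social-force update step for a single evacuating agent.
    public class SocialForceAgent : MonoBehaviour
    {
        public Vector3 desiredVelocity;       // toward the chosen exit
        public float relaxationTime = 0.5f;   // tau: how quickly agents correct course
        public float repulsionStrength = 2.0f;
        public float repulsionRange = 0.3f;

        Vector3 velocity;

        public void Step(SocialForceAgent[] neighbors, float dt)
        {
            // Driving force: steer toward the desired velocity.
            Vector3 force = (desiredVelocity - velocity) / relaxationTime;

            // Repulsive forces: exponential push-back from nearby agents.
            foreach (var other in neighbors)
            {
                if (other == this) continue;
                Vector3 away = transform.position - other.transform.position;
                float distance = away.magnitude;
                if (distance > 0f)
                    force += away.normalized * repulsionStrength
                             * Mathf.Exp(-distance / repulsionRange);
            }

            velocity += force * dt;
            transform.position += velocity * dt;
        }
    }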
The goal of this research is to use game creation as a metaphor for creating an experimental setup to study human behavior in a megacity for emergency response, decision-making strategies, and what-if scenarios. It incorporates user-controlled characters as avatars and computer-controlled characters as agents in the megacity CVE (collaborative virtual reality environment). Virtual crowds in non-combat environments play an important role in modern military operations and often create complications for the combatant forces involved. To address this problem, we are developing a crowd simulation capable of generating crowds of non-combatant civilians that exhibit a variety of individual and group behaviors at different levels of fidelity.
The goal of this project is to explore ways to take the cyber situational awareness capability of an enterprise to the next level by developing holistic, human-centric situational awareness approaches for new systems that can achieve self-awareness. This research effort aims to identify how graphical objects (such as data-shapes) developed in accordance with an analyst's mental model can enhance the analyst's situational awareness. Humans are adept at inferring meaning from graphical objects, links, and associations among data elements. The project uses virtual reality techniques to visualize XML data through a force-directed node graph in 3D that renders and updates in real time; it can be used to visualize computer networks under cyber attack.
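A minimal Unity C# sketch of the real-time force-directed layout: nodes repel pairwise while linked nodes are pulled toward a rest length, so the graph settles into a readable 3D shape as data streams in (constants are illustrative):

    using UnityEngine;

    public class ForceDirectedLayout : MonoBehaviour
    {
        public Transform[] nodes;
        public (int a, int b)[] edges;  // index pairs into nodes
        public float repulsion = 5f;
        public float springStrength = 0.5f;
        public float restLength = 2f;

        void Update()
        {
            // Pairwise repulsion keeps unrelated nodes apart.
            for (int i = 0; i < nodes.Length; i++)
                for (int j = i + 1; j < nodes.Length; j++)
                {
                    Vector3 delta = nodes[i].position - nodes[j].position;
                    float sqrDist = Mathf.Max(delta.sqrMagnitude, 0.01f);
                    Vector3 push = delta.normalized * (repulsion / sqrDist) * Time.deltaTime;
                    nodes[i].position += push;
                    nodes[j].position -= push;
                }

            // Spring attraction pulls linked nodes toward their rest length.
            foreach (var (a, b) in edges)
            {
                Vector3 delta = nodes[b].position - nodes[a].position;
                float stretch = delta.magnitude - restLength;
                Vector3 pull = delta.normalized * (stretch * springStrength) * Time.deltaTime;
                nodes[a].position += pull;
                nodes[b].position -= pull;
            }
        }
    }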
This work presents cutting-edge augmented reality instructional (ARI) modules that overcome the visual limitations of traditional, static 2D methods of communicating evacuation plans for multilevel buildings. Using existing building features, we demonstrate how the ARI modules provide contextualized 3D visualizations that promote and support spatial knowledge acquisition and cognitive mapping, thereby enhancing situational awareness. These ARI visualizations are developed for first responders and building occupants to help increase emergency preparedness and mitigate evacuation-related risks in multilevel building rescues and safety management.