Of all environmental hazards, flooding from high-impact storm events such as hurricanes, typhoons, and extreme precipitation is the leading cause of damage and economic loss across the globe. Coastal communities frequently impacted by flooding need high-quality geospatial models of the built and natural environment to devise sound mitigation plans. In this session, we will discuss how to combine LiDAR technologies and AI tools to create geospatial digital twins with rich flood-related semantic information for building resilient coastal communities.
Many places in the United States are embracing scooters as a last-mile, carbon-neutral transportation solution. This option is particularly appealing given the repercussions of the pandemic on mass transportation. However, it is questionable whether current city designs and configurations are ready for it. Pedestrians, especially those with disabilities, could become increasingly vulnerable to these new modes of transportation. This smart micro-mobility presentation will introduce our work with a number of community partners to investigate connected technological solutions (such as apps for collision warnings and computer vision), VR simulations, road-infrastructure changes, and human factors to improve the safety of micro-mobility solutions.
Manually annotating complex-scene point-cloud datasets is both costly and error-prone. To reduce the reliance on labeled data, a new model called SnapshotNet is proposed as a self-supervised feature-learning approach, combined with the downstream task of semantic segmentation under minimal supervision. This talk will elaborate on the model's three-stage pipeline: 1) capturing snapshots from the point-cloud scene; 2) self-supervised feature learning using a new pretext task called multi-fields-of-view contrasting; and 3) weakly supervised segmentation using a voting procedure. The effectiveness of the proposed model compared to state-of-the-art methods for weakly supervised point-cloud semantic segmentation will be discussed in detail.
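The voting idea in the final stage can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the talk's actual implementation: it assumes each point appears in several overlapping snapshots, each snapshot contributes one class prediction per point, and the final label is the majority vote.

```python
import numpy as np

def vote_labels(point_ids, snapshot_preds, num_classes):
    """Fuse per-snapshot predictions into one label per point by majority vote.

    point_ids:      (S, P) int array -- global index of each point in each snapshot
    snapshot_preds: (S, P) int array -- predicted class for each point in each snapshot
    (hypothetical shapes chosen for this sketch)
    """
    num_points = point_ids.max() + 1
    votes = np.zeros((num_points, num_classes), dtype=np.int64)
    for ids, preds in zip(point_ids, snapshot_preds):
        np.add.at(votes, (ids, preds), 1)  # accumulate one vote per prediction
    return votes.argmax(axis=1)            # most-voted class per point
```

For example, a point predicted as class 1 by two overlapping snapshots keeps label 1 even if no single snapshot covers the whole scene.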
A scalable and efficient visualization pipeline is essential to any interactive point-cloud application. However, developing a point-cloud rendering pipeline is both difficult and time-consuming due to the high performance requirements of interactive applications. This talk will present a Unity component that generalizes the point-cloud visualization pipeline as a black box for the Unity development platform. The work features novel algorithmic improvements that further boost graphics performance, and the resulting pipeline is evaluated quantitatively on large point-cloud datasets to ensure good interactive performance.
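One common way rendering pipelines keep large point clouds interactive is to reduce point counts before drawing, for example via voxel-grid downsampling. The sketch below shows that general technique in NumPy as a language-agnostic illustration; it is not the Unity component's actual algorithm.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by keeping one representative (centroid) per voxel.

    points:     (N, 3) float array of xyz coordinates
    voxel_size: edge length of the cubic voxels
    """
    keys = np.floor(points / voxel_size).astype(np.int64)  # voxel index per point
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)                  # sum the points in each voxel
    return centroids / counts[:, None]                     # centroid per occupied voxel
```

Larger voxel sizes trade detail for frame rate, which is why such downsampling is often tied to camera distance in level-of-detail schemes.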
Leveraging an infrastructure digital twin with IoT data and the dynamic locations of human agents, a two-way information flow can be established for indoor navigation with accurate location tracking, personalized route planning, and turn-by-turn voice instructions for visually impaired persons and other users. This talk will showcase mobile visualizations, compartmentalized according to human agents' tasks and responsibilities, that utilize AR and mixed reality. The technology is designed to integrate infrastructure digital twins (iTwin) with accessible services (sensing and computing) to enable real-time 3D virtual representation and monitoring of building status and human locations, as well as compliance with codes and safety regulations.