 
Title
[BK Seminar] 2/3 (Fri) Prof. Hoam Chung (Monash University), "Flying Drones in Confined Spaces – aerodynamics, control, and shared autonomy"
Date
2023.01.27
Author
School of Mechanical Engineering
Post Content

We encourage all members of the School of Mechanical Engineering to attend.


▣ Topic: Flying Drones in Confined Spaces – aerodynamics, control, and shared autonomy

Speaker: Prof. Hoam Chung

Affiliation: Monash University

Date/Time: Feb. 3, 2023 (Fri), 17:00

Venue: Room D508, Engineering Building 4

Host: Prof. Jongbaeg Kim

▣ Abstract

Drones have become an indispensable tool in many industrial inspection tasks thanks to their affordability and versatility. Unlike full-size aircraft, drones are often required to fly in close proximity to large structures such as buildings and bridges, and sometimes in enclosed environments such as tunnels and caves. When flying near structures, the flow of rotor downwash is altered by those structures, creating aerodynamic disturbances. In this research, we use computational fluid dynamics (CFD) and physical experiments to investigate fluid interactions and aerodynamic disturbances. The results confirm significant changes in the thrust coefficient, blade by blade and rotor by rotor, depending on the location of the drone with respect to a structure. To mitigate such disturbances in a tunnel, an integral backstepping (IBS) controller is designed and implemented to enable quadcopters to fly robustly in tunnel-like confined environments. Localisation is provided by cross-sectional Hough scan matching combined with a simple kinematic Kalman filter, producing reliable vertical and lateral position information in real time.
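The localisation pipeline described above pairs scan-matched position measurements with a kinematic Kalman filter. As a minimal illustration (not the talk's actual implementation), the sketch below runs a constant-velocity Kalman filter on one lateral axis; the timestep and noise levels are assumed values chosen for the example.

```python
import numpy as np

def kalman_1d(measurements, dt=0.05, q=0.1, r=0.04):
    """Constant-velocity (kinematic) Kalman filter for one position axis.

    State x = [position, velocity]. Each measurement is a position,
    e.g. a lateral offset obtained from cross-sectional scan matching.
    q and r are assumed process/measurement noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process noise covariance
    R = np.array([[r]])                            # measurement noise covariance
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict step: propagate state and covariance through the model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the new position measurement.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates

# Noisy position measurements around a constant 0.5 m lateral offset.
rng = np.random.default_rng(0)
zs = 0.5 + 0.02 * rng.standard_normal(200)
est = kalman_1d(zs)
```

The filtered position settles near the true offset while smoothing the measurement noise, which is the role the kinematic filter plays alongside the scan matcher.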

Pilots face difficulties when operating drones in confined spaces due to the complex rotorcraft dynamics, challenges in depth perception, and additional aerodynamic disturbances. We propose a shared autonomy approach to assist pilots in safely landing a UAV under conditions where depth perception is difficult and safe landing zones are limited. Our approach comprises two modules: a perception module that encodes information from two RGB-D cameras into a compressed latent representation, and a policy module trained with the reinforcement learning algorithm TD3 to discern the pilot's intent and provide control inputs that augment the user's input to land the UAV safely. The policy module is trained in simulation using a population of simulated users. Simulated users are sampled from a parametric model with four parameters capturing a pilot's tendency to conform to the assistant, proficiency, aggressiveness, and speed. We conduct a user study (n = 28) in which human participants were tasked with landing a physical UAV on one of several platforms under challenging viewing conditions. The assistant, trained with only simulated user data, improved the task success rate from 51.4% to 98.2% despite having no a priori knowledge of the human participants' goal or the structure of the environment. With the proposed assistant, participants, regardless of prior piloting experience, performed with a proficiency greater than that of the most experienced unassisted participants.
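The abstract mentions that simulated users are drawn from a four-parameter model. The sketch below illustrates one plausible way such a population could be sampled and used to generate pilot commands; the parameter ranges and the blending rule are assumptions for illustration, not the model presented in the talk.

```python
import random
from dataclasses import dataclass

@dataclass
class SimulatedPilot:
    """Hypothetical four-parameter pilot model (parameter names from the abstract)."""
    conformance: float     # tendency to follow the assistant's suggestion
    proficiency: float     # accuracy of the pilot's own command
    aggressiveness: float  # how strongly the pilot pushes toward the goal
    speed: float           # scaling on command magnitude

def sample_pilot(rng):
    # Sampling ranges are illustrative assumptions, not from the paper.
    return SimulatedPilot(
        conformance=rng.uniform(0.0, 1.0),
        proficiency=rng.uniform(0.3, 1.0),
        aggressiveness=rng.uniform(0.2, 1.0),
        speed=rng.uniform(0.5, 1.5),
    )

def pilot_command(pilot, ideal, assist, rng):
    """Blend a noisy goal-directed command with the assistant's input.

    Lower proficiency adds more noise; higher conformance weights the
    assistant's suggestion more heavily. Purely an assumed model.
    """
    noise = (1.0 - pilot.proficiency) * rng.gauss(0.0, 0.5)
    own = pilot.aggressiveness * pilot.speed * ideal + noise
    return (1.0 - pilot.conformance) * own + pilot.conformance * assist

# Sample a small population and generate one command per pilot.
rng = random.Random(0)
pilots = [sample_pilot(rng) for _ in range(3)]
cmds = [pilot_command(p, ideal=1.0, assist=0.8, rng=rng) for p in pilots]
```

Training a policy against a distribution of such pilots, rather than one fixed behaviour, is what lets the assistant generalise to real participants it has never seen.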

Attachment
20230203_BK21_정호암 교수님(김종백 교수님).jpg