
Academic Lecture: Dip Transform for 3D Shape Acquisition

Views: 191    Published: 2017-11-20 12:18

Title: Dip Transform for 3D Shape Acquisition

Time: 10:30 AM - 12:00 PM, 23 November 2017

Location: Room 938, College of Computer Science and Software Engineering


Abstract:

3D shape acquisition and reconstruction methods are mostly based on optical devices that yield point clouds. Notably, these techniques fall short in cases where the shapes contain highly occluded parts that are inaccessible to the scanner's line of sight. In this talk we present a novel three-dimensional shape acquisition and reconstruction method based on the well-known Archimedes equality between fluid displacement and the submerged volume. By repeatedly dipping a shape in a liquid at different angles and measuring its volume displacement, we generate the dip transform: a novel volumetric shape representation that defines its surface. The key feature of our method is that it employs fluid displacement as the shape sensor. Unlike optical sensors, the liquid has no line-of-sight requirements; it penetrates cavities and hidden parts of the object, as well as transparent and glossy materials, thus bypassing all visibility and optical limitations of conventional scanning devices. Our new scanning approach is implemented using a dipping robot arm and a bath of water, in which the water elevation is measured.
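To give a feel for the idea behind the dip transform (not the speaker's actual implementation), the following minimal Python sketch simulates the displacement curve for a single dipping direction on a hypothetical voxelized shape: as the shape is lowered into the liquid, every voxel below the liquid surface contributes to the displaced volume. The function name `dip_transform_1d`, the voxel-grid representation, and the toy cube are illustrative assumptions only.

```python
import numpy as np

def dip_transform_1d(voxels, direction, num_depths=64):
    """Simulate the fluid-displacement curve for one dipping direction.

    voxels   : boolean 3D occupancy grid of the shape (True = solid)
    direction: unit 3-vector along which the shape is lowered into the liquid
    Returns the cumulative submerged volume (in voxel units) at each dip depth,
    i.e. one slice of the dip transform.
    """
    # Height of every solid voxel measured along the dipping direction.
    idx = np.argwhere(voxels)                      # (N, 3) voxel coordinates
    heights = idx @ np.asarray(direction, float)   # projection onto direction

    # By Archimedes' equality, the displaced fluid volume at a given depth
    # equals the number of voxels that lie below the liquid surface.
    depths = np.linspace(heights.min(), heights.max(), num_depths)
    return np.array([(heights <= d).sum() for d in depths])

# Toy example: a 2x2x2 solid cube dipped along the z axis.
cube = np.zeros((4, 4, 4), dtype=bool)
cube[1:3, 1:3, 1:3] = True
curve = dip_transform_1d(cube, direction=(0.0, 0.0, 1.0))
print(curve)  # monotonically increasing displacement, saturating at 8 voxels
```

Repeating such measurements over many dipping directions is what the talk's dip transform aggregates into a volumetric representation of the shape.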


Bio:

Kfir Aberman received the B.Sc. (summa cum laude) and M.Sc. (cum laude) degrees in electrical engineering from the Technion—Israel Institute of Technology, Haifa, Israel, in 2010 and 2016, respectively. He is currently a Researcher with the Advanced Innovation Center for Future Visual Entertainment at the Beijing Film Academy, China. His research interests include non-optical 3D shape acquisition methods, sampling theory, and deep learning.
