Welcome to the course!
Please sign up on Piazza.
General Information

Times & Places
TuTh 3:30PM - 4:50PM, EBU3B 4140
| Role | Name | Office Hours | Location |
|---|---|---|---|
| Instructor | Hao Su (email@example.com) | Mo 2:00PM-3:00PM | CSE Building 4114 |
| Course Assistant | Vignesh Gokul (firstname.lastname@example.org) | Mo, Fr 5:00PM-6:30PM | CSE Building B250A |
| | | We 6:00PM-7:30PM | CSE Building B270A |
Objectives

This course will explore state-of-the-art algorithms for both supervised and unsupervised machine learning on 3D data, covering analysis as well as synthesis. After a brief introduction to geometry foundations and representations, the focus of the course will be machine learning methods for 3D shape classification, segmentation, and symmetry detection, as well as novel shape synthesis. Techniques for analyzing not only individual 3D models but entire collections of them, through computing alignments and maps or correspondences, will be discussed. Deep neural architectures appropriate for data in the form of point clouds or graphs will also be studied, as well as architectures that can associate semantic information with object models, including functionality. Finally, generative models for 3D shape design will be covered, for example adaptations of generative adversarial networks (GANs). Data sources for the course include public 3D model repositories such as the Trimble 3D Warehouse or Yobi3D, and semantic annotation knowledge bases such as ShapeNet.
Prerequisites

Background assumed includes basic material in linear algebra, optimization, machine learning, and graphical models.
Grading

- In-class quizzes (~10 times) 10%
- Homeworks (4 assignments) 50% (distribution: 5%, 15%, 15%, 15%)
- Course project presentation 20%
- Course project writeup 20%
- Best paper award (Top 3): 10% bonus
- There will not be a final exam.
- 7 late days in total with no penalty.
Syllabus

The planned syllabus is as follows. Some content may be added or removed depending on in-class interaction and other circumstances.
- Introduction to Machine Learning and Deep Learning
- Machine learning paradigm
- Optimization, Gradient Descent, Back Propagation
- Linear Classifier, k-NN Classifier
- Multi-layer Perceptron, Convolutional Neural Network (CNN)
- Why is deep learning more effective?
- Non-convex Optimization Issues in Deep Learning
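As a small taste of this first unit, training a linear classifier by gradient descent (the one-layer case of back propagation) can be sketched in a few lines of numpy. The data, learning rate, and iteration count below are illustrative choices, not material from the course:

```python
import numpy as np

# Toy logistic-regression classifier trained by batch gradient descent.
rng = np.random.default_rng(0)

# Two well-separated 2-D point clusters with labels 0/1 (made-up data).
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid
    grad_z = (p - y) / len(y)      # d(loss)/dz for the cross-entropy loss
    w -= lr * (X.T @ grad_z)       # chain rule through the linear layer
    b -= lr * grad_z.sum()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The same gradient-descent loop, applied layer by layer via the chain rule, is what back propagation does in a multi-layer perceptron or CNN.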
- Shape Representation, Geometry Parameterization, Machine Learning on Extrinsic Geometry
- Multi-view Representation, the Novel-view Synthesis Problem
- Volumetric Representation, 3D CNN
- Parametric Representation of Surfaces, Differential Geometry
- Point Cloud Representation, EMD, PointNet
- Shape Grammar, Procedural Modeling
- Simulation for Learning
- Render for CNN
- Domain Adaptation, GAN
- Intuitive Physics
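The point-cloud distances in this unit can be illustrated with the Chamfer distance, a common cheaper stand-in for EMD (true EMD solves an optimal-transport matching rather than taking nearest neighbors). A minimal numpy sketch on made-up clouds:

```python
import numpy as np

def chamfer(P, Q):
    """Symmetric Chamfer distance between point clouds P and Q (squared)."""
    # Pairwise squared distances, shape (len(P), len(Q)).
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    # Mean nearest-neighbor distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Two identical toy clouds: distance is exactly zero.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
```

Because the nearest-neighbor assignment need not be a bijection, Chamfer can underestimate EMD; architectures like PointNet are typically trained with either, trading accuracy for speed.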
- Graph Laplacian, Intrinsic Geometry
- Spectral Graph Theory
- Extrinsic vs Intrinsic Geometry
- Laplace-Beltrami Operator
- Heat Kernel and Wave Kernel signatures
- Graph CNNs, Deep Learning on Manifolds
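The graph Laplacian at the center of this unit is easy to compute directly; a minimal numpy sketch on a hypothetical toy graph, showing the classic spectral fact that the multiplicity of eigenvalue 0 equals the number of connected components:

```python
import numpy as np

# Toy graph: two disjoint triangles (made-up example).
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Unnormalized graph Laplacian L = D - A.
L = np.diag(A.sum(1)) - A

# L is symmetric positive semidefinite; its zero eigenvalues count components.
eigvals = np.linalg.eigvalsh(L)
num_zero = int((np.abs(eigvals) < 1e-9).sum())
```

The eigenvectors of the same matrix are the "graph Fourier basis" that spectral graph CNNs filter in, and on a mesh the analogous operator is the discretized Laplace-Beltrami operator.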
- Map Networks
- Rigid and Non-rigid Shape Registration
- Functional Map, Canonical Correlation Analysis
- Cycle Consistency
- Functional Map Network, Deep Functional Map Network
- Unsupervised Learning by Map Networks, Cycle GAN
- Map Networks on Shape Parts
- Map Networks across Data Domains (Image Captioning, Joint Image-Shape Networks)
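The cycle-consistency idea above can be illustrated with toy point-to-point maps stored as index arrays: composing the maps around a closed cycle of shapes should recover the identity, and the fraction of fixed points gives a simple consistency score. All maps below are made up for illustration:

```python
import numpy as np

def compose(f, g):
    """Composition (g after f): point i maps to g[f[i]]."""
    return g[f]

n = 5
f_ab = np.array([1, 2, 3, 4, 0])   # toy map, shape A -> shape B
f_bc = np.array([4, 0, 1, 2, 3])   # toy map, shape B -> shape C
f_ca = np.array([0, 1, 2, 3, 4])   # toy map, shape C -> shape A

# Going around the cycle A -> B -> C -> A.
cycle = compose(compose(f_ab, f_bc), f_ca)
consistency = (cycle == np.arange(n)).mean()  # fraction of fixed points
```

Map networks turn this check into a supervision signal: deviation from the identity around cycles is penalized even when no ground-truth correspondences exist, which is also the idea behind CycleGAN's cycle loss.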
- Topological Data Analysis
- Topology Review, Complexes
- Homology Groups, Persistent Homology
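For the simplest case of persistent homology, 0-dimensional persistence of the sublevel sets of a 1-D function can be computed with a short union-find sweep. This is a minimal sketch of the "elder rule" (when two components merge, the younger one dies), not a general complex library:

```python
def persistence_pairs(values):
    """(birth, death) pairs of connected components of sublevel sets of a 1-D function."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    parent = {}
    birth = {}
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:                      # sweep points in increasing function value
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):         # merge with already-born neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # Elder rule: the component with the larger birth value dies here.
                    old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                    pairs.append((birth[young], values[i]))
                    parent[young] = old

    pairs = [(b, d) for (b, d) in pairs if d > b]   # drop zero-persistence pairs
    pairs.append((min(values), float("inf")))       # the global minimum never dies
    return sorted(pairs)

# Two local minima: the shallower one (born at 1.0) dies at the saddle value 3.0.
pairs = persistence_pairs([0.0, 3.0, 1.0, 4.0])
```

Higher-dimensional persistent homology generalizes this from components to loops and voids by filtering a simplicial complex instead of a line.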
- Open Problems in Shape Understanding