Video by Kevin D Schmidt
Dr. Yubei Chen
Air Force Research Laboratory
March 22, 2024 | 01:18:31
In this edition of QuEST, Dr. Yubei Chen discusses his work on Principles of Unsupervised Representation Learning.
Key Moments in the video include:
Introduction to Dr. Chen’s lab, mentors, and collaborators
Current Machine Learning Paradigm
Natural intelligence learns with intrinsic objectives
Future machine learning paradigm and unsupervised representation learning
Defining unsupervised representation learning
Supervision and similarities: spatial co-occurrence, temporal co-occurrence, Euclidean neighborhoods
Main points:
- derive an unsupervised representation transform from neural and statistical principles
- simplification and unification of deep unsupervised learning
- the convergence
Neural principle: sparse coding
Statistical principle: manifold learning
Manifold learning and locally linear embedding
Sparse manifold transform (a toy sketch combining both principles follows the audience questions below)
Encoding of a natural video sequence
Recap of Main points
Audience questions:
On the three sources of similarity, do you think there is a way to map semantic similarities from crowdsourced resources like ConceptNet?
Are there equivalencies here with cryo-EM analyses?
One of the things that made deep learning what it is was its performance on ImageNet with AlexNet, right? The same is true of transformers and language translation. So how are you going to demonstrate that this impressive body of work is better than whatever state of the art is out there? How are you going to demonstrate that it's useful?
Follow-up: Is there a benchmark or standard data set, which you might produce, that establishes something about representation learning?
Co-occurrence is great for a lot of things, but it is a poor choice for comparison when there are different dimensions of evaluation you might want to apply. Are you thinking about extending your ideas beyond things that are co-occurring or similar along one dimension and further away?
Is there any sort of procedure for pruning vestigial actions that are no longer necessary for the interpolated tasks, so that they won't just propagate down to future interpolations?
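To make the two principles from the key moments concrete, here is a minimal, self-contained sketch. It is not Dr. Chen's implementation and is not taken from the talk: it pairs a generic ISTA sparse-coding step (the neural principle) with a slowness-style linear projection that pulls the sparse codes of temporally adjacent frames together (a toy stand-in for the statistical, sparse-manifold-transform idea). The random dictionary, the synthetic "video", the ridge constant, and every parameter value are placeholder assumptions chosen only to keep the example runnable.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy "video": T frames, each a flattened patch of D pixels (random stand-in data).
T, D, K = 200, 64, 128            # number of frames, pixel dimension, dictionary size
frames = rng.standard_normal((T, D))

# Random unit-norm dictionary as a placeholder for a learned sparse-coding dictionary.
Phi = rng.standard_normal((D, K))
Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)

def ista(x, Phi, lam=0.1, n_iter=100):
    """Sparse-code one frame: minimize 0.5*||x - Phi a||^2 + lam*||a||_1 via ISTA."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2                  # 1 / Lipschitz constant
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a = a - step * (Phi.T @ (Phi @ a - x))                # gradient step on the quadratic
        a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # soft threshold
    return a

# Neural principle: a sparse code for every frame.
A = np.stack([ista(x, Phi) for x in frames])                  # shape (T, K)

# Statistical principle (toy stand-in for the sparse manifold transform): find a
# linear map P that keeps the codes of temporally adjacent frames close, i.e.
# minimizes ||P a_t - P a_{t+1}||^2 while the projected codes stay decorrelated.
# That is a generalized eigenproblem: smallest eigenvectors of the difference
# covariance with respect to the code covariance.
diffs = np.diff(A, axis=0)
C_diff = diffs.T @ diffs / len(diffs)
C_code = A.T @ A / len(A) + 1e-6 * np.eye(K)                  # ridge keeps it positive definite

n_dims = 16                                                   # embedding dimensionality (arbitrary)
_, eigvecs = eigh(C_diff, C_code)                             # eigenvalues ascend: slowest first
P = eigvecs[:, :n_dims].T

embedding = A @ P.T                                           # (T, n_dims) slowly varying representation
print(embedding.shape)
```

In the actual sparse manifold transform, the dictionary and the geometry come from natural image and video statistics rather than random data, and the embedding is derived from the spatial and temporal co-occurrence structure discussed in the talk; the slowness objective above is only one illustrative choice.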
Tags
quest
AFRL
ACT3
unsupervised representation learning