Hong Kong Baptist University
Visual Semantic Knowledge Discovery for Multimodal Intent Recognition
Yaoyang Cheng* (*Corresponding author for this work)
Department of Computer Science
Research output: Contribution to journal › Journal article › peer-review
Fingerprint
Dive into the research topics of 'Visual Semantic Knowledge Discovery for Multimodal Intent Recognition'. Together they form a unique fingerprint.
Keyphrases
Knowledge Discovery 100%
Semantic Knowledge 100%
Visual Semantics 100%
Multimodal Intent Recognition 100%
Spatiotemporal 40%
Gradient-based 40%
Intention Recognition 40%
Adaptive Perturbation 40%
State-of-the-art Techniques 20%
Video-based 20%
Video Frames 20%
Semantic Features 20%
Audio 20%
User Intention 20%
Extracting Features 20%
Feature Map 20%
Human Interaction 20%
Perturbation Parameter 20%
Convolutional Layer 20%
Lipschitz Condition 20%
Body Language 20%
Adaptive Gradient 20%
Rich Semantics 20%
Body Expressions 20%
Multimodal Fusion 20%
Video Modality 20%
Semantic Cues 20%
Video Swin-transformer 20%
Knowledge Discovery Model 20%
Elementary Unit 20%
Instance-aware 20%
Knowledge Features 20%
Computer Science
Knowledge Discovery 100%
Semantic Knowledge 100%
Intent Recognition 100%
Semantic Feature 16%
Deep Neural Network 16%
Feature Map 16%
Human Interaction 16%
Interpretability 16%
Convolution Layer 16%
Elementary Unit 16%
Lipschitz Condition 16%
Perturbation Parameter 16%
Swin Transformer 16%