BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Research - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Research
X-ORIGINAL-URL:https://www.pnw.edu/research
X-WR-CALDESC:Events for Research
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20251212T143000
DTEND;TZID=America/Chicago:20251212T153000
DTSTAMP:20260403T163829Z
CREATED:20251203T152336Z
LAST-MODIFIED:20251203T154753Z
UID:10000260-1765549800-1765553400@www.pnw.edu
SUMMARY:"Less is More — Toward Efficient Vision Transformers in Perception and Reasoning" Seminar
DESCRIPTION:Join us for the CS & CIVS Distinguished Speaker Seminar presented by Professor Yang Ni. \nThe remarkable success of recent large foundation models stems from the versatility of the transformer architecture and large-scale pre-training on massive multimodal datasets. Originally introduced in natural language processing\, the attention-centric transformer design has become the dominant paradigm across domains. \nMore recently\, transformers have emerged as the backbone of modern computer vision models. However\, when high-resolution images are processed\, they are decomposed into thousands of tokens\, orders of magnitude more than typical text inputs\, resulting in quadratic computational complexity. \nTo address this challenge\, current research has focused on reducing the number of image tokens participating in attention. A central question then arises: which tokens truly carry essential information\, and which can be safely pruned? \nThis talk provides an overview of recent advances in token reduction strategies for efficient transformers\, with an emphasis on large vision-language models (LVLMs)\, where perception and reasoning are deeply integrated. \nLocation \nCIVS Theater\, Powers Building\nHammond Campus
URL:https://www.pnw.edu/research/event/less-is-more-towards-efficient-vision-transformers-in-perception-and-reasoning-seminar/
CATEGORIES:Student Life,University Calendar
END:VEVENT
END:VCALENDAR