Distinguished Lecture: Safe, Trustworthy Autonomous Mobility: A Human-Centered Symbiotic Systems Perspective

Autonomous mobility is largely approached as a vehicle-centric problem. Persistent challenges in safety, scalability, and public trust suggest a deeper issue: “intelligence” is often considered in isolation rather than as distributed. This presentation argues that truly safe and trustworthy autonomy will emerge only through symbiotic computational systems, in which perception, decision-making, and control are distributed across humans, machines, and infrastructure. The presentation begins with an overview of four decades of progress in autonomous driving and related advances in driver assistance technologies. This is followed by a discussion of the central thesis: that many failures in autonomous mobility stem not from algorithms alone, but from how system boundaries are defined: what is sensed, where intelligence resides, and how responsibility is shared. Framing autonomy as a systems-level problem, the talk draws on principles of distributed and embodied cognition to unify perspectives from robotics, artificial intelligence, human–computer interaction, and transportation engineering. Concrete examples from multidisciplinary research by the CVRR and LISA teams at UC San Diego, conducted on real vehicles in real-world driving environments and validated through both quantitative benchmarks and qualitative studies in collaboration with industry partners, illustrate how shared autonomy can tightly couple human state (e.g., intent, attention, readiness) with environmental context to enable safer and more adaptive human–AI interaction. The lecture also discusses how advances in foundation models, self-supervised learning, and active learning can improve generalization and robustness in safety-critical settings. The talk concludes with key open challenges, including multimodal foundation models for traffic ecosystems, human–AI co-adaptation, and continual learning under domain shift, all of which must be addressed to realize scalable, trustworthy autonomous mobility.
Co-sponsored by: Vishnu S. Pendyala, San Jose State University. Speaker(s): Professor Mohan Trivedi, Dr. Vishnu S. Pendyala. Room: MLK Room 225, Dr. Martin Luther King, Jr. Library (SJSU), 150 E San Fernando St, San Jose, California 95112, United States. Virtual: https://events.vtools.ieee.org/m/556950

IEEE Québec Seminar: Wireless Digital Twins: Key Considerations for Modeling, Building, Tuning, and Utilization

Zoom Link: https://ulaval.zoom.us/j/65778451409?pwd=B1j19PbbWPhyXWjxkTf9PjOfIekUCY.1

Talk Abstract: Digital twins of wireless environments offer new capabilities for communication network design and operation. They can be used offline to build site-specific datasets for pre-training and evaluating machine learning models, or online to provide real-time or near-real-time priors that aid communication system decisions on precoding, channel estimation, spectrum sharing, and resource allocation, among many other applications. In this talk, I will present key aspects and considerations for modeling, building, calibrating, and utilizing these digital twins to maximize their gains while balancing constraints on cost, latency, and computational overhead. I will also introduce DeepVerse 6G, the world’s first large-scale digital-twin research platform, which provides high-fidelity multi-modal sensing and communication “true” digital twin datasets to accelerate research and development across a wide range of applications.

Speaker Biography: Ahmed Alkhateeb received his B.S. and M.S. degrees in Electrical Engineering from Cairo University, Egypt, in 2008 and 2012, and his Ph.D. degree in Electrical and Computer Engineering from The University of Texas at Austin, USA, in 2016. After his Ph.D., he worked as a Wireless Communications Researcher at the Connectivity Lab, Facebook, before joining Arizona State University (ASU) in Spring 2018, where he is currently an Associate Professor in the School of Electrical, Computer, and Energy Engineering. His research interests are in the broad areas of wireless communications, signal processing, machine learning, and applied math. Dr. Alkhateeb is the recipient of the 2012 MCD Fellowship from The University of Texas at Austin, the 2016 IEEE Signal Processing Society Young Author Best Paper Award for his work on hybrid precoding and channel estimation in millimeter-wave communication systems, and the 2021 NSF CAREER Award to support his research on leveraging machine learning for large-scale MIMO systems.

Québec City, Quebec, Canada, G1X 4C5

IEEE New Era AI 2026 Workshop: “Building AI Applications with Amazon Bedrock” by AWS Lambda

Early Bird: 50% off before May 1st. In-person limited to 70 seats; remote attendance is unlimited. Pricing (50% off): Student $10, IEEE Member $17, Non-IEEE $30, Remote $20; Certificate $7.

In this hands-on workshop, participants build AI applications using Amazon Bedrock. Through guided labs, they progress from foundational text generation and chatbots to advanced patterns including RAG, multimodal image processing, structured data extraction, and security guardrails. Each lab produces a working prototype that participants can extend. Along the way, participants gain experience with prompt engineering techniques, learn to work with both text and image APIs, and explore how to secure AI applications with content filtering, PII masking, and prompt attack prevention. The workshop is designed to be modular: participants can focus on the tracks most relevant to their interests and move at their own pace.

Who Is This For? The skills from this workshop apply across industries and roles. Healthcare professionals can use image understanding and multimodal capabilities to prototype tools for analyzing medical imagery like X-rays or pathology slides. Data scientists and analysts can apply text extraction and summarization patterns to process research papers, survey responses, or unstructured datasets at scale. Software engineers can integrate RAG-based chatbots into existing applications to surface answers from internal documentation or knowledge bases. Business analysts can use structured data extraction to pull insights from contracts, customer feedback, or financial reports without writing complex parsing logic. Students and researchers in any field can prototype AI assistants that help with literature review, data labeling, or experiment documentation. Whether you write code daily or have never touched Python, the workshop's modular structure lets you engage at the level that fits your background.
Speakers

Nithin Vommi is an Engineering Manager at Amazon Web Services, where he has spent nearly a decade building and scaling large-scale distributed systems. He leads AWS Lambda streaming and queueing platforms, focusing on serverless architectures and event-driven systems. Nithin has published technical articles, holds patents, and speaks at AWS events including re:Invent. Over his time at AWS, he has launched more than a dozen customer-facing features on serverless platforms, several highlighted at AWS re:Invent and broadly adopted by enterprises. His current interests include serverless, generative AI, and event-driven design patterns.

Tejas Ghadge is the engineering head for AWS Amplify, AWS Lambda Event Driven Applications, and AWS Lambda Developer Experience, where he leads an organization of 100+ engineers and managers across multiple sites in the US and Canada. With over 14 years of experience at AWS, Tejas brings deep operational and architectural experience from operating large-scale (millions of requests per second) event-driven systems, leading and analyzing hundreds of operational incidents, and successfully launching dozens of delightful customer features for AWS Lambda and AWS Amplify customers.

Technical Requirements
1. Laptop with a modern web browser (Chrome, Firefox, or Edge)
2. Stable internet connection
3. No local software installation required; labs run in a browser-based IDE and a temporarily provided AWS account
4. No prior machine learning or data science experience required

Key Takeaways for Participants
1. How to use Amazon Bedrock APIs for text generation, embeddings, image generation, and multimodal tasks
2. Working prototypes across six tracks: text generation, RAG chatbot, document summarizer, image generator/editor, multimodal chatbot, and structured data extractor
3. Practical prompt engineering techniques applicable across models: summarization, content creation, translation, analysis, and code generation
4. Hands-on experience implementing guardrails for content blocking, PII masking, and prompt injection defense

An IEEE professional certificate and credit hours will be offered to attendees who pass a short quiz at the end of class ($7 fee).

Parking: Click on the following link: https://www.offstreet.io/events/XM21K0JV and enter your vehicle license plate.

Co-sponsored by: Neha, Shiny, Mike, Anil, Sheree. Speaker(s): Nithin. Room: 142, Bldg: Harding Building, 1215 E Columbia St, Seattle, Washington, United States, 98122. Virtual: https://events.vtools.ieee.org/m/556516
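For attendees curious what the text-generation labs involve, here is a minimal sketch of calling Amazon Bedrock from Python with boto3's `bedrock-runtime` Converse API. This is not the workshop's actual lab code: the model ID, prompt, and helper names are illustrative assumptions, and running the final call requires AWS credentials such as the temporarily provided account mentioned in the requirements.

```python
# Minimal sketch of a Bedrock text-generation call (illustrative, not lab code).
# Assumptions: boto3 is installed, AWS credentials are configured, and the
# chosen model is enabled in the account's Bedrock model access settings.

def build_converse_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble the keyword arguments for bedrock-runtime's converse() call."""
    return {
        # Placeholder model ID; the workshop may use a different model.
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

def generate_text(prompt: str) -> str:
    """Send the prompt to Bedrock and return the model's text reply."""
    # Imported here so the request builder above stays testable offline.
    import boto3

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_converse_request(prompt))
    # Converse responses nest the reply under output -> message -> content.
    return response["output"]["message"]["content"][0]["text"]
```

Separating the request builder from the network call is a deliberate choice: the payload shape can be inspected and unit-tested without an AWS account, while `generate_text` stays a thin wrapper around the API.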