【Talk】2019/11/19 (Tue.) @ Room 816, Engineering Building 4 (智易空間): Prof. Geoffrey Li (Georgia Tech, USA) and Prof. Li-Chun Wang (NCTU, Taiwan) on "Deep Learning based Wireless Resource Allocation / Deep Learning in Physical Layer Communications / Machine Learning Interference Management"
The IBM Center has invited Prof. Geoffrey Li (Georgia Tech, USA) and Prof. Li-Chun Wang (NCTU, Taiwan) to give these talks. Interested faculty and students are welcome to register!
Title: Deep Learning based Wireless Resource Allocation / Deep Learning in Physical Layer Communications / Machine Learning Interference Management
Speakers: Prof. Geoffrey Li and Prof. Li-Chun Wang
Time: 2019/11/19 (Tue.) 9:00–12:00
Venue: Room 816, Engineering Building 4, NCTU (智易空間)
Registration: https://forms.gle/vUr3kYBDB2vvKtca6
Fees and admission:
1. Fee (includes handouts, lunch, and refreshments): (1) NCTU students free; students from other schools NT$300 per person. (2) Industry professionals and faculty NT$1,500 per person.
2. Capacity: 60, admitted in order of completed registration (registration is complete only after payment).
※ How to register and pay:
1. Registration: fill in your information at the registration link above.
2. Payment:
(1) Pay in person at Room 813, Engineering Building 4, NCTU (please call ahead).
(2) Or by bank transfer:
Account name: Tzu-Ling Tseng (曾紫玲)
Account number: 075506235774 (Cathay United Bank, Hsinchu Science Park Branch, bank code 013)
After transferring, please provide your name, the transfer time, and the last five digits of your account number so the payment can be matched.
※ Receipts will be issued on the day of the course.
Contact: Tzu-Ling Tseng (曾紫玲), Tel: 03-5712121 ext. 54599, Email: tzuling@nctu.edu.tw
Abstracts:
1. Deep Learning based Wireless Resource Allocation
【Abstract】
Judicious resource allocation is critical to mitigating interference, improving network efficiency, and ultimately optimizing wireless network performance. The traditional wisdom is to explicitly formulate resource allocation as an optimization problem and then exploit mathematical programming to solve it to a certain level of optimality. However, as wireless networks become increasingly diverse and complex, as in high-mobility vehicular networks, the current design methodologies face significant challenges and thus call for a rethinking of the traditional design philosophy. Meanwhile, deep learning represents a promising alternative because of its remarkable power to leverage data for problem solving. In this talk, I will present our research progress in deep learning based wireless resource allocation. Deep learning can help solve optimization problems for resource allocation or can be used for resource allocation directly. We will first present our results on using deep learning to solve linear sum assignment problems (LSAP) and to reduce the complexity of mixed-integer non-linear programming (MINLP), and introduce graph embedding for wireless link scheduling. We will then discuss how to use deep reinforcement learning directly for wireless resource allocation, with applications in vehicular networks.
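As a concrete (and entirely illustrative) example of the "deep learning to solve LSAP" direction, the sketch below trains a small fully connected network to imitate the exact Hungarian solver on random cost matrices; the problem size, architecture, and training budget are our own hypothetical choices, not the speaker's setup.

    # Hypothetical sketch: learn to approximate the linear sum assignment
    # problem (LSAP). Labels come from the exact Hungarian solver.
    import numpy as np
    import torch
    import torch.nn as nn
    from scipy.optimize import linear_sum_assignment

    N = 4  # assignment size (illustrative)

    def make_batch(batch_size):
        costs = np.random.rand(batch_size, N, N).astype(np.float32)
        # optimal column for each row, from the Hungarian algorithm
        labels = np.stack([linear_sum_assignment(c)[1] for c in costs])
        return torch.from_numpy(costs), torch.from_numpy(labels)

    model = nn.Sequential(
        nn.Flatten(),                      # (batch, N, N) -> (batch, N*N)
        nn.Linear(N * N, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, N * N),             # N column scores per row
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(2000):
        x, y = make_batch(128)
        logits = model(x).view(-1, N)      # one softmax per row
        loss = loss_fn(logits, y.view(-1))
        opt.zero_grad(); loss.backward(); opt.step()

    # Inference is a single forward pass, far cheaper than re-running the
    # Hungarian algorithm per instance (rows may occasionally collide).
    x, y = make_batch(1000)
    pred = model(x).view(-1, N, N).argmax(dim=2)
    print("per-row accuracy:", (pred == y).float().mean().item())

The point of the exercise is the complexity trade-off: the network amortizes the solver's cost into training time, which is also the appeal for the MINLP-type resource allocation problems mentioned above.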
2. Deep Learning in Physical Layer Communications
【Abstract】
It has been demonstrated recently that deep learning (DL) has great potential to break the bottlenecks of conventional communication systems. In this talk, we present our recent work on DL in physical layer communications. DL can improve the performance of each individual (traditional) block in a conventional communication system or jointly optimize the whole transmitter or receiver. Therefore, we can categorize the applications of DL in physical layer communications into those with and those without block processing structures. For DL based communication systems with block structures, we present joint channel estimation and signal detection based on a fully connected deep neural network, model-driven DL for signal detection, and some experimental results. For those without block structures, we describe our recent endeavors in developing end-to-end learning communication systems with the help of deep reinforcement learning (DRL) and generative adversarial networks (GANs). At the end of the talk, we outline some open research topics in the area.
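For the block-structured case, a minimal sketch of the fully connected detection idea is below (our simplification under a flat-fading BPSK model, not the talk's actual setup): the network maps raw received pilots plus data directly to bits, so the channel is never estimated explicitly.

    # Illustrative only: joint channel estimation + detection folded into one
    # fully connected network. Frame sizes and noise level are assumptions.
    import torch
    import torch.nn as nn

    N_BITS = 16        # data bits per frame
    PILOT_LEN = 8      # known pilot symbols per frame

    def simulate_batch(batch):
        bits = torch.randint(0, 2, (batch, N_BITS)).float()
        sym = 2 * bits - 1                                  # BPSK mapping
        pilots = torch.ones(batch, PILOT_LEN)               # known pilots
        h = torch.randn(batch, 1)                           # per-frame flat fading gain
        noise = lambda n: 0.1 * torch.randn(batch, n)
        rx = torch.cat([h * pilots + noise(PILOT_LEN),      # received pilots
                        h * sym + noise(N_BITS)], dim=1)    # received data
        return rx, bits

    net = nn.Sequential(
        nn.Linear(PILOT_LEN + N_BITS, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, N_BITS),            # one logit per transmitted bit
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(3000):
        rx, bits = simulate_batch(256)
        loss = bce(net(rx), bits)
        opt.zero_grad(); loss.backward(); opt.step()

    rx, bits = simulate_batch(10000)
    ber = ((net(rx) > 0).float() != bits).float().mean()
    print("bit error rate:", ber.item())

Because the pilots pass through the same unknown gain as the data, the network implicitly learns to divide out the channel, which is the essence of merging the separate estimation and detection blocks.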
3. Machine Learning Interference Management
【Abstract】
In this talk, we discuss how machine learning algorithms can address the performance issues of high-capacity ultra-dense small cells in an environment with dynamic traffic patterns and time-varying channel conditions. We introduce a bi-adaptive self-organizing network (Bi-SON) to exploit the power of data-driven resource management in ultra-dense small cells (UDSC). On top of the Bi-SON framework, we further develop an affinity propagation unsupervised learning algorithm to improve the energy efficiency of operator-deployed small cells and to reduce the interference of plug-and-play small cells. Finally, we discuss the opportunities and challenges of reinforcement learning and deep reinforcement learning (DRL) in more decentralized, ad-hoc, and autonomous modern networks, such as Internet of Things (IoT), vehicle-to-vehicle, and unmanned aerial vehicle (UAV) networks.
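As a hedged illustration of the affinity propagation step (our toy example, not the Bi-SON implementation), the snippet below clusters small-cell positions with scikit-learn; affinity propagation picks exemplar cells and the number of clusters on its own, which is what makes it attractive for plug-and-play deployments.

    # Toy example: cluster small cells by location so that nearby cells can
    # coordinate resources. All positions and parameters are made up.
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(0)
    cells = rng.uniform(0, 500, size=(60, 2))   # 60 cells in a 500 m square

    ap = AffinityPropagation(damping=0.9, random_state=0).fit(cells)
    print("exemplar cells:", ap.cluster_centers_indices_)
    print("cluster of each cell:", ap.labels_)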
Bio:
Dr. Geoffrey Li is a Professor with the School of Electrical and Computer Engineering at the Georgia Institute of Technology. He was with AT&T Labs – Research for five years before joining Georgia Tech in 2000. His general research interests include statistical signal processing and machine learning for wireless communications. In these areas, he has published around 500 refereed journal and conference papers, in addition to over 40 granted patents. His publications have been cited over 37,000 times, and he has been listed as a World's Most Influential Scientific Mind, also known as a Highly Cited Researcher, by Thomson Reuters almost every year since 2001. He has been an IEEE Fellow since 2006. He received the 2010 IEEE ComSoc Stephen O. Rice Prize Paper Award, the 2013 IEEE VTS James Evans Avant Garde Award, the 2014 IEEE VTS Jack Neubauer Memorial Award, the 2017 IEEE ComSoc Award for Advances in Communication, and the 2017 IEEE SPS Donald G. Fink Overview Paper Award. He also won the 2015 Distinguished Faculty Achievement Award from the School of Electrical and Computer Engineering, Georgia Tech.
Li-Chun Wang (M'96 – SM'06 – F'11) received the Ph.D. degree from the Georgia Institute of Technology, Atlanta, in 1996. From 1996 to 2000, he was with AT&T Laboratories, where he was a Senior Technical Staff Member in the Wireless Communications Research Department. Currently, he is Chair Professor of the Department of Electrical and Computer Engineering and Director of the Big Data Research Center at National Chiao Tung University in Taiwan. Dr. Wang was elevated to IEEE Fellow in 2011 for his contributions to cellular architectures and radio resource management in wireless networks. He was a co-recipient of the IEEE Communications Society Asia-Pacific Board Best Award (2015), the Y. Z. Hsu Scientific Paper Award (2013), and the IEEE Jack Neubauer Best Paper Award (1997). He won the Distinguished Research Award of the Ministry of Science and Technology in Taiwan twice (2012 and 2016). He is currently an Associate Editor of the IEEE Transactions on Cognitive Communications and Networking. His current research interests are in the areas of software-defined mobile networks, heterogeneous networks, and data-driven intelligent wireless communications. He holds 23 US patents, has published over 300 journal and conference papers, and co-edited the book Key Technologies for 5G Wireless Systems (Cambridge University Press, 2017).
【Talk】Clock Synchronization in Wireless Sensor Networks: from Traditional Estimation Theory to Distributed Signal Processing
You are all invited to come.
Topic: Clock Synchronization in Wireless Sensor Networks: from Traditional Estimation Theory to Distributed Signal Processing
Time: December 22, 2017 (Friday, 11:00 AM – 12:00 PM)
Venue: Room 210, Engineering Building 4, NCTU
Speaker: Prof. Yik-Chung Wu / The University of Hong Kong
Language: English
Abstract: In this talk, we will review advances in clock synchronization for wireless sensor networks over the past few years. We will begin with optimal clock synchronization algorithms in the pairwise setting, where the maximum likelihood (ML) estimator from traditional estimation theory is the major tool. Then, we will discuss the more challenging network-wide synchronization, in which every node in the network needs to synchronize with every other node. In this case, more powerful distributed signal processing techniques are required. In particular, we will illustrate how Belief Propagation (BP), the distributed Kalman Filter (KF), and the Alternating Direction Method of Multipliers (ADMM) help in solving network-wide synchronization.
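To give a flavor of the pairwise ML starting point, here is a small numerical sketch (ours, under an assumed Gaussian-delay two-way exchange model): node A stamps times T1 and T4, node B stamps T2 and T3, and with i.i.d. Gaussian link delays the ML clock-offset estimate reduces to averaging ((T2 - T1) - (T4 - T3)) / 2 over rounds.

    # Two-way exchange: T2 = T1 + d + theta + n1, T4 = T3 + d - theta + n2,
    # so (T2 - T1) - (T4 - T3) = 2*theta + noise, and averaging recovers theta.
    import numpy as np

    rng = np.random.default_rng(1)
    theta = 3.7e-3     # true offset of node B's clock (seconds, assumed)
    d = 10e-3          # mean one-way delay (assumed)
    rounds = 50

    T1 = np.cumsum(rng.uniform(0.5, 1.0, rounds))              # A sends
    T2 = T1 + d + theta + 1e-4 * rng.standard_normal(rounds)   # B receives
    T3 = T2 + 0.01                                             # B replies
    T4 = T3 + d - theta + 1e-4 * rng.standard_normal(rounds)   # A receives

    theta_hat = np.mean((T2 - T1) - (T4 - T3)) / 2
    print(f"true offset {theta*1e3:.3f} ms, ML estimate {theta_hat*1e3:.3f} ms")

The network-wide problem replaces this closed form with iterative message passing (BP), filtering (KF), or consensus-style optimization (ADMM) over the whole graph.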
Bio: Yik-Chung Wu received the B.Eng. (EEE) degree in 1998 and the M.Phil. degree in 2001 from the University of Hong Kong (HKU). He received a Croucher Foundation scholarship in 2002 to pursue the Ph.D. degree at Texas A&M University, College Station, where he graduated in 2005. From August 2005 to August 2006, he was with Thomson Corporate Research, Princeton, NJ, as a Member of Technical Staff. Since September 2006, he has been with HKU, where he is currently an Associate Professor. He was a visiting scholar at Princeton University during the summers of 2011 and 2015. His research interests are in the general areas of signal processing, machine learning, and communication systems, and in particular distributed signal processing and robust optimization theory with applications to communication systems and the smart grid. Dr. Wu served as an Editor for IEEE Communications Letters and is currently an Editor for the IEEE Transactions on Communications and the Journal of Communications and Networks.
【Talk】Prof. Zhengya Zhang (U Michigan): Neuromorphic Computing Using Sparse Codes: From Algorithm to Hardware (July 16, 2015, Thursday, 10:30 AM – 12:00 PM)
You are all welcome to attend!
Title: Neuromorphic Computing Using Sparse Codes: From Algorithm to Hardware
Date: July 16, 2015 (Thursday, 10:30 AM – 12:00 PM)
Place: ED528, 5F, Engineering Building 4, NCTU (Guangfu Campus)
Speaker: Prof. Zhengya Zhang (University of Michigan, Ann Arbor)
Abstract:
Some of the latest advances in computer vision have been built upon our understanding of the mammalian primary visual cortex (V1). The receptive fields of V1 neurons can be compared to the basis functions underlying natural images. Learning the receptive fields allows us to carry out complex vision processing, including efficient image encoding, feature detection, and classification. Sparse coding is one development in unsupervised machine learning for training a network of neurons on natural images to extract receptive fields that resemble those of V1. We explore the dynamics of the sparse coding algorithm for an efficient mapping onto practical hardware. Design considerations involving tuning network and neuron responses have a significant impact on the neuron spiking pattern, which determines the fidelity of image processing and the efficiency of resource utilization. The spiking pattern can be further exploited to improve the performance and scalability of the hardware architecture. The soft neural computation is intrinsically error-tolerant, and many opportunities exist for approximating the neuron communication and computation when designing high-performance and energy-efficient image processing hardware.
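For readers who want the algorithmic core in a few lines, below is a hedged sketch (our simplification, not the hardware mapping discussed in the talk) of sparse-coding inference: given a dictionary D of learned receptive fields, find a sparse code a with x ≈ Da. We use ISTA here; the spiking LCA-style networks in this line of work minimize the same LASSO objective with thresholded neuron dynamics.

    # ISTA for the LASSO objective  min_a 0.5*||x - D a||^2 + lam*||a||_1.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pixels, n_neurons = 64, 128              # 8x8 patches, 2x overcomplete (assumed)
    D = rng.standard_normal((n_pixels, n_neurons))
    D /= np.linalg.norm(D, axis=0)             # unit-norm basis functions

    def ista(x, D, lam=0.1, n_iter=200):
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant (spectral norm^2)
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            a = a + D.T @ (x - D @ a) / L                         # gradient step
            a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0) # soft threshold
        return a

    x = rng.standard_normal(n_pixels)          # stand-in for a whitened image patch
    a = ista(x, D)
    print("active neurons:", np.count_nonzero(a), "of", n_neurons)

The soft threshold is what produces the sparse, spike-like activity pattern that the hardware architecture then exploits.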
Biography:
Zhengya Zhang received the B.Sc. degree from the University of Waterloo, Canada, in 2003, and the M.S. and Ph.D. degrees from the University of California, Berkeley, in 2005 and 2009, respectively. Since 2009, he has been with the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor, where he is currently an Associate Professor. His research is in the area of low-power and high-performance VLSI circuits and systems for computing, communications, and signal processing. Dr. Zhang received the Intel Early Career Faculty Award in 2013, the National Science Foundation CAREER Award in 2011, the David J. Sakrison Memorial Prize from UC Berkeley in 2009, and the Best Student Paper Award at the Symposium on VLSI Circuits in 2009. He is an Associate Editor of the IEEE Transactions on Circuits and Systems I and II and the IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
Host: Prof. Chia-Hsiang Yang (楊家驤), Department of Electronics Engineering, NCTU. Email: chy@nctu.edu.tw