🧔 When people talk about AI, or artificial intelligence,
it's worth remembering the field has been around a long time, and before its current popularity
the world once lived through an "AI Winter":
a period when almost every sector
turned its back on the very word AI,
and companies largely stopped focusing on it, investing in it, or researching it.
.
💥 But when Deep learning,
a subfield of AI, stepped onto the scene,
it reignited the whole field.
Machine learning, the superset
that Deep learning belongs to, boomed again as well,
even though it was already in wide use before.
.
Deep learning has many popular techniques,
but today we'll look at one called
➡ Convolutional Neural Networks,
abbreviated ConvNet,
or sometimes CNN (no, not the famous American news network).
.
This technique is popular in Computer Vision:
it gives computers "vision" 🤓
almost on par with human eyesight. Pretty cool, right?
(Strictly speaking, a computer sees an image only as numbers; it is the AI processing of those numbers that produces understanding.)
.
Example applications include:
- Object detection: detecting and localizing the objects that appear in an image
- Face recognition: recognizing faces, along with age, gender, and facial expression
- Image classification: deciding what category an image belongs to
and so on.
.
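To make the "convolution" in the name concrete, here is a minimal NumPy sketch of the core operation a ConvNet layer applies to an image (the tiny image and edge-detector kernel are invented for illustration; real networks learn many such kernels and stack them with nonlinearities and pooling):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image
    and take a weighted sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 4x4 "image" whose right half is bright, and a vertical-edge kernel
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

response = conv2d(image, kernel)
# The response peaks along the column where brightness jumps from 0 to 1,
# which is exactly how early CNN layers pick out edges and textures.
```

Stacking many learned kernels like this is what lets a CNN build up from edges to object parts to whole objects.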
✍ I once wrote up lecture notes on
Convolutional Neural Networks,
in case anyone is currently studying the subject
or would like them for review.
.
๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘
And a quick announcement (a small plug):
๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘๘
For anyone interested in an artificial intelligence book that is a best seller
in MEB's computer category:
📔 "Artificial Intelligence (AI) Made Easy: Understand It with High-School Math", Volume 1 (content in Thai)
You can order it at
👉 https://www.mebmarket.com/web/index.php?action=BookDetails&data=YToyOntzOjc6InVzZXJfaWQiO3M6NzoiMTcyNTQ4MyI7czo3OiJib29rX2lkIjtzOjY6IjEwODI0NiI7fQ&fbclid=IwAR11zxJea0OnJy5tbfIlSxo4UQmsemh_8TuBF0ddjJQzzliMFFoFz1AtTo4
.
A sample chapter:
👉 https://www.dropbox.com/s/fg8l38hc0k9b0md/chapter_example.pdf?fbclid=IwAR1DH5L4NPXTq0UEXsjcI-nxSyBoPzlCtaCM2t9RulNDi2bS3vaFpD8LmI0
.
✍ Written by โปรแกรมเมอร์ไทย (Thai Programmer)
deep learning application — best answer from the 大學生 BIG Student Facebook page
【 Cross-Domain Growth: How Can Someone Without a CS Background Become a Software Engineer? 】
購票連結:https://www.blink.com.tw/event/1437/
Coming from a non-CS background, do you want to cross over into software engineering?
Cross-domain talent is in high demand and programming has become essential knowledge; even unrelated majors now need to build coding skills.
If you want more than a passing familiarity with a programming language and hope to make it your career, this talk will set you on the path to becoming a software engineer.
This talk features two speakers who did not study computer science but now work as software engineers: one went from a STEM background to #DeepLearningDataScientist, the other from psychology to #FrontEndDeveloper. They will share their cross-domain learning experiences.
If you are considering #SwitchingToEngineering, #WantToLearnToCodeButDontKnowWhereToStart, or are #CuriousAboutAnEngineersDailyLife, this talk is for you!
--------------------------------------------------------------
This talk series, "Cross-Domain Growth," invites cross-domain learners from different professions and industries to share how they grew across fields and taught themselves, broadening participants' vision of their own futures.
Alongside it are the "Career Peek" series, centered on day-to-day job content, and the "Industry Decoded" series, on where industries stand and where they are headed; explore your career and uncover more possibilities with Blink.
【Topics】
This series focuses on #HowToLearnAcrossDomains; the planned content includes:
How the speakers became software engineers, and the journey along the way
A software engineer's daily work
Essential skills and tools for software engineers
How to self-study your way into software engineering: how do you learn a programming language from zero? Recommended resources and methods
Advice for aspiring engineers: suitable personality traits, and how to prepare a résumé and portfolio
【Speakers】
Chang Rong Ke 柯長榮 / Deep Learning Data Scientist at a well-known IC design company
Not a CS or MIS major, he rarely wrote code as a student. After a master's degree in a STEM field, he worked across semiconductor process integration, panel R&D, packaging-and-test simulation, patent engineering, and iOS app development before realizing his interest and strengths lay in algorithms; in 2016 he formally moved into machine learning / deep learning / computer vision, where he has worked ever since.
Through self-study, and after four-plus years of rapid growth and job changes, he is now a deep learning data scientist at a US-listed IC design company, mainly helping to develop a Deep learning accelerator (AI ASIC) and to implement models on edge devices powered by the company's own chips.
陳柏融 / Web Application Developer at Jubo 智齡科技
He studied psychology through university and graduate school and earned a clinical psychologist license. One day he suddenly realized that chatting with a computer felt easier than talking to people; remembering the video-game cheat-sharing website he built as a kid, he stumbled onto the path of front-end development.
Moving from psychology to computer science, and then from computer science back to human connection, he found that both fields are about clarifying people's needs and solving everyday problems, and he believes information technology can make people's lives happier and more convenient.
【Event Details】
Date | 2020/06/13 (Sat)
Time | 14:00 – 16:00
Format | Online talk (held as a Zoom webinar)
【Schedule】
13:30 - 14:00 Doors open
14:00 - 14:10 Introduction
14:10 - 14:50 Chang Rong Ke 柯長榮 / Deep Learning Data Scientist at a well-known IC design company
14:50 - 15:30 陳柏融 / Jubo 智齡科技 Web Application Developer
15:30 - 16:00 Panel discussion + Q&A
購票連結:https://www.blink.com.tw/event/1437/
deep learning application — best post from the 國立陽明交通大學電子工程學系及電子研究所 (NYCU Department of Electronics) Facebook page
【Talk】2019/11/19 (Tue) @ Engineering Building 4, Room 816 (智易空間): Prof. Geoffrey Li (Georgia Tech, USA) and Prof. Li-Chun Wang (NCTU, Taiwan) on "Deep Learning based Wireless Resource Allocation / Deep Learning in Physical Layer Communications / Machine Learning Interference Management"
The IBM Center has invited Prof. Geoffrey Li (Georgia Tech, USA) and Prof. Li-Chun Wang (NCTU, Taiwan) to speak; interested faculty and students are welcome to register!
Title: Deep Learning based Wireless Resource Allocation / Deep Learning in Physical Layer Communications / Machine Learning Interference Management
Speakers: Prof. Geoffrey Li and Prof. Li-Chun Wang
Time: 2019/11/19 (Tue) 9:00 ~ 12:00
Venue: NCTU Engineering Building 4, Room 816 (智易空間)
Registration link: https://forms.gle/vUr3kYBDB2vvKtca6
How to register:
Fee (includes handouts, lunch, and refreshments):
1. Fee: (1) NCTU students free; external students NT$300/person (2) industry participants and faculty NT$1,500/person
2. Capacity: 60, admitted in order of completed registration (registration counts as complete once payment is made)
※ Registration and payment:
1. Register: fill in your details at the registration link
2. Pay:
(1) In person at Room 813, Engineering Building 4, NCTU (please call ahead before coming)
(2) Or by bank transfer:
Account name: 曾紫玲 (Cathay United Bank, Hsinchu Science Park branch 013)
Account number: 075506235774 (Cathay United Bank, Hsinchu Science Park branch 013)
After transferring, please provide your name, the transfer time, and the last five digits of the transferring account for reconciliation
※ Payment receipts will be issued on the day of the talk
Contact: 曾紫玲, Tel: 03-5712121 ext. 54599, Email: tzuling@nctu.edu.tw
Abstract:
1.Deep Learning based Wireless Resource Allocation
【Abstract】
Judicious resource allocation is critical to mitigating interference, improving network efficiency, and ultimately optimizing wireless network performance. The traditional wisdom is to explicitly formulate resource allocation as an optimization problem and then exploit mathematical programming to solve it to a certain level of optimality. However, as wireless networks become increasingly diverse and complex, such as high-mobility vehicular networks, the current design methodologies face significant challenges and thus call for rethinking of the traditional design philosophy. Meanwhile, deep learning represents a promising alternative due to its remarkable power to leverage data for problem solving. In this talk, I will present our research progress in deep learning based wireless resource allocation. Deep learning can help solve optimization problems for resource allocation or can be directly used for resource allocation. We will first present our research results in using deep learning to solve linear sum assignment problems (LSAP) and reduce the complexity of mixed integer non-linear programming (MINLP), and introduce graph embedding for wireless link scheduling. We will then discuss how to use deep reinforcement learning directly for wireless resource allocation with application in vehicular networks.
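As a point of reference for the LSAP part of the abstract: the linear sum assignment problem has a classical exact solver (the Hungarian algorithm), which a learned model would approximate at lower inference cost. A minimal baseline sketch with SciPy, using a made-up 3×3 cost matrix (e.g., the cost of assigning channels to users):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost of assigning each of 3 channels (rows) to 3 users (columns)
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

# Exact solution via the Hungarian algorithm: one channel per user,
# minimizing the total assignment cost
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
# Optimal assignment: channel 0 -> user 1, channel 1 -> user 0,
# channel 2 -> user 2, with total cost 1 + 2 + 2 = 5
```

The motivation for a deep learning substitute is that the exact solver's cubic complexity becomes a bottleneck when assignments must be recomputed every few milliseconds in fast-varying networks.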
2.Deep Learning in Physical Layer Communications
【Abstract】
It has been demonstrated recently that deep learning (DL) has great potential to break the bottleneck of conventional communication systems. In this talk, we present our recent work on DL in physical layer communications. DL can improve the performance of each individual (traditional) block in a conventional communication system or jointly optimize the whole transmitter or receiver. Therefore, we can categorize the applications of DL in physical layer communications into those with and without block processing structures. For DL based communication systems with block structures, we present joint channel estimation and signal detection based on a fully connected deep neural network, model-driven DL for signal detection, and some experimental results. For those without block structures, we present our recent endeavors in developing end-to-end learning communication systems with the help of deep reinforcement learning (DRL) and generative adversarial networks (GANs). At the end of the talk, we outline some potential research topics in the area.
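To give a rough feel for the block-structured case, the forward pass of a small fully connected detector might look like the sketch below. All sizes and weights are invented (and untrained) for illustration only; in the work described above, such a network would be trained on pilot and label data to map received samples to bit estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy input: 8 complex received samples flattened to 16 real values
received = rng.standard_normal(16)

# One hidden layer of 32 units; output = soft estimates for 8 bits
W1 = rng.standard_normal((32, 16)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((8, 32)) * 0.1
b2 = np.zeros(8)

hidden = relu(W1 @ received + b1)
soft_bits = 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))  # sigmoid -> (0, 1)
hard_bits = (soft_bits > 0.5).astype(int)               # threshold to 0/1
```

The point of the "with block structure" line of work is that this one learned block replaces the hand-designed channel-estimation and detection stages while the rest of the receiver chain stays conventional.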
3.Machine Learning Interference Management
【Abstract】
In this talk, we discuss how machine learning algorithms can address the performance issues of high-capacity ultra-dense small cells in an environment with dynamic traffic patterns and time-varying channel conditions. We introduce a bi-adaptive self-organizing network (Bi-SON) to exploit the power of data-driven resource management in ultra-dense small cells (UDSC). On top of the Bi-SON framework, we further develop an affinity propagation unsupervised learning algorithm to improve energy efficiency and reduce interference of the operator-deployed and the plug-and-play small cells, respectively. Finally, we discuss the opportunities and challenges of reinforcement learning and deep reinforcement learning (DRL) in more decentralized, ad-hoc, and autonomous modern networks, such as Internet of Things (IoT), vehicle-to-vehicle, and unmanned aerial vehicle (UAV) networks.
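Affinity propagation itself is available off the shelf, which makes the clustering step easy to prototype. A minimal sketch with scikit-learn, grouping toy small-cell positions (the coordinates are invented; the abstract's actual algorithm operates inside the Bi-SON framework, not on raw positions like this):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(42)

# Toy 2-D positions of 20 small cells forming two well-separated groups
cells = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(10, 2)),
    rng.normal(loc=5.0, scale=0.3, size=(10, 2)),
])

# Affinity propagation picks exemplar cells and assigns the rest to them,
# without needing the number of clusters in advance
ap = AffinityPropagation(random_state=0).fit(cells)
n_clusters = len(ap.cluster_centers_indices_)
```

Not needing to fix the number of clusters up front is what makes affinity propagation attractive for plug-and-play small cells, whose count the operator does not control.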
Bio:
Dr. Geoffrey Li is a Professor with the School of Electrical and Computer Engineering at Georgia Institute of Technology. He was with AT&T Labs – Research for five years before joining Georgia Tech in 2000. His general research interests include statistical signal processing and machine learning for wireless communications. In these areas, he has published around 500 refereed journal and conference papers in addition to over 40 granted patents. His publications have been cited over 37,000 times, and he has been listed as one of the World's Most Influential Scientific Minds, also known as a Highly Cited Researcher, by Thomson Reuters almost every year since 2001. He has been an IEEE Fellow since 2006. He received the 2010 IEEE ComSoc Stephen O. Rice Prize Paper Award, the 2013 IEEE VTS James Evans Avant Garde Award, the 2014 IEEE VTS Jack Neubauer Memorial Award, the 2017 IEEE ComSoc Award for Advances in Communication, and the 2017 IEEE SPS Donald G. Fink Overview Paper Award. He also won the 2015 Distinguished Faculty Achievement Award from the School of Electrical and Computer Engineering, Georgia Tech.
Li-Chun Wang (M'96 -- SM'06 -- F'11) received his Ph.D. degree from the Georgia Institute of Technology, Atlanta, in 1996. From 1996 to 2000, he was with AT&T Laboratories, where he was a Senior Technical Staff Member in the Wireless Communications Research Department. Currently, he is the Chair Professor of the Department of Electrical and Computer Engineering and the Director of the Big Data Research Center of National Chiao Tung University in Taiwan. Dr. Wang was elected an IEEE Fellow in 2011 for his contributions to cellular architectures and radio resource management in wireless networks. He was a co-recipient of the IEEE Communications Society Asia-Pacific Board Best Award (2015), the Y. Z. Hsu Scientific Paper Award (2013), and the IEEE Jack Neubauer Best Paper Award (1997). He won the Distinguished Research Award of the Ministry of Science and Technology in Taiwan twice (2012 and 2016). He is currently an associate editor of the IEEE Transactions on Cognitive Communications and Networking. His current research interests are in the areas of software-defined mobile networks, heterogeneous networks, and data-driven intelligent wireless communications. He holds 23 US patents, has published over 300 journal and conference papers, and co-edited the book “Key Technologies for 5G Wireless Systems” (Cambridge University Press, 2017).