Student reviews and feedback for Prediction and Control with Function Approximation, offered by the University of Alberta
Course Overview
Top Reviews
AC
Dec 1, 2019
Well paced and thoughtfully explained course. Highly recommended for anyone looking to build a solid grounding in Reinforcement Learning. Thank you Coursera and Univ. of Alberta for the masterclass.
WP
Apr 11, 2020
Difficult but excellent and impressive. It is incredible that human beings create such ideas. This course points toward a future in which such ingenious ideas will be created by self-learning algorithms.
126 - 149 of 149 Reviews for Prediction and Control with Function Approximation
By Nicolas M
•Oct 24, 2020
Very interesting course: I have learned many things. A translation into other languages would be great: sometimes I can't retain everything as well as I would if it were in my mother tongue.
Using another paper for study (Experiments with Reinforcement Learning in Problems with Continuous State and Action Spaces) was a great idea that should be adopted in other courses.
By Lucas O S
•Jan 21, 2020
Great course, deserves 5 stars. It is a good complement to the book, adding interesting visualizations that help parse the content. The only issues were in the exercises: there are technical problems with the notebook platform, which keeps disconnecting from time to time with no warning, and you lose your unsaved work (it seems like token expiration).
By Anirban D
•Jul 24, 2022
Excellent instructors, and good concepts and assignments that help you learn by doing. My only reason for giving a 4 is that this course uses an internal tool (RL Glue), and hence none of the Jupyter notebooks can be run outside the platform. A well-known reinforcement learning framework like tensorforce should perhaps have been used instead.
By 남상혁
•Jan 17, 2021
Very good lectures! I learned a lot about function approximation, such as linear approximation, neural networks, etc. However, the video lectures are not as detailed as the textbook. If you only listen to the lectures and skip the reading, you might not understand many of the concepts.
By Hugo V
•Jan 15, 2020
It was great to apply what I had learned from the book, but it was hard to find my mistakes in the course 3 notebook. I also misread the alphas in the course 4 notebook at first glance: their indices look like exponents. Aside from that, a great course.
By Bhavesh A
•Dec 17, 2024
Everything about this course is great except the policy gradient part: the need for policy gradients, and why we should use them when we can use value function approximation, was not explained clearly (to compute policy gradients, we have to compute a function approximation anyway).
By Amit J
•Mar 16, 2021
Lecture quality could have been better. The lectures come across as rehearsed monologues rather than a class where a teacher is trying (hard) to explain a concept. If one has to wait for the assignment to fully grasp the material, it doesn't reflect too well on the instructors.
By Lik M C
•Jan 18, 2020
The course is still good, but the assignments are not as good as in courses 1 and 2. The content is getting more complicated and more interesting, but the assignments are relatively simple.
By Mark P
•Aug 16, 2020
Solid intro course. I wish we had covered more with neural nets; the neural net equations used very non-standard notation. I also wish the assignments were a little more creative. Too much grid world.
By Anton P
•Apr 12, 2020
There is a lot of material covered in the course. Be aware that the pace picks up considerably from the first two courses. That said, it is a worthwhile course to take.
By Vladyslav Y
•Sep 8, 2020
I wish agents based on visual input (using CNNs) had been included in the course. But overall it was really great!
By Sharang P
•Feb 27, 2020
I would have liked a more detailed explanation of some of the assignments and of how state values are obtained with tile coding, but overall a great experience!
By Jerome b
•Apr 9, 2020
Great course, based on the reference book about reinforcement learning. A must for anyone interested in machine learning.
By Rajesh M
•Apr 17, 2020
I loved the course videos and programming assignments. The only suggestion would be to go a little deeper in the videos.
By Muhammed A Ç
•Sep 4, 2021
The programming exercises are not self-explanatory, but the instructors explain the concepts perfectly.
By Pouya E
•Dec 2, 2020
Great overall. The content on policy gradients could be expanded; some details were delivered hastily.
By Rish K
•May 19, 2020
The average reward and differential return need to be explained more thoroughly.
By Ramaz J
•Oct 17, 2019
The course is great! Providing some slides would be helpful so the material isn't forgotten.
By Charles X
•Jun 21, 2021
Gets hard to understand.
By Quarup B
•Jul 25, 2021
The content is great, but the text is super dense -- a slow read for me. The lectures are much clearer, although also a bit dense and quick-paced for retaining the information long term (especially if one wishes to skip the reading).
By Prashant M
•Jun 7, 2020
Great course material, but you need to read the RL book throughout the course. Also, the assignments are a bit difficult; knowledge of OOP concepts is mandatory.
By Justin N
•Mar 31, 2020
Lectures are pretty good, but the programming exercises are extremely easy. All of the problems are rather contrived as well.
By Yassine B
•May 4, 2020
I think the course should dedicate more time to deep neural networks and not focus so much on coarse and tile coding!
By Bernard C
•May 24, 2020
Course was good, but assignments were not well constructed. Problems with the unit tests were frequent.