Learner reviews and feedback for Prediction and Control with Function Approximation by University of Alberta
Course Overview
Top Reviews
AC
Dec 1, 2019
Well paced and thoughtfully explained course. Highly recommended for anyone looking to build a solid grounding in Reinforcement Learning. Thank you Coursera and Univ. of Alberta for the masterclass.
WP
Apr 11, 2020
Difficult but excellent and impressive. Human beings are incredible for creating such ideas. This course shows a way toward the state where all such ingenious ideas will be created by self-learning algorithms.
Reviews 101 - 125 of 148 for Prediction and Control with Function Approximation
By Ignacio O
•Nov 29, 2019
Really good, I learned a lot.
By FREDERIC N
•May 2, 2020
Great speakers and content!
By Majd W
•Feb 1, 2020
Very practical course.
By 李谨杰
•Jun 17, 2020
Excellent class !!!
By Arun S
•May 20, 2023
I really liked it
By Mohamed A
•Sep 11, 2021
very good course
By Hugo T K
•Aug 18, 2020
Excellent course.
By Murtaza K B
•Apr 25, 2020
Excellent course
By Ivan
•Aug 30, 2020
Just brilliant
By Juan “ L
•Aug 3, 2022
great course!
By Oriol A L
•Nov 19, 2020
Very good!
By Ben - C L Y
•Jul 8, 2020
Very good!
By Nithiroj T
•Dec 21, 2023
very good
By Jialong F
•Feb 22, 2021
gooood!
By Justin O
•May 17, 2021
Great
By Artod
•Feb 26, 2021
Super
By Ananthapadmanaban, J
•Jul 19, 2020
I am disappointed that policy gradients are only introduced in the last week of the 3rd course. The instructors need to understand that 12 weeks is too much for an introduction before starting a good project to implement the concepts with the hope of understanding them better (course 4). Policy gradients should have been introduced in week 3/4 of course 2 itself. The content before that should be made more efficient (4 weeks to understand up to Q-learning/Sarsa and 2 weeks to understand function approximation should be enough). I realized after course 2 that Andrew Ng has 3/4 videos on RL in the recently released ML class from Stanford. I have yet to go through them, but I feel they may explain these topics faster with the same amount of rigour. However, the Stanford class assignments are not public, which makes this course still useful because of its assignments. That said, thanks to the instructors for this course.
By PHILIP C
•Jun 18, 2021
This is a good course, but I continue to be disappointed by the lack of detail in the lectures. I fill in the detail with the DeepMind lectures on Reinforcement Learning by David Silver. The programming assignments are difficult not because they are conceptually challenging, but because the data structures are not well explained and the conceptual connections between the equations in the book and the code structures used in the implementation are not clear. It's like being given somebody's not-very-well-documented code and trying to figure out what they were thinking. All that said, I think the course offers a lot and I have learned a lot from it so far.
By Luiz C
•Oct 3, 2019
Almost perfect, except for two fairly minor objections:
1/ the learning content is quite unbalanced across the 4 weeks. The initial weeks of the course are well sized, whereas week #3 and week #4 feel a touch light. It feels like the instructors rushed to make the course available online and didn't have time to put as much content as they wished into the last weeks of the course.
2/ there are too many typos in some notebooks (specifically the notebook of week #3). It gives the impression it was made in a rush and nobody read over it again. Besides, there currently seems to be some issue with this assignment.
By Luka K
•Jan 4, 2021
It is a good introduction to prediction and control with function approximation. Combining the book and the instructors results in a simple and nice explanation. What keeps it from a perfect grade are the examples. It would be nice if there were more examples, with a more detailed explanation of why and how each example works. For instance, sometimes the instructors would just say that the robot can use this, and that is mostly it. The other thing is more interactive project work. For example, I would like to see how my pendulum is moving after N episodes. I would feel more satisfied then.
By Dmitry S
•Jan 5, 2020
Definitely a course to take to learn the ropes of RL. For this course, it is critical to follow the math. I'd love to give 5 stars to this course but will take one away, since the course could benefit a lot if the math were made a bit simpler to follow. The book referenced in the course is excellent and does help, but still, some more pedagogical repetition/rephrasing, simplification of notation, and a slightly slower pace of narration would make the course even better. Having said that, this seems to be the best course available at this time. Many thanks to the tutors.
By Hadrien H
•Feb 4, 2021
I really appreciate that this course offers more hands-on exercises and assignments. They really helped a lot in understanding the theory. As the book gets deeper into concepts and complexity, so does the class, which is nice, but I felt that the depth and complexity of the online class do not really keep up with the book content, not only by skipping chapters but also by sometimes staying at too high a level. Still a very good course, and really accessible, entertaining, and resourceful material and instructors.
By Stevie W
•May 11, 2021
It's a great course, and they cover the basics of function approximation. The instructors were clear and knowledgeable, and the content that was covered was solid.
However, they skip some content that I feel is really important for modern RL, specifically the "deadly triad" regarding the convergence of off-policy approximate TD methods. They also don't discuss or link to papers on PPO or other recent advancements in RL, and I was hoping to learn more about those in particular.
By Narendra G
•Jul 19, 2020
This course is important for those who want not just to learn RL for its own sake but to dive into various topics currently in research (for that, reading the textbook is of utmost importance). This specialization would have been even better if it had included some more complex topics from the textbook. To fully comprehend all the topics, guidance from experts is necessary.
By Nicolas M
•Oct 24, 2020
Very interesting course: I have learned many things. A translation to other languages would be great: sometimes I can't memorize everything as well as I would if it were in my mother tongue.
Using another paper to study (Experiments with Reinforcement Learning in Problems with Continuous State and Action Space) was a great idea that should be done in other courses.