Topics in Machine Learning
(Fall 2016)



[Course description] [Announcements] [Lectures] [Assignments] [Reading material]

Course description

Course staff: Ohad (lecturer), Itay Safran (teaching assistant)

Time and Location: Tuesdays, 13:15-16:00, Ziskind building, room 1

Syllabus: This course will provide a self-contained introduction to some of the actively-researched areas in machine learning today. It will cover theoretical principles and challenges as well as practical algorithms. The focus will be on supervised and discriminative learning, where the goal is to learn good predictors from data while making few or no probabilistic assumptions. Along the way, we will introduce and use tools from probability, game theory, convex analysis and optimization.

Prerequisites: There are no formal prerequisites. However, the course requires mathematical maturity, and students are expected to be familiar with linear algebra and probability, as taught in undergraduate computer science or math programs.

Announcements

  • (03/3) Grades for those who took the first exam session (moed A) have been reported to Feinberg.
  • (24/2) Clarification regarding the past exams posted below: Note that some of the questions relate to homework questions from those years, and were designed assuming students had answered closely-related questions as part of the course.
  • (23/2) Due to unforeseen circumstances, the Q&A session planned for tomorrow (February 24) at 2PM is postponed to 3:30PM, and will be held by Ohad instead of Itay.
  • (19/2) Assignment 5 has been graded and can be picked up from Itay Safran's mailbox.
  • (18/2) Previous exams and solutions: 2016 (solution), 2017 (solution)
  • (18/2) To help you prepare for the exam, there will be a Q&A session on Sunday, February 24th, 2-4PM, in room 155. Please send any potential questions to Itay by Friday, February 22nd.
  • (18/2) Assignment 4 has been graded and can be picked up from Itay Safran's mailbox.
  • (3/2) The due date of assignment 5 is extended to Thursday, February 7th (assignments can be placed in Itay Safran's mailbox).
  • (3/2) Clarification regarding assignment 5, question 3: This question concerns fully-connected feed-forward neural networks with a fixed activation, where the output neuron has a linear activation (so it computes a linear function of its input). In the first part you are asked to state explicitly the architecture and weights of the network you construct, whereas in part (b) of the second part a brief description of the architecture and what it computes suffices (no need to specify the neurons' weights). A schematic sketch of such a network appears after this list.
  • (3/2) Clarification regarding assignment 5, question 1: First construct a regret bound holding in expectation for the points w_t dictated by online gradient descent (using the unbiased gradient estimates), and then use it to obtain a regret bound holding in expectation with respect to the points the algorithm actually plays. This uses the fact that the algorithm deviates from online gradient descent with probability at most epsilon at every iteration. A sketch of the underlying update appears after this list.
  • (22/1) Assignment 5 uploaded, due February 5.
  • (8/1) Slightly modified the formulation of question 1, assignment 4, to clarify that the regret should hold with respect to any vector u of norm at most B.
  • (1/1) Assignment 4 uploaded, due January 15th.
  • (1/1) Assignment 2 has been graded and can be picked up from Itay Safran's mailbox.
  • (12/12) Assignment 1 has been graded and can be picked up from Itay Safran's mailbox.
  • (12/12) Assignment 3 uploaded, due January 1st. The first two problems require material that will be taught in next week's class (December 18).
  • The deadline for assignment 2 is extended to Thursday, December 13th, 3:30PM (assignments can be placed in Itay Safran's mailbox on the 2nd floor of the Ziskind building).
  • Assignment 2 uploaded, due December 11th.
  • Updated the guidance in question 3 of assignment 1. You should consider r' in [r_m, r^*] in the first part, and instances at distance less than r_m in the second.
  • Assignment 1 uploaded, due November 15th.
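
Regarding the assignment 5, question 3 clarification above: the following is a minimal sketch of evaluating a fully-connected feed-forward network with a fixed hidden activation and a linear output neuron. The function names, the choice of ReLU as the fixed activation, and the parameter layout are illustrative assumptions only, not part of the assignment.

    import numpy as np

    def relu(x):
        # Fixed nonlinearity for the hidden layers (ReLU is chosen here
        # purely for illustration; the question fixes some activation).
        return np.maximum(x, 0.0)

    def feedforward(x, weights, biases, activation=relu):
        # weights[i], biases[i] parameterize layer i as an affine map.
        # Every hidden layer applies the fixed activation; the output
        # layer is linear, so the network outputs a linear function of
        # the last hidden layer's values.
        h = x
        for W, b in zip(weights[:-1], biases[:-1]):
            h = activation(W @ h + b)
        W_out, b_out = weights[-1], biases[-1]
        return W_out @ h + b_out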
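
Regarding the assignment 5, question 1 clarification: here is a minimal sketch of projected online gradient descent driven by unbiased gradient estimates, assuming Euclidean projection onto a ball of radius B. The names grad_estimate and eta are hypothetical and only illustrate the update the hint refers to.

    import numpy as np

    def project_to_ball(w, radius):
        # Euclidean projection onto the ball {u : ||u|| <= radius}.
        norm = np.linalg.norm(w)
        return w if norm <= radius else (radius / norm) * w

    def online_gradient_descent(grad_estimate, T, dim, B, eta):
        # grad_estimate(t, w) returns an unbiased estimate of a
        # (sub)gradient of the round-t loss at w; the regret bound in
        # the question then holds in expectation for the iterates below.
        w = np.zeros(dim)
        iterates = []
        for t in range(T):
            iterates.append(w.copy())
            g = grad_estimate(t, w)
            w = project_to_ball(w - eta * g, B)
        return iterates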

Assignments

Reading material

The course does not follow any specific text, but much of the first two-thirds is covered by the following:

Additional sources include: