Offerings in the past two years emphasized issues with fitting highly nonlinear models, reducing noise, framing hypotheses for investigation, and using the bootstrap technique for significance testing and for verifying theoretical error bars. During last year's offering, a further main issue with the intended extension of the chess model emerged, which led to the article https://rjlipton.wordpress.com/2017/05/23/stopped-watches-and-data-analytics/ in full technical g(l)ory. I quantified this issue further over the summer, but it is still not resolved. What emerges is the question of when maximum likelihood estimation (MLE) is appropriate versus other fitting methods. Hence this term's offering will lead off with MLE, drawing on how it is covered in courses such as CSE574 but in new contexts. As always, the seminar can branch out from there according to the wishes and interests of students, for presentations and more. This can extend to deep learning and the recent exploits of AlphaGo Zero.
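As a purely illustrative taste of the MLE theme, the sketch below fits a one-parameter softmax model of move choice by maximizing log-likelihood with a grid search. The `positions` data, the deficit values, and the softmax form are all hypothetical stand-ins chosen for brevity; they are not the seminar's actual chess model.

```python
import math

# Toy illustration (NOT the seminar's actual model): each position has a
# list of value deficits, one per legal move, measured against the best
# move (deficit 0.0). A one-parameter softmax p_i proportional to
# exp(-d_i / s) turns deficits d_i into move probabilities; s is a
# "sensitivity" parameter we estimate by maximum likelihood.

positions = [
    # (deficits of the legal moves, index of the move actually played)
    ([0.0, 0.3, 0.9], 0),
    ([0.0, 0.1, 0.5, 1.2], 1),
    ([0.0, 0.6], 0),
]

def log_likelihood(s, data):
    """Sum of log-probabilities of the played moves under sensitivity s."""
    total = 0.0
    for deficits, played in data:
        weights = [math.exp(-d / s) for d in deficits]
        z = sum(weights)
        total += math.log(weights[played] / z)
    return total

# MLE by grid search over candidate sensitivities.
grid = [0.05 * k for k in range(1, 101)]
s_hat = max(grid, key=lambda s: log_likelihood(s, positions))
print("fitted s:", s_hat, "log-likelihood:", log_likelihood(s_hat, positions))
```

In practice one would use a proper optimizer rather than a grid, but the grid keeps the likelihood-maximization idea in plain view.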
As with last year, the requirements will be participation in discussions, learning and applying computational statistical and charting tools, and presenting a "mini-project" or paper (likely in teams). No background in chess is assumed (enough will be covered early on), and there are no other prerequisites.
The seminar aims to relate this research to other machine learning applications that have been researched in the Department, and to explore issues in their common methodology. This includes comparing the many different statistical fitting methods (Bayesian, maximum-likelihood, simple frequentist, and more) that can be used and judged within the same model. The basics of chess and chess programs will be covered in my initial series of lectures.
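To make that comparison concrete, here is a hypothetical mini-example (not drawn from the seminar materials) estimating a single Bernoulli parameter three ways: a simple-frequentist estimate that for this model coincides with the MLE, a Bayesian posterior mean under a uniform prior, and a bootstrap standard error of the kind used for error-bar verification. The sample size, seed, and true parameter are arbitrary choices.

```python
import math
import random

# Hypothetical setup: n Bernoulli(p) observations with an arbitrary p.
random.seed(0)
true_p = 0.7
sample = [1 if random.random() < true_p else 0 for _ in range(200)]
k, n = sum(sample), len(sample)

# Simple frequentist estimate; for Bernoulli data this is also the MLE.
p_mle = k / n

# Bayesian estimate: posterior mean under a uniform Beta(1, 1) prior.
p_bayes = (k + 1) / (n + 2)

# Bootstrap: resample the data with replacement to get a standard error.
B = 1000
boots = []
for _ in range(B):
    resample = [random.choice(sample) for _ in range(n)]
    boots.append(sum(resample) / n)
mean_b = sum(boots) / B
se_boot = math.sqrt(sum((b - mean_b) ** 2 for b in boots) / (B - 1))

print("MLE:", p_mle, "Bayes:", p_bayes, "bootstrap SE:", round(se_boot, 4))
```

Even this toy case shows the methods agreeing closely on easy data while differing in their assumptions, which is the kind of judgment-within-one-model question the seminar will examine in much harder, nonlinear settings.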
Here are a two-page description and a longer overview of the research, the latter with some mathematical details. My homepage links to my public anti-cheating site, papers, talks, a New York Times article, and other pages; students in the seminar will be given access to my private sites where testing is done. The last section of the overview includes some possible seminar topics and projects within this research, but students will be equally welcome to give presentations relating it to machine-learning-related topics they have had in other courses.
Students are expected to participate in discussions and give at least two hours of presentations. Grading is S/U, 1--3 credits.