Publisher's Synopsis
Series Foreword
Preface
1. Introduction: Optimization and Machine Learning
2. Convex Optimization with Sparsity-Inducing Norms
3. Interior-Point Methods for Large-Scale Cone Programming
4. Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey
5. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, I: General Purpose Methods
6. First-Order Methods for Nonsmooth Convex Large-Scale Optimization, II: Utilizing Problem's Structure
7. Cutting-Plane Methods in Machine Learning
8. Introduction to Dual Decomposition for Inference
9. Augmented Lagrangian Methods for Learning, Selecting, and Combining Features
10. The Convex Optimization Approach to Regret Minimization
11. Projected Newton-Type Methods in Machine Learning
12. Interior-Point Methods in Machine Learning
13. The Trade-offs of Large-Scale Learning
14. Robust Optimization in Machine Learning
15. Improving First and Second-Order Methods by Modeling Uncertainty
16. Bandit View on Noisy Optimization
17. Optimization Methods for Sparse Inverse Covariance Selection
18. A Pathwise Algorithm for Covariance Selection