In machine learning and exploratory data analysis, a major goal is the automatic and efficient extraction of knowledge from data. Many important methods for data analysis are based on eigenproblems. While linear eigenproblems are standard tools in many applications, e.g., in the form of principal component analysis (PCA) or spectral clustering, they are limited in their modeling capabilities. In this talk, I will discuss nonlinear eigenproblems, which significantly extend the modeling freedom; in particular, the important principles of sparsity and robustness can be incorporated. After introducing the framework, I will present two important examples: sparse PCA and our recent results on tight relaxations of balanced graph cuts via nonlinear eigenproblems. Moreover, I will introduce an efficient algorithm, a generalization of the inverse power method, for the resulting nonconvex and nonsmooth optimization problems.
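To give a concrete feel for how sparsity can be built into an eigenproblem, here is a minimal sketch of sparse PCA via truncated power iteration: an ordinary power step on the covariance matrix followed by hard-thresholding to the `k` largest-magnitude entries. This is an illustrative textbook-style variant, not the nonlinear inverse power method presented in the talk; the function name, the sparsity parameter `k`, and the synthetic data are assumptions for the example.

```python
import numpy as np

def sparse_pca_truncated_power(X, k, n_iter=200, seed=0):
    """Leading sparse principal component via truncated power iteration.

    Illustrative sketch (not the talk's nonlinear inverse power method):
    each step multiplies by the covariance matrix, keeps only the k
    largest-magnitude entries, and renormalizes. k sets the sparsity level.
    """
    rng = np.random.default_rng(seed)
    C = X.T @ X                          # (unnormalized) covariance matrix
    v = rng.standard_normal(C.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = C @ v
        # hard-threshold: zero out all but the k largest-magnitude entries
        small = np.argsort(np.abs(w))[:-k]
        w[small] = 0.0
        v = w / np.linalg.norm(w)
    return v

# Example: variance concentrated on the first 3 coordinates
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
X[:, :3] += 3 * rng.standard_normal((500, 1))  # shared strong signal
v = sparse_pca_truncated_power(X, k=3)
print(np.nonzero(v)[0])
```

The thresholding step is what turns the linear eigenproblem into a nonconvex, nonsmooth one: the iteration no longer corresponds to maximizing a smooth Rayleigh quotient, which is exactly the regime the talk's nonlinear eigenproblem framework addresses.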