Multivariate Probability Calibration with Isotonic Bernstein Polynomials

Multivariate probability calibration is the problem of predicting class membership probabilities from the classification scores of multiple classifiers. To achieve better performance, the calibrating function is often required to be coordinate-wise non-decreasing; that is, for every classifier, the higher the score, the higher the predicted probability that the class label is positive. To this end, we propose a multivariate regression method based on shape-restricted Bernstein polynomials. The method is universally flexible: it can approximate any continuous calibrating function to any specified error as the polynomial degree increases to infinity. It is also universally consistent: the estimated calibrating function converges to the true continuous calibrating function as the training size increases to infinity. Our empirical study shows that the proposed method achieves better calibration performance than benchmark methods. Related: IJCAI-20 paper.
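The shape restriction behind this approach can be illustrated in one dimension: a Bernstein polynomial with non-decreasing coefficients is itself non-decreasing on [0, 1], which is exactly the monotonicity needed for calibration. The sketch below (with illustrative, hand-picked coefficients, not the fitted estimator from the paper) shows this property:

```python
from math import comb

def bernstein_basis(n, k, x):
    """Bernstein basis polynomial B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

def bernstein_poly(coeffs, x):
    """Evaluate sum_k coeffs[k] * B_{k,n}(x), with degree n = len(coeffs) - 1."""
    n = len(coeffs) - 1
    return sum(c * bernstein_basis(n, k, x) for k, c in enumerate(coeffs))

# Non-decreasing coefficients => non-decreasing polynomial on [0, 1]:
# this is the shape restriction used for monotone calibration.
coeffs = [0.0, 0.1, 0.4, 0.7, 1.0]
xs = [i / 100 for i in range(101)]
ys = [bernstein_poly(coeffs, x) for x in xs]
assert all(a <= b for a, b in zip(ys, ys[1:]))        # monotone on [0, 1]
assert abs(ys[0] - 0.0) < 1e-12 and abs(ys[-1] - 1.0) < 1e-12  # endpoints equal c_0, c_n
```

In the multivariate setting, the same idea applies coordinate-wise via tensor-product Bernstein bases with ordered coefficients.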

Clustering Partial Lexicographic Preference Trees

Due to the preordering nature of PLP-trees, we define a variant of the Kendall τ distance metric to compute distances between PLP-trees for clustering. Extending previous work by Li and Kazimipour (2018), we propose a polynomial-time algorithm, PlpDis, to compute such distances, and present empirical results comparing it against a brute-force baseline. Based on PlpDis, we apply various distance-based clustering methods to PLP-trees learned from a car evaluation dataset. Our experiments show that hierarchical agglomerative nesting (AGNES) is the best choice for clustering PLP-trees, and that the single-linkage variant of AGNES is the best fit for clustering large numbers of trees. Related: FLAIRS-33 paper.
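For context, the classical Kendall τ distance (which the PLP-tree variant generalizes to preorders) counts the item pairs that two rankings order differently. A minimal sketch on strict total orders:

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Number of item pairs ordered differently by the two rankings.

    rank_a, rank_b: dicts mapping each item to its rank (lower = preferred).
    Assumes strict total orders over the same items.
    """
    items = list(rank_a)
    return sum(
        1
        for x, y in combinations(items, 2)
        if (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y]) < 0
    )

a = {"car1": 0, "car2": 1, "car3": 2}
b = {"car1": 2, "car2": 1, "car3": 0}   # the reverse ranking
assert kendall_tau_distance(a, a) == 0  # identical rankings
assert kendall_tau_distance(a, b) == 3  # all 3 pairs discordant
```

The challenge PlpDis addresses is computing such a distance in polynomial time directly from the tree representations, since PLP-trees induce preorders over exponentially large combinatorial domains.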

Human-in-the-Loop Learning for Decision Analysis

We design and implement a decision analysis system that uses human-in-the-loop learning to learn interpretable predictive decision models (e.g., lexicographic preference trees and conditional preference networks), providing insight into agents' decision-making processes. Related: FLAIRS-32 paper.

Smart Transportation

We designed and developed a smart multi-modal transportation planner that allows users to incorporate user-specific metrics (e.g., crime rates and crash data), to specify constraints as a theory in linear temporal logic, and to express preferences as a preferential cost function. In the demo, an optimal trip is computed for Alice, who does not have a car but has a bike and wants to bike for at least one and at most two hours. Moreover, she prefers biking and public transit over Uber. Related: AAAI Workshop paper.
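The Alice scenario boils down to constrained optimization: filter candidate trips by her constraint, then minimize a preferential cost. The sketch below uses hypothetical trip data and a simple duration bound as a stand-in for the system's actual LTL constraints and cost function:

```python
# Hypothetical trip candidates, each a list of (mode, hours) legs.
trips = [
    [("bike", 0.5), ("bus", 1.0)],     # under 1 hour of biking: infeasible
    [("bike", 1.5), ("train", 0.5)],
    [("uber", 0.8)],                   # no biking at all: infeasible
]

# Illustrative per-hour mode costs (lower = preferred): Alice prefers
# biking and public transit over Uber.
MODE_COST = {"bike": 1, "bus": 2, "train": 2, "uber": 5}

def feasible(trip):
    """Alice's constraint: bike for at least 1 and at most 2 hours."""
    bike_hours = sum(h for mode, h in trip if mode == "bike")
    return 1.0 <= bike_hours <= 2.0

def cost(trip):
    """Preferential cost: sum of mode cost weighted by duration."""
    return sum(MODE_COST[mode] * h for mode, h in trip)

best = min(filter(feasible, trips), key=cost)
assert best == [("bike", 1.5), ("train", 0.5)]
```

In the actual system, feasibility is checked against an LTL theory over the trip's sequence of legs rather than a single numeric bound.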

Preference Learning Library

To facilitate preference learning, we are building a library of various practical preferential datasets useful for conducting preference learning experiments on real-world data.

Preference Learning

We introduced the preference formalism of partial lexicographic preference trees, or PLP-trees, over combinatorial domains of alternatives. We study the problem of passive learning: learning preference models from a set of pairwise preferences between alternatives, called training examples, provided by the user upfront. Specifically, for several classes of PLP-trees, we study how to learn (i) a PLP-tree, preferably of small size, consistent with a dataset of training examples, and (ii) a PLP-tree that correctly orders as many of the examples as possible when the dataset is inconsistent. We then evaluate the predictive power of our model empirically against other ranking systems in the setting of instance ranking, which corresponds to ordinal classification in machine learning.
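To make the target of learning concrete, here is how a simple unconditional lexicographic model (a special case of a PLP-tree, with a single importance order and fixed preferred values) decides a pairwise preference; the features and alternatives are illustrative:

```python
def lex_prefers(importance, preferred, x, y):
    """Decide whether x is strictly preferred to y under a simple
    unconditional lexicographic model.

    importance: features from most to least important;
    preferred:  preferred value for each feature;
    x, y:       alternatives as dicts mapping features to values.
    """
    for f in importance:
        if x[f] != y[f]:
            # The most important feature on which they differ decides.
            return x[f] == preferred[f]
    return False  # x and y agree on every feature in the model

importance = ["safety", "price", "color"]
preferred = {"safety": "high", "price": "low", "color": "red"}
x = {"safety": "high", "price": "high", "color": "blue"}
y = {"safety": "low", "price": "low", "color": "red"}
assert lex_prefers(importance, preferred, x, y)   # safety decides first
assert not lex_prefers(importance, preferred, y, x)
```

Passive learning searches for such a model (in general, a tree whose importance order and preferred values may depend on ancestor features) that is consistent with, or maximally agrees with, the given pairwise examples.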

Social Choice for Combinatorial Domains

When candidates are combinations of values from domains of features, there are far too many of them for humans to express preferences as strict total orders (or votes) over all candidates. However, lexicographic preference trees (LP-trees) often provide compact representations of preferences over such combinatorial domains. Our work focuses on two preference-aggregation problems, the winner problem and the evaluation problem, based on positional scoring rules (such as k-approval and Borda) when preferences are represented as LP-trees. We obtain new computational complexity results for these two problems and provide an empirical analysis in two programming formalisms: answer set programming (ASP) and weighted partial maximum satisfiability (WPM). Related: presentation
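As background, a positional scoring rule such as Borda awards each candidate points based on its position in every vote; the winner maximizes the total. A minimal sketch over explicit rankings (the hard case studied in this work is when each vote is instead an LP-tree over an exponentially large domain):

```python
def borda_winner(votes):
    """Borda rule over strict rankings listed best-first.

    With m candidates, position i (0-based) earns m - 1 - i points.
    Returns the winner and the full score table (ties broken arbitrarily).
    """
    m = len(votes[0])
    scores = {c: 0 for c in votes[0]}
    for vote in votes:
        for pos, c in enumerate(vote):
            scores[c] += m - 1 - pos
    return max(scores, key=scores.get), scores

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
winner, scores = borda_winner(votes)
assert winner == "a"
assert scores == {"a": 5, "b": 3, "c": 1}
```

k-approval is the same scheme with score vector (1, …, 1, 0, …, 0): one point to each of a vote's top k candidates.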

Preference Modeling and Optimization

Preferences over sets can be modeled as weighted propositional formulas. Given a database (the space of possible outcomes), constraints (which filter the database down to the space of feasible outcomes), and preferences (soft constraints expressing personal likes and dislikes), we design and implement a preference reasoning system that automatically produces optimal solutions under multiple criteria: possibilistic logic, leximin ordering, discrimin ordering, and Pareto dominance.
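Two of these criteria are easy to state directly on vectors of satisfaction degrees (one degree per soft constraint). The sketch below, on illustrative scores, shows Pareto dominance and the leximin ordering, which compares outcomes by their worst-satisfied constraint first:

```python
def pareto_dominates(u, v):
    """True if u is at least as good as v on every criterion
    and strictly better on at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def leximin_better(u, v):
    """Leximin: sort each vector ascending (worst degree first) and
    compare lexicographically; the larger sorted vector wins."""
    return sorted(u) > sorted(v)

# Outcomes scored on soft constraints (higher = more satisfied).
assert pareto_dominates((0.8, 0.6, 0.9), (0.7, 0.6, 0.5))
assert not pareto_dominates((0.8, 0.3), (0.2, 0.9))  # Pareto-incomparable
assert leximin_better((0.6, 0.6), (0.9, 0.2))        # leximin still ranks them
```

The example also shows why multiple criteria are useful: Pareto dominance leaves many pairs incomparable, while leximin (and its refinement, discrimin) totally preorders the feasible outcomes.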