DOI: 10.5555/3020488.3020503
Article

A new parameter learning method for Bayesian networks with qualitative influences

Published: 19 July 2007

Abstract

We propose a new method for parameter learning in Bayesian networks with qualitative influences. This method extends our previous work from networks of binary variables to networks of discrete variables with ordered values. The specified qualitative influences correspond to certain order restrictions on the parameters in the network. These parameters may therefore be estimated using constrained maximum likelihood estimation. We propose an alternative method, based on isotonic regression. The constrained maximum likelihood estimates are fairly complicated to compute, whereas computation of the isotonic regression estimates only requires the repeated application of the Pool Adjacent Violators algorithm for linear orders. Therefore, the isotonic regression estimator is to be preferred from the viewpoint of computational complexity. Through experiments on simulated and real data, we show that the new learning method is competitive in performance with the constrained maximum likelihood estimator, and that both estimators improve on the standard estimator.
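
The computational point in the abstract is the Pool Adjacent Violators (PAV) algorithm: isotonic regression under a linear order can be fitted by repeatedly pooling adjacent cells whose weighted means violate the required ordering. The following Python function is a minimal sketch of weighted PAV for a single linear order, offered only as an illustration of the technique named in the abstract, not as the authors' implementation; the frequencies and counts in the example are hypothetical.

```python
from typing import List, Optional


def pav(y: List[float], w: Optional[List[float]] = None) -> List[float]:
    """Pool Adjacent Violators: weighted least-squares isotonic regression
    for a linear (total) order. Returns a non-decreasing fit to y."""
    n = len(y)
    if w is None:
        w = [1.0] * n
    # Each block stores [weighted mean, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    # Expand pooled block means back to one fitted value per input point.
    fit = []
    for mean, _, count in blocks:
        fit.extend([mean] * count)
    return fit


if __name__ == "__main__":
    # Hypothetical example: observed conditional frequencies that should be
    # non-decreasing in an ordered parent value, weighted by sample counts.
    freqs = [0.20, 0.35, 0.30, 0.50, 0.45]
    counts = [10, 20, 5, 40, 15]
    print(pav(freqs, counts))  # -> [0.20, 0.34, 0.34, 0.486..., 0.486...]
```

In the setting described in the abstract, a routine of this kind would be applied repeatedly to the parameter orderings implied by the qualitative influences; the constrained maximum likelihood estimator that the paper compares against is more involved and is not sketched here.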

Cited By

  • (2011) Automated refinement of Bayes networks' parameters based on test ordering constraints. Proceedings of the 25th International Conference on Neural Information Processing Systems, pp. 2591-2599. DOI: 10.5555/2986459.2986748. Online publication date: 12 December 2011.

Information

Published In

UAI'07: Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence
July 2007
483 pages
ISBN: 0974903930
Editors: Ron Parr, Linda van der Gaag

Publisher

AUAI Press

Arlington, Virginia, United States
