Exploiting Multitask Learning Schemes Using Private Subnetworks

  • Conference paper
Computational Intelligence and Bioinspired Systems (IWANN 2005)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3512)

Abstract

Many problems in pattern recognition focus on learning a single main task, an approach known as Single-Task Learning (STL). However, most of them can be reformulated as learning several tasks related to the main task at the same time while using a shared representation, known as Multitask Learning (MTL). In this paper, a new MTL architecture is proposed and its performance is compared with that of previous MTL schemes. The new scheme uses private subnetworks to induce a bias in the learning process. Results on artificial and real data sets show that the use of private subnetworks in MTL yields better generalization and faster learning.
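
Since only the abstract is available here, the following is a minimal sketch of the general scheme it describes: a network trained on several related tasks at once through a shared hidden layer, plus one private subnetwork per task whose hidden units only that task can shape. All names, layer sizes, and activation choices are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class MTLWithPrivateSubnets(nn.Module):
    """Hypothetical MTL sketch: each task head reads the shared hidden
    layer (the common representation) concatenated with that task's own
    private subnetwork. Sizes and activations are assumptions."""

    def __init__(self, n_in, n_shared, n_private, n_tasks):
        super().__init__()
        # Hidden layer shared by all tasks: error signals from the
        # secondary tasks bias this representation (the MTL effect).
        self.shared = nn.Sequential(nn.Linear(n_in, n_shared), nn.Sigmoid())
        # One private subnetwork per task: hidden units reachable only
        # by that task's output, so other tasks cannot distort them.
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(n_in, n_private), nn.Sigmoid())
             for _ in range(n_tasks)])
        # One linear output head per task over [shared | private] features.
        self.heads = nn.ModuleList(
            [nn.Linear(n_shared + n_private, 1) for _ in range(n_tasks)])

    def forward(self, x):
        s = self.shared(x)
        return [head(torch.cat([s, priv(x)], dim=-1))
                for head, priv in zip(self.heads, self.private)]

# Example: a main task plus two related secondary tasks on 8 inputs.
net = MTLWithPrivateSubnets(n_in=8, n_shared=6, n_private=3, n_tasks=3)
outputs = net(torch.randn(4, 8))   # list of three (4, 1) task outputs
```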

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

García-Laencina, P.J., Figueiras-Vidal, A.R., Serrano-García, J., Sancho-Gómez, J.L. (2005). Exploiting Multitask Learning Schemes Using Private Subnetworks. In: Cabestany, J., Prieto, A., Sandoval, F. (eds) Computational Intelligence and Bioinspired Systems. IWANN 2005. Lecture Notes in Computer Science, vol 3512. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11494669_29

  • DOI: https://doi.org/10.1007/11494669_29

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26208-4

  • Online ISBN: 978-3-540-32106-4

  • eBook Packages: Computer Science (R0)