In machine learning (ML), boosting is an ensemble metaheuristic used primarily to reduce bias (as opposed to variance). It can also improve the stability and accuracy of ML classification and regression algorithms. Accordingly, it is prevalent in supervised learning as a method for converting weak learners into strong learners.
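As a concrete illustration, the following is a minimal sketch of AdaBoost, one well-known boosting algorithm, for binary labels in {−1, +1}. It uses scikit-learn decision stumps as the weak learners; the function names (`boost`, `predict_ensemble`) are illustrative, not from any library.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost(X, y, n_rounds=50):
    """Fit an AdaBoost ensemble of decision stumps; labels y must be in {-1, +1}."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)                        # start with uniform example weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1.0)  # weighted training error
        if err >= 0.5:                             # no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / err)    # vote weight for this round
        w *= np.exp(-alpha * y * pred)             # up-weight misclassified examples
        w /= w.sum()                               # renormalize to a distribution
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict_ensemble(stumps, alphas, X):
    """Predict by the sign of the weighted vote over all stumps."""
    votes = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(votes)
```

Each round fits a weak learner to a reweighted version of the training set, so later learners concentrate on the examples earlier ones got wrong; the final strong learner is a weighted vote over all rounds.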
The concept of boosting is based on a question posed by Kearns and Valiant (1988, 1989): "Can a set of weak learners create a single strong learner?" A weak learner is defined as a classifier that is only slightly correlated with the true classification (i.e., it performs only slightly better than random guessing), whereas a strong learner is a classifier that is arbitrarily well correlated with the true classification. Robert Schapire answered the question in the affirmative in a paper published in 1990, a result with significant ramifications in machine learning and statistics, most notably the development of boosting.
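To make the weak/strong distinction concrete, the sketch below (an illustration, not taken from the 1990 paper) compares a single decision stump with a boosted ensemble of such stumps on a synthetic dataset where any one stump is only modestly better than chance. The scikit-learn APIs are real; the dataset and parameter choices are arbitrary.

```python
from sklearn.datasets import make_gaussian_quantiles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Two concentric Gaussian classes: no single axis-aligned split separates them well.
X, y = make_gaussian_quantiles(n_samples=2000, n_features=2, n_classes=2,
                               random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

weak = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr)        # one stump
strong = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(f"single stump accuracy:     {weak.score(X_te, y_te):.2f}")    # modestly above 0.5
print(f"boosted ensemble accuracy: {strong.score(X_te, y_te):.2f}")  # much higher
```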
Initially, the hypothesis boosting problem referred simply to the process of turning a weak learner into a strong learner; algorithms that achieve this quickly became known as "boosting". Freund and Schapire's arcing (adaptive resampling and combining), as a general technique, is more or less synonymous with boosting.