Multi-objective optimization
:where <math>W_j</math> is the individual (absolute) optimum of objective <math>j</math>, with objectives up to <math>r</math> maximized and objectives <math>r+1</math> to <math>s</math> minimized.
 
* '''hypervolume/Chebyshev scalarization'''<ref name="Golovin2021">{{cite arXiv | last1=Golovin | first1=Daniel | last2=Zhang | first2=Qiuyi | title=Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization | date=2020 | class=cs.LG | eprint=2006.04655 }}</ref>
::<math>
\min_{x\in X} \max_i \frac{f_i(x)}{w_i}
</math>
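:For illustration, a minimal Python sketch of evaluating and minimizing this scalarization over a candidate set (the two objectives, the weights, and the grid search are hypothetical choices, not from the cited work):
<syntaxhighlight lang="python">
import numpy as np

def chebyshev_scalarization(f_values, weights):
    """Return max_i f_i(x)/w_i for an objective vector and positive weights."""
    f_values = np.asarray(f_values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.max(f_values / weights)

# Hypothetical bi-objective problem: f1(x) = x, f2(x) = 1 - x on x in [0, 1].
xs = np.linspace(0.0, 1.0, 101)
w = np.array([0.3, 0.7])
scores = [chebyshev_scalarization([x, 1.0 - x], w) for x in xs]
print("minimizer:", xs[int(np.argmin(scores))])  # about x = 0.30 for these weights
</syntaxhighlight>
:Different weight vectors <math>w</math> single out different points of the Pareto front; in the cited work the weights are drawn at random, which yields hypervolume guarantees.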
 
=== Smooth Chebyshev (Tchebycheff) scalarization ===
The '''smooth Chebyshev scalarization''',<ref name="Lin2024">{{cite arXiv | last1=Lin | first1=Xi | last2=Zhang | first2=Xiaoyuan | last3=Yang | first3=Zhiyuan | last4=Liu | first4=Fei | last5=Wang | first5=Zhenkun | last6=Zhang | first6=Qingfu | title=Smooth Tchebycheff Scalarization for Multi-Objective Optimization | date=2024 | class=cs.LG | eprint=2402.19078 }}</ref> also called smooth Tchebycheff scalarization (STCH), replaces the non-differentiable max operator of the classical Chebyshev scalarization with a smooth logarithmic soft-max, making standard gradient-based optimization applicable. Unlike many other scalarization methods, it can reach every point of the Pareto front, whether the front is convex or concave.
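
For illustration, a minimal Python sketch of the log-sum-exp smoothing of the max operator (the variable names, the ideal point <math>z^*</math>, and the test values are illustrative; see the cited paper for the exact formulation):
<syntaxhighlight lang="python">
import numpy as np

def smooth_chebyshev(f_values, weights, ideal, mu=0.1):
    """Smoothed Chebyshev value: mu * log sum_i exp(w_i * (f_i - z*_i) / mu).

    The hard max over weighted gaps is replaced by a differentiable
    log-sum-exp; as mu -> 0 the classical Chebyshev value is recovered.
    """
    terms = np.asarray(weights, float) * (np.asarray(f_values, float) - np.asarray(ideal, float))
    m = np.max(terms / mu)  # shift for numerical stability
    return mu * (m + np.log(np.sum(np.exp(terms / mu - m))))

f, lam, z_star = [1.2, 0.8], [0.5, 0.5], [0.0, 0.0]  # hypothetical values
for mu in (1.0, 0.1, 0.01):
    print(mu, smooth_chebyshev(f, lam, z_star, mu))  # approaches the hard max as mu shrinks
print("hard max:", max(l * (fi - zi) for l, fi, zi in zip(lam, f, z_star)))
</syntaxhighlight>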
 
;Definition