GPU Large-Scale Nonlinear Programming

📢 Large-Scale Nonlinear Programming on GPUs: State-of-the-Art and Future Prospects
🎓 Presenter: Sungho Shin, ANL / MIT
📆 Date/Time: Thurs, Apr 11
🌐 Where: Online – Interactive chat and video conference
Congratulations to Sungho Shin as the winner of the AIChE CAST Division W. David Smith Jr. Graduate Publication Award.
AIChE Computing & Systems Technology Division webinar.

Bio

Chat Messages

From John Hedengren : We have about 5 minutes left for the presentation and then will have a brief Q+A. Please put your question in the chat window or unmute your microphone to ask.
From Emrullah ERTURK : 1- Can this developed NLP solver be used with JuMP? 2- Can we use this solver with the MIP solver to solve MINLP problems?
From Ashfaq Iftakher : Why do Hybrid KKT systems seem to require fewer iterations than Lifted KKT? Is the degree of ill-conditioning lower in Hybrid KKT?
From Tianhao Liu : 1- How do the sparsity and problem scale in general nonlinear optimization (rather than special cases like ACOPF) affect the acceleration of GPU? 2- Can GPU (and cuDSS) still significantly outperform CPU in linear systems (e.g., for IPMs’ KKT system)? ...I mean general linear systems.
From John Hedengren : Junho Park will take over the Q+A moderation. I need to go start a class at BYU. Excellent presentation Sungho!
From Laurens Lueg : Can you comment on whether applying problem-level decomposition, i.e. further decomposing the overall KKT system into partitions based on the problem structure (e.g., Schur complement decomposition), benefits the condensed IPM using cuDSS? Or is it better to just give the full sparse system to the GPU-enabled linear solver?
From Ashfaq Iftakher : Thanks Sungho for the detailed explanation. Excellent Presentation!!
From Emrullah ERTURK : Thanks for the presentation and answers.
From Laurens Lueg : Thank you!
From Tianhao Liu : Wonderful work! Thank you.
Comments

Hello, sorry for asking about Lagrange multipliers here, but I want to clear my doubts. Can I solve multivariable optimization problems with two inequality constraints like this?
1. Solve the problem the standard way (treating the constraints as equalities), i.e., assuming both are active.
2. Solve with one constraint inactive, so its multiplier is zero, and the other constraint active.
3. Same again, but vice versa of step 2.
4. Both constraints inactive (both multipliers zero).


Then the smallest and largest of the resulting values are my minimum and maximum. Is that right? Also, what if I have just one inequality constraint? It looks like that takes only two steps:
1. Solve treating the constraint as an equality, i.e., active.
2. Solve with it inactive, so its multiplier is zero.

BUT whatever values of x and y I get MUST still satisfy the inequality constraint in both cases: even when a constraint drops out of the equations because the multiplier it is multiplied by is zero, it is still one of our limitations. Please tell me if I am right, because I have solved many optimization problems with this method and it worked every time.

Anonimka
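[Editor's note] The case enumeration described in the comment above is the classic active-set enumeration of the KKT conditions: for each subset of constraints assumed active, set the remaining multipliers to zero, solve the stationarity equations, and then keep only solutions that are feasible and have nonnegative multipliers. A minimal sketch, using a made-up example problem (not from the talk): minimize f(x, y) = (x-2)² + (y-2)² subject to x + y ≤ 2 and x ≤ 1.

```python
# Active-set enumeration of the KKT conditions, as described in the comment.
# Hypothetical example problem: minimize (x-2)^2 + (y-2)^2
# subject to g1 = x + y - 2 <= 0 and g2 = x - 1 <= 0.
import itertools
import sympy as sp

x, y, l1, l2 = sp.symbols('x y lambda1 lambda2', real=True)
f = (x - 2)**2 + (y - 2)**2
gs = [x + y - 2, x - 1]      # constraints written as g_i(x, y) <= 0
lams = [l1, l2]

def grad(expr):
    return [sp.diff(expr, v) for v in (x, y)]

candidates = []
# Enumerate all 4 active/inactive combinations (the 4 steps in the comment).
for active in itertools.product([True, False], repeat=2):
    # Stationarity: grad f + sum_i lambda_i * grad g_i = 0
    eqs = [grad(f)[j] + sum(lams[i] * grad(gs[i])[j] for i in range(2))
           for j in range(2)]
    for i in range(2):
        if active[i]:
            eqs.append(gs[i])     # active constraint: g_i = 0
        else:
            eqs.append(lams[i])   # inactive constraint: lambda_i = 0
    for sol in sp.solve(eqs, [x, y, l1, l2], dict=True):
        # The key check from the comment: the point must still satisfy ALL
        # inequalities, and multipliers must be nonnegative (KKT).
        if all(gs[i].subs(sol) <= 0 for i in range(2)) and \
           all(sol[lams[i]] >= 0 for i in range(2)):
            candidates.append((sol[x], sol[y], f.subs(sol)))

best = min(candidates, key=lambda c: c[2])
print(best)   # minimum f = 2 attained at (1, 1)
```

This is exactly why the feasibility check at the end matters: the unconstrained stationary point (2, 2) violates both inequalities and is correctly discarded, so only the boundary solution survives.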