
AI that learns physical laws on its own: A new technique for stable learning has been developed.

GIST Research Selected for Spotlight Paper at the World’s Leading AI Conference ‘NeurIPS’

A research team led by Professor Ui-Seok Hwang of the Electrical and Computer Engineering Department at the Gwangju Institute of Science and Technology (GIST) has developed LAS, an adaptive sampling framework that lets artificial intelligence (AI) simultaneously choose where to compute and carry out those computations accurately. The technique uses the physics model of Langevin dynamics to steer sampling points toward important or error-prone regions for frequent inspection, while gradually adjusting the neural network's internal computations to reduce error. The advance paves the way for AI to learn physical laws more autonomously.

The adaptive sampling framework LAS, developed by Professor Ui-Seok Hwang's team at GIST, lets AI choose where to compute and learn accurately at the same time. It uses the physics model of Langevin dynamics so that the AI autonomously and frequently checks important or error-prone regions, while gradually adjusting the neural network's computations to reduce error. /GIST

The era in which AI can learn physical laws on its own has arrived, and researchers in Korea have now made that learning process more stable and intelligent. On the 20th, Professor Ui-Seok Hwang's team at GIST announced a new technique that addresses the instability and fluctuation seen when AI learns physical laws.

The team's new 'Langevin Adaptive Sampling (LAS)' technique enables stable training of physics-informed neural networks (PINNs), AI models that solve partial differential equations (PDEs). The work was recognized as a spotlight paper at NeurIPS, the most prestigious AI conference, a distinction given to roughly the top 3.5% of all submissions. The paper was accepted on the 18th of last month and will be presented this December in San Diego, USA.

Partial differential equations mathematically describe how physical quantities such as temperature, pressure, fluid flow, and electromagnetic fields change over space and time. Physics-informed neural networks solve these equations by building the physical laws directly into the training objective rather than merely memorizing data, improving computational efficiency and reducing data-collection costs.
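
To make the idea concrete, here is a minimal sketch of the "physics loss" a PINN minimizes, using the 1-D heat equation u_t = α·u_xx as the governing law. For illustration only, a known analytic solution stands in for the trained network, and finite differences stand in for automatic differentiation; the equation, coefficient, and point counts are assumptions, not details from the paper.

```python
import numpy as np

# Hedged sketch: the physics-informed loss for the 1-D heat equation
# u_t = alpha * u_xx. A real PINN would differentiate a neural network
# with autodiff; here an exact solution and central differences are
# used purely to illustrate what the "residual" is.
alpha = 0.1  # assumed diffusion coefficient

def u(x, t):
    # Exact solution of the heat equation, standing in for a trained network.
    return np.exp(-alpha * np.pi**2 * t) * np.sin(np.pi * x)

def residual(x, t, h=1e-4):
    # r(x, t) = u_t - alpha * u_xx, approximated by central differences.
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return u_t - alpha * u_xx

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 100)  # collocation points in space
ts = rng.uniform(0, 1, 100)  # collocation points in time
loss = np.mean(residual(xs, ts) ** 2)  # physics loss: mean squared residual
print(loss)  # near zero, since u satisfies the PDE exactly
```

A real PINN replaces `u` with a neural network and drives this mean squared residual toward zero by gradient descent, which is how the physical law enters training without labeled data.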

Previous methods had a limitation, however: if the residual grew large in certain regions during training, the AI would fixate on those areas, skewing the training and making it unstable. The residual is the discrepancy between the values the AI predicts and the true conditions, in other words, the error.

To keep the AI from losing its way on the hardest parts of the computation, the team applied a learning method that mimics particle motion: the physics model known as Langevin dynamics. Just as such particles move randomly yet pass frequently through important regions, the AI is designed to intensively explore regions with large errors or complex conditions during training.

Additionally, the AI observes how the error changes, which keeps it from fixating on regions where the error is merely large. In other words, the AI judges not only 'how wrong it is' but also 'where to go to be less wrong,' and adds slight random movement so that learning settles around smoother, more stable regions rather than unstable ones.
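
The sampling behavior described above, drift toward high-error regions plus random jitter, can be sketched as a generic Langevin update on the sampling points. This is an illustrative toy, not the team's exact LAS algorithm: the residual surface, step size, and domain clipping are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy log of the squared residual, peaked at x = 0.7 where the PDE error
# is assumed largest. (Illustrative stand-in; in LAS the residual would
# come from the PINN itself.)
def grad_log_r2(x):
    # Gradient of log r(x)^2 for the toy surface -((x - 0.7)^2) / 0.02.
    return -2 * (x - 0.7) / 0.02

eta = 1e-3                       # step size (assumed hyperparameter)
x = rng.uniform(0, 1, size=500)  # sampling points, initially uniform

for _ in range(2000):
    noise = rng.standard_normal(x.size)
    # Langevin step: drift toward high-residual regions + random jitter.
    x = x + eta * grad_log_r2(x) + np.sqrt(2 * eta) * noise
    x = np.clip(x, 0.0, 1.0)     # keep points inside the domain

print(x.mean())  # points concentrate near the residual peak at 0.7
```

The gradient term pulls points toward where the squared residual is large, while the noise term keeps them from collapsing onto a single spot, which is the balance between focus and stability the article describes.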

As a result, LAS cut errors significantly compared with previous methods and produced consistent results even when the learning rate or neural network architecture changed. Notably, it solved high-dimensional (4- to 8-dimensional) heat-transfer problems on which previous methods failed, and did so with higher computational efficiency, delivering faster and more accurate results at comparable cost.

Professor Hwang stated, “This research presents a method enabling stable learning even in complex models while reducing computational costs,” adding, “It can provide reliable AI solutions for various industries including manufacturing, energy, environment, and climate.”
