5/1/25

Exploring the LLM universe for astronomy research

From ChatGPT to Deep Research, the rapid pace of AI foundation model development has generated much excitement as well as reckoning: what are LLMs capable of? What are their limitations? How far can we push AI models for research? One of the core missions of the Explorable Universe group within CosmicAI is to explore these questions for code LLMs.

In this talk, Dr. Li and Joseph first present a primer on how LLMs are trained, drawing on their own code LLM training experience to scope the capabilities of these models. They then discuss their recent efforts to build a new coding benchmark that probes the limitations of LLMs, focusing on code execution and visualizations.
