Large Language Models (LLMs) provide a pre-trained foundation for training many interesting artificial intelligence systems. However, they have many shortcomings. They are expensive to train and to update, their non-linguistic knowledge is poor, they make false and self-contradictory statements, and these statements can be socially and ethically inappropriate. In this talk, I will review these shortcomings and current efforts to address them within the existing LLM framework. I will then argue for a different, more modular architecture that decomposes the functions of existing LLMs and adds several additional components. I believe this alternative can address all of the shortcomings of LLMs. I will speculate about how this modular architecture could be built through a combination of machine learning and engineering.
Dose assessment: from conceptual model to environmental radioactivity monitoring
Radionuclides are discharged into the environment from a variety of nuclear and radiation facilities, potentially harming human health and the environment. If discharges are likely to result in adverse radiological effects, they must be evaluated in...