arXiv preprint arXiv:2601.02060 · 2026 · Under Review

Perish or Flourish? A Holistic Evaluation of Large Language Models for Code Generation in Functional Programming

Nguyet-Anh H. Lang, Eric Lang, Thanh Le-Cong, Bach Le, Quyet-Thang Huynh

TL;DR

FPEval introduces FPBench, a benchmark of 721 programming tasks across Haskell, OCaml, and Scala, to holistically assess LLM code generation in functional programming. Despite substantial gains with newer models, LLMs consistently produce non-idiomatic, imperative-style code and struggle more with purely functional languages than with hybrid or imperative ones. LLMs can partially self-repair these issues when given static analysis feedback.

Abstract

Functional programming provides strong foundations for developing reliable and secure software systems, yet its adoption remains limited due to a steep learning curve. Recent advances in Large Language Models (LLMs) for code generation present new opportunities to lower these barriers. However, extensive evaluations of LLMs largely focus on imperative programming languages, and their capabilities in functional programming (FP) languages remain underexplored. To address this gap, we introduce FPEval, a holistic evaluation framework built on FPBench, a new benchmark of 721 programming tasks across three difficulty levels in three mainstream FP languages: Haskell, OCaml, and Scala. FPEval provides comprehensive evaluation infrastructure, pairing validation against comprehensive test suites with static analysis tools, to assess both functional correctness and code style and maintainability. Using this framework, we evaluate state-of-the-art LLMs, including GPT-3.5, GPT-4o, and GPT-5, on code generation in functional programming languages, with Java as an imperative baseline. Our results demonstrate that LLM performance in functional programming improves substantially with model advancement; however, error rates remain significantly higher in purely functional languages (Haskell and OCaml) than in hybrid (Scala) or imperative (Java) languages. Moreover, LLMs frequently generate non-idiomatic functional code that follows imperative patterns, raising concerns about code style and long-term maintainability. Finally, we show that LLMs can partially self-repair both correctness and quality issues when provided with static analysis feedback and hand-crafted instructions for common types of issues.
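To make the non-idiomatic pattern concrete, here is a minimal illustrative sketch in Haskell (a constructed example, not a task drawn from FPBench): the first definition transliterates an imperative accumulator loop into explicit recursion, while the second composes standard higher-order functions as idiomatic Haskell would.

  -- Imperative-style: explicit recursion threading an accumulator,
  -- the kind of pattern the paper flags as non-idiomatic.
  sumEvenSquares :: [Int] -> Int
  sumEvenSquares xs = go xs 0
    where
      go []     acc = acc
      go (y:ys) acc
        | even y    = go ys (acc + y * y)
        | otherwise = go ys acc

  -- Idiomatic: the same function as a pipeline of filter, map, and sum.
  sumEvenSquares' :: [Int] -> Int
  sumEvenSquares' = sum . map (^ 2) . filter even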

Contributions

  1. FPBench: a new benchmark of 721 programming tasks spanning three difficulty levels (easy, medium, hard) across three mainstream functional programming languages: Haskell, OCaml, and Scala.
  2. FPEval: a holistic evaluation framework combining test-case validation and static analysis tools to assess both functional correctness and code style and maintainability, going beyond pass@k metrics.
  3. Systematic evaluation of state-of-the-art LLMs (GPT-3.5, GPT-4o, GPT-5) on functional programming, with Java as an imperative baseline, revealing a persistent performance gap in purely functional languages.
  4. Empirical evidence that LLMs frequently produce non-idiomatic functional code following imperative patterns, and that these models can partially self-repair both correctness and style issues using static analysis feedback.

Key Results

FPBench: 721 tasks across Haskell, OCaml, and Scala at three difficulty levels
Error rates significantly higher in purely functional languages (Haskell, OCaml) than in hybrid (Scala) or imperative (Java)
LLM performance improves substantially from GPT-3.5 → GPT-4o → GPT-5, yet gaps persist
LLMs frequently generate non-idiomatic code following imperative patterns in functional languages
Partial self-repair of correctness and style issues is achievable with static analysis feedback
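
As a sketch of what repair from static analysis feedback can look like, assuming HLint as the Haskell linter (the page names only "static analysis tools", and the repair-loop framing here is hypothetical), a hint such as HLint's "Use concatMap" both localizes the non-idiomatic pattern and names its idiomatic replacement:

  -- Generated code that a linter like HLint flags with "Use concatMap".
  allWords :: [String] -> [String]
  allWords docs = concat (map words docs)

  -- The rewrite a model can produce once the hint is fed back.
  allWords' :: [String] -> [String]
  allWords' = concatMap words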

Citation

BibTeX
@article{lang2026fpeval,
  title         = {Perish or Flourish? A Holistic Evaluation of Large Language Models for Code Generation in Functional Programming},
  author        = {Lang, Nguyet-Anh H. and Lang, Eric and Le-Cong, Thanh and Le, Bach and Huynh, Quyet-Thang},
  journal       = {arXiv preprint arXiv:2601.02060},
  year          = {2026},
  eprint        = {2601.02060},
  archivePrefix = {arXiv},
  primaryClass  = {cs.PL},
  doi           = {10.48550/arXiv.2601.02060}
}
Keywords: LLM, code generation, functional programming, evaluation, benchmark, program synthesis