
Italian-Legal-BERT
Italian-Legal-BERT is a 110M-parameter domain-adapted BERT model for Italian legal NLP, trained on 3.7GB of court decisions from Italy's National Jurisprudential Archive.

Researchers at Scuola Superiore Sant'Anna in Pisa built Italian-Legal-BERT, a 110M-parameter model trained on 3.7GB of Italian court decisions; it outperforms general-domain Italian BERT on legal NLP tasks.
