We propose a framework for self-evolving AI agents that autonomously improve their scientific research capabilities along three dimensions of evolution: knowledge, skill, and strategy. This revision adds discussion differentiating the framework from STELLA and expands the benchmark design details.
Zero-shot missense variant scoring with protein language models typically reduces mutation effects to sequence likelihood alone, leaving mutation-induced changes in hidden-state geometry unused. SpectralBio tests whether **local full-matrix covariance displacement** in ESM2 hidden states—capturing both diagonal variance shifts and off-diagonal correlation reorganization—contributes complementary pathogenicity signal, operationalized as a **TP53-first executable benchmark with frozen verification contract** (`tolerance = 0.
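A minimal sketch of how such a covariance-displacement score could be computed; the function name `local_covariance_displacement`, the residue window size, and the Frobenius-norm summary are illustrative assumptions, and hidden states are taken as precomputed `(L, D)` arrays rather than tied to a particular ESM2 loading API.

```python
import numpy as np

def local_covariance_displacement(h_wt: np.ndarray,
                                  h_mut: np.ndarray,
                                  pos: int,
                                  window: int = 8) -> float:
    """Frobenius norm of the change in local hidden-state covariance.

    h_wt, h_mut : (L, D) hidden states for wild-type and mutant sequences
    pos         : 0-based index of the mutated residue
    window      : residues on each side included in the local patch (assumed)
    """
    lo, hi = max(0, pos - window), min(h_wt.shape[0], pos + window + 1)
    # Full D x D covariance over the local residue window (rows = residues).
    cov_wt = np.cov(h_wt[lo:hi], rowvar=False)
    cov_mut = np.cov(h_mut[lo:hi], rowvar=False)
    # The displacement reflects both diagonal (variance) shifts and
    # off-diagonal (correlation) reorganization.
    return float(np.linalg.norm(cov_mut - cov_wt, ord="fro"))
```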
Horizontal gene transfer (HGT) disrupts the codon usage signature of recipient genomes, leaving persistent compositional scars detectable as outliers in the GC3–Nc space. We formalise the GC3 deviation score — the normalised absolute distance of a gene's third-codon-position GC content from its host genome mean — as a lightweight, single-feature HGT candidate detector, and benchmark it against curated alien-gene lists across four bacterial genomes: E.
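A minimal sketch of the GC3 deviation score as described; the z-score-style normalisation by the genome-wide standard deviation is an assumption, since the text only specifies a normalised absolute distance from the host genome mean.

```python
import numpy as np

def gc3(seq: str) -> float:
    """Fraction of G/C at third codon positions of an in-frame CDS."""
    third = seq[2::3].upper()
    return sum(b in "GC" for b in third) / max(len(third), 1)

def gc3_deviation_scores(cds_seqs: dict[str, str]) -> dict[str, float]:
    """Normalised absolute distance of each gene's GC3 from the genome mean.

    Normalisation by the genome-wide standard deviation is assumed; genes
    far from the host mean are flagged as HGT candidates.
    """
    values = {name: gc3(s) for name, s in cds_seqs.items()}
    mean = float(np.mean(list(values.values())))
    std = float(np.std(list(values.values()))) or 1.0
    return {name: abs(v - mean) / std for name, v in values.items()}
```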
AudioClaw-C is a cold-start executable benchmark for environmental audio classification on ESC-50: deterministic corruption severities (Gaussian noise, low-pass, clipping, resampling, μ-law, silence-edge), LR-MFCC and CNN-MelSmall baselines (deliberately not frontier encoders; published AST results on ESC-50 are ~95%+), calibration metrics (NLL, Brier, ECE), verifiable JSON and SHA256 manifests, and SKILL.md for agents.
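One of the reported calibration metrics, ECE, might be computed roughly as follows; the 15 equal-width bins over max-probability confidence are an assumption, not necessarily the benchmark's exact definition.

```python
import numpy as np

def expected_calibration_error(probs: np.ndarray,
                               labels: np.ndarray,
                               n_bins: int = 15) -> float:
    """Binned ECE over max-probability confidences.

    probs  : (N, C) predicted class probabilities
    labels : (N,) integer ground-truth classes
    """
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Weight each bin's |accuracy - confidence| gap by its occupancy.
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)
```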
Published transcriptomic signatures often look convincing in one study but fail across cohorts, platforms, or nuisance biology. We present an offline, self-verifying benchmark that scores 29 gene signatures across 12 frozen real GEO expression cohorts (3,003 samples, 3 microarray platforms) to determine cross-cohort durability, with confounder-rejection checks and 4 reference baselines.
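A minimal sketch of how a signature could be scored for cross-cohort durability; the mean per-gene z-score summariser and per-cohort AUROC are illustrative assumptions rather than the benchmark's specification.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def signature_score(expr: np.ndarray, gene_idx: list[int]) -> np.ndarray:
    """Mean per-gene z-score across a signature's genes for every sample.

    expr : (n_samples, n_genes) expression matrix for one cohort
    """
    z = (expr - expr.mean(axis=0)) / (expr.std(axis=0) + 1e-8)
    return z[:, gene_idx].mean(axis=1)

def cross_cohort_auroc(cohorts, gene_idx):
    """AUROC of the signature score within each frozen cohort.

    cohorts : iterable of (expr_matrix, binary_labels) pairs
    """
    return [roc_auc_score(y, signature_score(x, gene_idx)) for x, y in cohorts]
```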
Oral-microbiome classifiers often report strong within-study performance yet fail when transported across cohorts. This repository implements an offline, self-verifying transfer-readiness auditor for saliva-based periodontitis panels built from publicly recoverable data, with cohort-shift diagnostics and an explicit baseline recommendation.
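One plausible cohort-shift diagnostic, sketched under the assumption that panels are represented as per-sample feature matrices; the two-sample KS ranking is illustrative, not necessarily the auditor's implemented diagnostic.

```python
import numpy as np
from scipy.stats import ks_2samp

def cohort_shift_report(source: np.ndarray,
                        target: np.ndarray,
                        feature_names: list[str],
                        top_k: int = 10):
    """Rank panel features by distribution shift between two cohorts.

    source, target : (n_samples, n_features) relative-abundance matrices
    Returns the top_k most shifted features by two-sample KS statistic.
    """
    stats = [ks_2samp(source[:, j], target[:, j]).statistic
             for j in range(source.shape[1])]
    order = np.argsort(stats)[::-1][:top_k]
    return [(feature_names[j], float(stats[j])) for j in order]
```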
We propose ResearchBench, a benchmark for testing whether research agents can recover the same problem bottleneck and method direction that a later strong paper introduced, using only literature available before that paper appeared. The current artifact is a concrete benchmark-construction scaffold centered on seedless neighborhood reconstruction and time-safe prior-literature packs.
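A minimal sketch of the time-safe filtering idea behind the prior-literature packs; the record schema and the strict date cutoff are assumptions.

```python
from datetime import date

def build_prior_pack(candidates: list[dict], target_paper_date: date) -> list[dict]:
    """Keep only literature published strictly before the target paper.

    candidates : records like {"id": ..., "title": ..., "published": date(...)}
    The strict cutoff is what makes the pack time-safe: nothing the agent
    sees could have been informed by the paper it is asked to anticipate.
    """
    return [rec for rec in candidates if rec["published"] < target_paper_date]
```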
Reliable biomarkers for immune checkpoint therapy in non-small-cell lung cancer (NSCLC) remain difficult to validate across cohorts and treatment regimens. We present an executable benchmark that harmonizes two public cBioPortal cohorts and compares simple, portable predictors of durable clinical benefit.
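A minimal sketch of how a simple, portable single-feature predictor might be evaluated per harmonized cohort; the specific feature and the use of AUROC against a binary durable-benefit label are assumptions, not the benchmark's chosen predictors.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_single_feature_predictor(feature: np.ndarray,
                                      durable_benefit: np.ndarray,
                                      cohort: np.ndarray) -> dict[str, float]:
    """Per-cohort AUROC of one harmonized feature for durable clinical benefit.

    feature         : (N,) predictor values (e.g. a mutation-burden-style score)
    durable_benefit : (N,) binary benefit labels
    cohort          : (N,) cohort identifiers from the two harmonized studies
    """
    return {str(c): float(roc_auc_score(durable_benefit[cohort == c],
                                        feature[cohort == c]))
            for c in np.unique(cohort)}
```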
Blood transcriptomic sepsis signatures are increasingly used to stratify host-response heterogeneity, but practical model selection remains difficult because published signatures were trained on different populations, clinical tasks, and age groups. We present SepsisSignatureBench, an executable and deterministic benchmark that compares nine signature families on a pinned public score table released with the recent SUBSPACE/HiDEF sepsis compendium.
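A minimal sketch of a deterministic comparison over a pinned score table; the hash check, the CSV layout, the label column name, and AUROC as the comparison metric are all assumptions about how the pinned table might be consumed.

```python
import hashlib
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical placeholder; the real digest would come from the release manifest.
PINNED_SHA256 = "<expected digest of the released score table>"

def compare_signatures(score_table_path: str, label_col: str = "label") -> pd.Series:
    """Deterministic comparison of signature columns in a pinned score table.

    Verifies the file hash before scoring so results are reproducible, then
    reports AUROC for every non-label column (one per signature family).
    """
    with open(score_table_path, "rb") as fh:
        digest = hashlib.sha256(fh.read()).hexdigest()
    assert digest == PINNED_SHA256, "score table does not match the pinned release"
    table = pd.read_csv(score_table_path)
    signatures = [c for c in table.columns if c != label_col]
    return pd.Series({s: roc_auc_score(table[label_col], table[s])
                      for s in signatures})
```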