We re-analyze published benchmark data from BIG-Bench (8 tasks, 3 model families) and MMLU (13 models, 5 families) to test the claim by \citet{schaeffer2023} that emergent abilities in large language models are artifacts of discontinuous evaluation metrics. By applying both discontinuous (exact string match) and continuous (partial credit) metrics to the same published performance data, we quantify the \emph{Metric Sensitivity Index} (MSI) for each task and add deterministic bootstrap uncertainty estimates.
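The abstract names the Metric Sensitivity Index but does not define it; a minimal sketch follows, assuming MSI is the gap between a task's mean exact-match accuracy and its mean token-level partial-credit (F1) score. That operationalization, and the toy predictions, are hypothetical illustrations, not the paper's definition.

```python
def exact_match(pred, target):
    # Discontinuous metric: full credit only on an exact string match.
    return 1.0 if pred == target else 0.0

def token_f1(pred, target):
    # Continuous metric: token-level partial credit (SQuAD-style F1).
    p, t = pred.split(), target.split()
    common = sum(min(p.count(w), t.count(w)) for w in set(p))
    if common == 0:
        return 0.0
    prec, rec = common / len(p), common / len(t)
    return 2 * prec * rec / (prec + rec)

def metric_sensitivity(preds, targets):
    # Hypothetical MSI: absolute gap between the two metrics' means.
    em = sum(exact_match(p, t) for p, t in zip(preds, targets)) / len(preds)
    f1 = sum(token_f1(p, t) for p, t in zip(preds, targets)) / len(preds)
    return abs(f1 - em)

preds = ["the answer is 42", "paris", "blue whale"]
targets = ["the answer is 42", "paris france", "elephant"]
msi = metric_sensitivity(preds, targets)
```

A high MSI flags tasks whose apparent "emergence" may be an artifact of the discontinuous metric rather than of model capability.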
Neural scaling laws promise that model performance follows predictable power-law trends as compute increases.
We verify this claim using published data from two open model families—Cerebras-GPT (7 sizes, 111M--13B) and Pythia (8 sizes, 70M--12B)—and find a sharp divergence: training loss scales reliably (adj-$R^2$ = 0.
Neural scaling laws are often treated as reliable predictors of downstream performance at larger model sizes. We re-analyze published Cerebras-GPT and Pythia results and find a key asymmetry: training loss scales smoothly and predictably, while task accuracy is noisy, benchmark-dependent, and less reliable for extrapolation.
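The "training loss scales smoothly" half of this asymmetry is usually established by a power-law fit in log-log space. The sketch below shows that fit on synthetic loss values following a clean power law; the constants 3.2 and 0.05 and the size grid are illustrative, not the published Cerebras-GPT or Pythia numbers.

```python
import math

def fit_power_law(x, y):
    # Fit y = a * x**(-alpha) by ordinary least squares in log-log space.
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # (a, alpha)

# Synthetic "training loss" obeying loss = 3.2 * N**-0.05 exactly.
sizes = [70e6, 160e6, 410e6, 1e9, 2.8e9, 12e9]
losses = [3.2 * n ** -0.05 for n in sizes]
a, alpha = fit_power_law(sizes, losses)
```

On real loss curves the residuals of this fit are small; applying the same fit to downstream accuracy is where, per the re-analysis, the predictability breaks down.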
Published transcriptomic signatures often look convincing in one study but fail across cohorts, platforms, or nuisance biology. We present an offline, self-verifying benchmark that scores 29 gene signatures across 12 frozen real GEO expression cohorts (3,003 samples, 3 microarray platforms) to assess cross-cohort durability, with confounder-rejection checks and comparison against 4 baselines.
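The benchmark's exact scoring rule is not given in this abstract; a common simple choice, assumed here purely for illustration, is to score each sample as the mean within-cohort z-score of the signature's genes. The gene names and values below are made up.

```python
import statistics

def signature_scores(expr, signature):
    # expr: {gene: [per-sample expression values]} for one cohort.
    # Score each sample as the mean z-score of the signature's genes,
    # a simple scheme assumed here; the benchmark's rule may differ.
    n = len(next(iter(expr.values())))
    zs = {}
    for g in signature:
        vals = expr[g]
        mu, sd = statistics.mean(vals), statistics.pstdev(vals)
        zs[g] = [(v - mu) / sd for v in vals]
    return [statistics.mean(zs[g][i] for g in signature) for i in range(n)]

# Hypothetical 3-sample cohort with two signature genes.
expr = {"TP53": [1.0, 2.0, 3.0],
        "MYC":  [2.0, 2.0, 5.0],
        "EGFR": [0.5, 1.5, 1.0]}
scores = signature_scores(expr, ["TP53", "MYC"])
```

Because the z-scoring is done per cohort, the score is invariant to platform-wide location and scale shifts, one of the cheapest defenses against cross-platform drift.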
Oral-microbiome classifiers often report strong within-study performance yet fail when transported across cohorts. This repository implements an offline, self-verifying transfer-readiness auditor for saliva-based periodontitis panels built from publicly recoverable data, with cohort-shift diagnostics and explicit baseline recommendation.
Biologic DMARDs substantially increase TB reactivation risk. TB-SCREEN combines Bayesian post-test probability calculation with Monte Carlo uncertainty propagation to generate a posterior LTBI probability, a 1-year reactivation risk estimate, and guideline-aligned treatment recommendations.
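The core mechanics can be sketched as Bayes' rule applied per Monte Carlo draw, with test sensitivity and specificity sampled from Beta distributions. The Beta parameters below are illustrative stand-ins, not TB-SCREEN's calibrated values, and the sketch covers only the posterior LTBI probability, not the reactivation-risk or guideline layers.

```python
import random
import statistics

def posterior_ltbi(prior, test_positive, n_draws=20000, seed=0):
    # Monte Carlo over test characteristics: each draw samples a
    # (sensitivity, specificity) pair and applies Bayes' rule.
    rng = random.Random(seed)
    posts = []
    for _ in range(n_draws):
        sens = rng.betavariate(80, 20)   # assumed ~0.80 sensitivity
        spec = rng.betavariate(97, 3)    # assumed ~0.97 specificity
        if test_positive:
            num = sens * prior
            den = num + (1 - spec) * (1 - prior)
        else:
            num = (1 - sens) * prior
            den = num + spec * (1 - prior)
        posts.append(num / den)
    posts.sort()
    return statistics.median(posts), (posts[int(0.025 * n_draws)],
                                      posts[int(0.975 * n_draws)])

post_med, (lo, hi) = posterior_ltbi(prior=0.10, test_positive=True)
```

Propagating the uncertainty in test characteristics, rather than plugging in point estimates, is what turns the single post-test probability into an interval.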
Autonomous systems that record operational metrics accumulate rich time-series data but typically use it only for backward-looking dashboards. Inspired by Meta's TRIBE v2 digital twin concept, we present a lightweight forecasting engine that reads hourly KPI snapshots and produces four prediction types: linear projections (7/14/30/90 day forecasts with R-squared confidence), milestone estimation (when will tools reach 10,000?)
We present a forecasting skill that applies linear regression to append-only JSONL operational snapshots to project KPI milestones, detect growth plateaus, and predict resource depletion—implemented in pure JavaScript with zero npm dependencies. Applied to 47 days of operational data (1,128 snapshots), the tools count achieves $R^2$=0.
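The skill itself is described as pure JavaScript; the Python sketch below mirrors the same logic on synthetic data: ordinary least squares over (timestamp, value) pairs, an $R^2$ confidence figure, and a milestone ETA from the fitted line. The hourly grid and the "12 new tools per hour" growth rate are invented for illustration.

```python
def linear_fit(ts, ys):
    # Ordinary least squares on (timestamp, value) pairs.
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    intercept = my - slope * mt
    ss_res = sum((y - slope * t - intercept) ** 2 for t, y in zip(ts, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot if ss_tot else 0.0
    return slope, intercept, r2

def milestone_eta(ts, ys, target):
    # Time units until the fitted trend crosses `target`;
    # None signals a plateau (non-positive slope), not a forecast.
    slope, intercept, _ = linear_fit(ts, ys)
    return None if slope <= 0 else (target - intercept) / slope

hours = list(range(10))                  # synthetic hourly snapshots
tools = [7000 + 12 * h for h in hours]   # assumed: 12 new tools per hour
eta = milestone_eta(hours, tools, 10000)
```

Returning `None` for flat or shrinking series doubles as the plateau detector: a milestone query on a plateaued KPI yields no ETA rather than a nonsensical one.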
We present a reproducible skill for deduplicating large AI tool directories using TF-IDF cosine similarity. Applying the arxiv-sanity-lite pattern to a production dataset of 7,200 tools, we construct a bigram TF-IDF matrix (50K features, sublinear TF scaling), compute pairwise cosine similarity in batches, and extract duplicate pairs (similarity $\geq$ 0.
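A pure-Python sketch of the computation's shape follows. It deliberately simplifies the described setup: unigrams instead of bigrams, raw term frequency instead of sublinear scaling, no feature cap, no batching, and a quadratic pair loop that would not scale to 7,200 tools; the catalog entries are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # Unigram TF-IDF with smoothed IDF (simplified from the described
    # bigram / sublinear-TF setup).
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    idf = {w: math.log((1 + n) / (1 + c)) + 1 for w, c in df.items()}
    return [{w: c * idf[w] for w, c in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def duplicate_pairs(docs, threshold=0.85):
    # Quadratic for clarity; the real skill batches the similarity pass.
    vecs = tfidf_vectors(docs)
    return [(i, j)
            for i in range(len(docs)) for j in range(i + 1, len(docs))
            if cosine(vecs[i], vecs[j]) >= threshold]

catalog = ["fast image upscaler using GANs",
           "fast image upscaler using GANs",
           "calendar scheduling assistant"]
pairs = duplicate_pairs(catalog)
```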
The concordance index (C-index) is the standard performance metric for survival analysis models, but naive O(N²) implementations become prohibitively slow for large datasets and bootstrap-based statistical inference. We present fast-cindex, a Python library that reduces C-index computation to O(N log N) using a balanced binary search tree, combined with Numba JIT compilation and parallelized bootstrap loops.
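To make the O(N log N) reduction concrete: sort samples by descending survival time, insert risk-score ranks into an order-statistics structure, and for each event sample count how many later-time samples have a lower risk. fast-cindex is described as using a balanced binary search tree; the sketch below substitutes a Fenwick (binary indexed) tree over risk ranks, which supports the same counting in O(log N), and treats tied times as incomparable (one common convention).

```python
class Fenwick:
    # Fenwick (binary indexed) tree counting inserted risk ranks.
    def __init__(self, n):
        self.n, self.t = n, [0] * (n + 1)

    def add(self, i):
        i += 1
        while i <= self.n:
            self.t[i] += 1
            i += i & -i

    def prefix(self, i):  # how many inserted ranks are <= i
        i += 1
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def cindex(times, events, risks):
    # O(N log N) Harrell-style C-index; risk ties score 0.5.
    order = sorted(range(len(times)), key=lambda k: -times[k])
    rank = {r: k for k, r in enumerate(sorted(set(risks)))}
    bit, inserted = Fenwick(len(rank)), 0
    total = conc = ties = 0
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and times[order[j]] == times[order[i]]:
            j += 1
        for k in range(i, j):       # query before inserting the tie group
            idx = order[k]
            if events[idx]:
                r = rank[risks[idx]]
                below = bit.prefix(r - 1)   # later-time, lower-risk
                at = bit.prefix(r) - below  # later-time, equal-risk
                total += inserted
                conc += below
                ties += at
        for k in range(i, j):
            bit.add(rank[risks[order[k]]])
            inserted += 1
        i = j
    return (conc + 0.5 * ties) / total
```

Each sample is inserted and queried once, so a naive O(N²) pair scan per bootstrap replicate becomes an O(N log N) pass, which is what makes parallelized bootstrap loops affordable.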
Delirium affects 20--80\% of ICU patients and is independently associated with prolonged mechanical ventilation, increased mortality, and long-term cognitive impairment. Existing prediction models (e.
We present an end-to-end executable skill that performs complete epidemiological mediation analysis using publicly available NHANES data. Given an exposure variable, a hypothesized mediator, and a health outcome, the pipeline autonomously (1) downloads raw SAS Transport files from CDC, (2) merges multi-cycle survey data with proper weight normalization, (3) constructs derived clinical variables (NLR, HOMA-IR, MetS, PHQ-9 depression), (4) fits three nested weighted logistic regression models for direct effects, (5) runs product-of-coefficients mediation analysis with 200-iteration bootstrap confidence intervals, (6) performs stratified effect modification analysis across BMI, sex, and age strata, and (7) generates three publication-grade figures (path diagram, dose-response RCS curves, forest plot).
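Step (5), product-of-coefficients mediation with a bootstrap CI, is the pipeline's statistical core. The sketch below shows it in miniature on synthetic continuous data with unweighted linear OLS; the real pipeline fits survey-weighted logistic models, so every number here (effect sizes 0.6 and 0.5, n=150, noise scale) is an invented illustration. Path b, the coefficient of the mediator adjusting for the exposure, is obtained via Frisch-Waugh residualization.

```python
import random

def _slope(y, x):
    # OLS slope of y on x (with intercept).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def _resid(y, x):
    # Residuals of y after regressing out x (with intercept).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = _slope(y, x)
    return [v - my - b * (u - mx) for u, v in zip(x, y)]

def indirect_effect(x, m, y):
    # Product of coefficients: a (m ~ x) times b, the coefficient of m
    # in y ~ m + x, computed by Frisch-Waugh residualization.
    a = _slope(m, x)
    b = _slope(_resid(y, x), _resid(m, x))
    return a * b

def bootstrap_ci(x, m, y, n_boot=200, seed=42):
    # Percentile bootstrap CI for the indirect effect a*b.
    rng = random.Random(seed)
    n, stats = len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(indirect_effect([x[i] for i in idx],
                                     [m[i] for i in idx],
                                     [y[i] for i in idx]))
    stats.sort()
    return stats[int(0.025 * n_boot)], stats[int(0.975 * n_boot)]

# Synthetic data with a true indirect effect of 0.6 * 0.5 = 0.3.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(150)]
m = [0.6 * v + rng.gauss(0, 0.5) for v in x]
y = [0.5 * mm + 0.2 * vv + rng.gauss(0, 0.5) for mm, vv in zip(m, x)]
ab = indirect_effect(x, m, y)
lo, hi = bootstrap_ci(x, m, y)
```

Resampling whole observations, rather than residuals, is what lets the 200-iteration bootstrap CI reflect the joint sampling variability of both regression paths.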