# Changelog

## 4 February 2026

**Bitflip-aware LoRA fine-tuning of Llama-3.1-8B** (Bitflip-Aware LoRA Fine-Tuning)

LoRA adapters with only 1.2% of the model's parameters trainable effectively mitigate random bitflip noise, reducing validation perplexity from 1008.95 to 11.01 (clean baseline: 7.91). A minimal adapter-setup sketch follows the table below.

| Item | Link |
| --- | --- |
| Llama-3.1-8B with random bitflip noise | Bitflip-Aware LoRA Fine-Tuning |
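
A minimal sketch of the adapter setup using the Hugging Face `peft` library. The rank, scaling factor, target modules, and dropout below are illustrative placeholders, not the values used in the experiment, and the bitflip noise injection itself (handled by the simulation backend) is not shown:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the frozen base model (bfloat16 keeps the 8B model's memory modest).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B", torch_dtype=torch.bfloat16
)

# Attach LoRA adapters; only these small low-rank matrices are trained.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,                        # scaling factor (illustrative)
    target_modules=["q_proj", "v_proj"],  # attention projections (illustrative)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable fraction
```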

## 4 October 2025

**Optical Transformer fine-tuning on CLM models (60M – 1.1B)** (Scaling Optical Transformers to Causal Language Models)

Full fine-tuning of pretrained CLM models under optical transformer simulation; a toy noise-injection sketch follows the table below.

| Item | Link |
| --- | --- |
| Optical Transformer on CLM | Scaling Optical Transformers to Causal Language Models |
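
As a rough illustration of what "optical transformer simulation" means, the sketch below passes a linear layer's output through multiplicative Gaussian noise as a stand-in for analog matmul error. This is not the project's simulator (that lives in the mase-triton backend); the noise model and magnitude are assumptions for illustration only:

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Linear):
    """Linear layer with multiplicative Gaussian noise on its output,
    a crude stand-in for analog (e.g., optical) matmul error."""

    def __init__(self, in_features, out_features, noise_std=0.02, **kwargs):
        super().__init__(in_features, out_features, **kwargs)
        self.noise_std = noise_std

    def forward(self, x):
        y = super().forward(x)
        if self.training:
            # Perturb each output element by a factor ~ N(1, noise_std).
            y = y * (1.0 + self.noise_std * torch.randn_like(y))
        return y

layer = NoisyLinear(768, 768)
out = layer(torch.randn(4, 768))  # noisy in training mode, exact in eval mode
```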

## 1 October 2025

**Optical Transformer, Spiking Transformer, and PIM on RoBERTa**

Initial experiments on RoBERTa with three new compute paradigms (see the sketch after the table below).

| Item | Link |
| --- | --- |
| Optical Transformer on RoBERTa | Optical Neural Networks on RoBERTa |
| Spiking Transformer on RoBERTa | Spiking Neural Networks on RoBERTa |
| Processing in Memory on RoBERTa | Processing-in-Memory on RoBERTa |
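
A likely shared mechanical step across such experiments is loading a pretrained RoBERTa and swapping its `nn.Linear` layers for a paradigm-specific simulated equivalent. The sketch below shows only that swapping pattern; the factory passed to `swap_linears` is a hypothetical placeholder for whichever simulated layer (optical, spiking, or PIM) a backend provides:

```python
import torch.nn as nn
from transformers import AutoModel

def swap_linears(module: nn.Module, make_layer) -> None:
    """Recursively replace every nn.Linear with the layer returned by make_layer."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, make_layer(child))
        else:
            swap_linears(child, make_layer)

model = AutoModel.from_pretrained("roberta-base")
# Identity factory as a placeholder; a real run would return a simulated
# layer initialized from `linear`'s weights.
swap_linears(model, lambda linear: linear)
```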

## 9 June 2025

**Mase-triton released on PyPI** (Mase-Triton)

Our software-emulation and acceleration backend is now publicly available:

```bash
pip install mase-triton
```
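
To confirm the install, the package version can be queried with the standard library alone (no assumptions about mase-triton's own API):

```python
from importlib.metadata import version

print(version("mase-triton"))  # prints the installed package version
```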

See Mase-Triton for full documentation.

## 15 April 2025

**System- and model-level training simulation for Small Language Models**

Initial release of the scaling framework and the bitflip-aware pretraining pipeline; a toy bitflip illustration follows the table below.

| Item | Link |
| --- | --- |
| Environment setup | Installation |
| Pretraining AICrossSim-CLM (60M – 1.1B) and evaluation | LLM Pretraining & Evaluation |
| Bitflip-aware pretraining and evaluation | Random Bitflip on CLM |
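
As a toy illustration of the noise model (the real pipeline uses the mase-triton kernels), the sketch below flips one random bit in a random subset of a weight tensor's raw bytes; the flip probability is an arbitrary placeholder:

```python
import torch

def random_bitflip(w: torch.Tensor, p: float = 1e-3) -> torch.Tensor:
    """Return a copy of `w` with one random bit flipped in ~fraction p of its bytes."""
    raw = w.contiguous().view(torch.uint8)            # reinterpret raw bytes
    mask = torch.rand(raw.shape) < p                  # bytes chosen for a flip
    pos = torch.randint(0, 8, raw.shape).to(torch.uint8)
    flip = torch.ones(raw.shape, dtype=torch.uint8) << pos
    return torch.where(mask, raw ^ flip, raw).view(w.dtype)

w = torch.randn(4, 4, dtype=torch.float16)
print(random_bitflip(w, p=0.05))  # a few entries change, some drastically
```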