Read & write the web in executive mode.

A Chrome extension that rewrites any webpage in place — and lets you share the result with anyone.

Used by researchers at Caltech, Oxford, ETH Zürich, and Brown.

pnas.org/doi/full/10.1073/pnas.0901069106

Human-induced evolution caused by unnatural selection through harvest of wild animals

Abstract

Human harvest of wild populations imposes nonrandom selection that reduces desirable phenotypes, contrasting with agriculture where such phenotypes are bred to increase. Undesirable changes in exploited populations (reduced body size, earlier sexual maturity, reduced antler size) result from selection against desirable traits, and the evolution caused by harvest may greatly extend recovery time once harvesting stops. We strongly encourage those managing harvested wild populations to implement monitoring programs to detect exploitation-induced selection before it impacts viability.

Human harvest of wild populations is almost always nonrandom: individuals of a certain size, morphology, or behavior are more likely to be removed, bringing about genetic change whenever the selected phenotype has a genetic basis. Examples include tuskless elephants rising from 10% to 38% of the population in Zambia due to poaching, trophy hunting of bighorn sheep causing decreased horn size, and angling potentially selecting for more elusive brown trout.

In agriculture, breeding stock has been selected for thousands of years to increase desirable phenotypes, but wild animal harvesting removes the most desirable individuals, leaving less desirable ones to reproduce. Human-induced selection in the wild received little consideration until recently. Harvest can affect sexual selection by removing individuals with particular characteristics from the breeding pool, and unnatural selection generally acts at cross purposes to sustainable harvest.

One click transforms a dense research paper into an “Explain Like I’m 5” (ELI5) version.

See it in action.

Switch between the original and the rewritten version — without losing your place in the text.

Read the executive way.

Simplify

Turn specialist writing into plain language. Read the research paper, the technical blog post, or the Terms & Conditions in words you’d actually use.

Speed Read

Make any article get to the point faster. A 2,000-word feature becomes a 500-word brief. Executive style.

Cut Through Spin

Read between the lines in press releases. Others may fall for it, but not you.

Clean

Modernize clunky or dated writing. Same content, smoother read.

Write once. Readers choose.

Share any piece of web content as a magicreader link.

Your readers can toggle between the full version and a shorter or simplified one — just like switching between light mode and dark mode. No extension needed on their end.

thinkingmachines.ai/blog/lora
magicreader
Thinking Machines Lab
LoRA Without Regret
John Schulman · Sep 29, 2025

Today’s leading language models contain upwards of a trillion parameters, pretrained on tens of trillions of tokens. Base model performance keeps improving with scale, since those trillions of parameters are needed to capture the vast knowledge and range of capabilities these models exhibit.

In contrast, post-training involves smaller datasets and generally focuses on narrower domains of knowledge and ranges of behavior. It seems wasteful to use a trillion parameters’ worth of weights to represent updates that could be compressed into a much smaller space. The leading parameter-efficient fine-tuning (PEFT) method is low-rank adaptation, or LoRA. LoRA replaces each weight matrix W from the original model with a modified version W′ = W + γBA, where B and A are matrices with far fewer parameters combined than W, and γ is a scaling factor.
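The update above can be sketched in a few lines of NumPy. The dimensions, initialization, and scaling choice here are illustrative assumptions for a toy layer, not values from the post:

```python
import numpy as np

# Hypothetical toy dimensions; real model layers are far larger.
d_out, d_in, r = 64, 64, 8   # r << d_out, d_in: the low-rank bottleneck
gamma = 2.0 / r              # scaling factor (one common choice: alpha / r)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight, never updated
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init, so W' == W at the start

# The modified weight the model actually uses: W' = W + gamma * B @ A
W_prime = W + gamma * B @ A

# Parameter count of the full matrix vs. the LoRA adapter
full_params = d_out * d_in           # 4096 for this toy layer
lora_params = r * (d_out + d_in)     # 1024: the "much smaller space"
```

Only A and B receive gradient updates during fine-tuning; W stays frozen, which is what makes the adapter cheap to store and swap.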

LoRA may offer advantages in the cost and speed of post-training, and there are also a few operational reasons to prefer it to full fine-tuning, such as multi-tenant serving: since LoRA trains an adapter while keeping the original weights unchanged, a single inference server can keep many adapters in memory and sample from them simultaneously in a batched way.

These reasons are sufficient to explain the growing popularity of LoRA since the publication of the original LoRA paper in 2021. However, there is agreement that LoRA underperforms in settings that resemble pre-training, namely those with very large datasets that exceed the storage capacity of the low-rank matrices.

In our experiments, we find that indeed, when we get a few key details right, LoRA learns with the same sample efficiency as full fine-tuning and achieves the same ultimate performance. This article covers a series of supervised fine-tuning and reinforcement learning experiments we conducted to determine the conditions under which LoRA matches full fine-tuning efficiency.

How it works.

1. Install the extension

Add magicreader to Chrome. It takes ten seconds.

2. Hit Ctrl+− / Ctrl+S on any page

The content shortens or rewrites in place. Toggle back anytime.

3. Share with anyone

Generate a link that lets others toggle between versions — no extension needed.

Trusted by the smartest readers.

Caltech
Brown University
Oxford
ETH Zürich

Choose the right membership.

Popular with Students & Researchers
Starter
$0/mo

Get started for free.
Modest monthly usage limits.

Get started
Plus
$12/mo

For everyday readers.
Higher limits for long documents.

Get Plus
Pro
$36/mo

For professionals.
Highest limits, early access to beta features.

Get Pro

Try magicreader today.