A Chrome extension that rewrites any webpage in place — and lets you share the result with anyone.
Used by researchers at Caltech, Oxford, ETH Zürich, and Brown.
Human harvest of wild populations imposes nonrandom selection that reduces desirable phenotypes, in contrast with agriculture, where breeding deliberately increases them. Undesirable changes in exploited populations (reduced body size, earlier sexual maturity, reduced antler size) result from selection against desirable traits, and the evolution caused by harvest may greatly extend recovery time once harvesting stops. We strongly encourage those managing harvested wild populations to implement monitoring programs that detect exploitation-induced selection before it affects population viability.
Human harvest of wild populations is almost always nonrandom: individuals of a certain size, morphology, or behavior are more likely to be removed, and if the selected phenotype has a genetic basis, harvest brings about genetic change. Examples include tuskless elephants increasing from 10% to 38% in Zambia due to poaching, trophy hunting of bighorn sheep causing decreased horn size, and angling potentially selecting for more elusive brown trout.
In agriculture, breeding stock has been selected for thousands of years to increase desirable phenotypes, but wild animal harvesting removes the most desirable individuals, leaving less desirable ones to reproduce. Human-induced selection in the wild received little consideration until recently. Harvest can affect sexual selection by removing individuals with particular characteristics from the breeding pool, and unnatural selection generally acts at cross purposes to sustainable harvest.
One click transforms a dense research paper into an “Explain Like I’m 5” (ELI5) version.
Switch between the original and the rewritten version — without losing your place in the text.
Turn specialist writing into plain language. Read the research paper, the technical blog post, or the Terms & Conditions in words you’d actually use.
Make any article get to the point faster. A 2,000-word feature becomes a 500-word brief. Executive style.
Read between the lines in press releases. Others may fall for it, but not you.
Modernize clunky or dated writing. Same content, smoother read.
Share any piece of web content as a magicreader link.
Your readers can toggle between the full version and a shorter or simplified one — just like switching between light mode and dark mode. No extension needed on their end.
Today’s leading language models contain upwards of a trillion parameters, pretrained on tens of trillions of tokens. Base model performance keeps improving with scale, as these trillions of parameters are needed to capture the breadth of knowledge and capabilities these models exhibit.
In contrast, post-training involves smaller datasets and generally focuses on narrower domains of knowledge and ranges of behavior. It seems wasteful to use a trillion weights to represent updates that could be compressed into a much smaller space. This intuition motivates parameter-efficient fine-tuning (PEFT), which updates only a small fraction of the model's parameters. The leading PEFT method is low-rank adaptation, or LoRA. LoRA replaces each weight matrix W from the original model with a modified version W′ = W + γBA, where B and A are matrices whose product has the same shape as W but which together contain far fewer parameters, and γ is a scaling factor.
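The reparameterization above can be sketched in a few lines of NumPy. This is a minimal illustration, not an excerpt from any LoRA implementation; the layer sizes, rank, and γ value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 32, 4   # hypothetical layer sizes and LoRA rank
gamma = 2.0                  # hypothetical scaling factor

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: W' == W at start

def forward(x, W, A, B, gamma):
    # Equivalent to (W + gamma * B @ A) @ x, but computed without ever
    # materializing the full-rank update BA.
    return W @ x + gamma * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted layer matches the base layer exactly.
assert np.allclose(forward(x, W, A, B, gamma), W @ x)

# The adapter is far smaller than the weight it modifies:
# r*(d_in + d_out) = 384 trainable parameters vs. d_out*d_in = 2048 in W.
```

Note the zero initialization of B: it guarantees the adapted model starts out identical to the base model, so training begins from the pretrained behavior rather than a perturbed one.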
LoRA may offer advantages in the cost and speed of post-training, and there are also operational reasons to prefer it to full fine-tuning. One is multi-tenant serving: because LoRA trains an adapter while keeping the original weights unchanged, a single inference server can hold many adapters in memory and sample from all of them simultaneously in a batched way.
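The multi-tenant idea can be sketched as follows: one frozen base weight is shared across a batch, and each request adds only its own cheap low-rank correction. The tenant names, sizes, and the loop-based dispatch here are illustrative assumptions; production servers batch the adapter math too.

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, r = 64, 32, 4   # hypothetical sizes
gamma = 2.0

W = rng.normal(size=(d_out, d_in))  # one frozen base weight, shared by everyone

# One (B, A) adapter pair per tenant; the names are hypothetical.
adapters = {
    "tenant_a": (rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))),
    "tenant_b": (rng.normal(size=(d_out, r)), rng.normal(size=(r, d_in))),
}

def serve_batch(xs, tenant_ids):
    # One base-weight matmul covers the whole batch, regardless of tenant...
    X = np.stack(xs, axis=1)   # (d_in, batch)
    base = W @ X               # (d_out, batch), shared compute
    # ...then each request gets its own cheap low-rank correction.
    out = np.empty_like(base)
    for j, t in enumerate(tenant_ids):
        B, A = adapters[t]
        out[:, j] = base[:, j] + gamma * (B @ (A @ X[:, j]))
    return out

xs = [rng.normal(size=d_in) for _ in range(3)]
out = serve_batch(xs, ["tenant_a", "tenant_b", "tenant_a"])
```

Each column of the result equals what that tenant's fully merged weights (W + γBA) would have produced, but the server never stores a full weight copy per tenant.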
These reasons are sufficient to explain the growing popularity of LoRA since the original LoRA paper was published in 2021. However, there is agreement that LoRA underperforms full fine-tuning in settings that resemble pre-training, namely those whose datasets are so large that they exceed the storage capacity of the low-rank matrices.
In our experiments, we find that indeed, when we get a few key details right, LoRA learns with the same sample efficiency as full fine-tuning and achieves the same ultimate performance. This article covers a series of supervised fine-tuning and reinforcement learning experiments we conducted to determine the conditions under which LoRA matches full fine-tuning efficiency.
Add magicreader to Chrome. It takes ten seconds.
The content is shortened or rewritten in place. Toggle back anytime.
Generate a link that lets others toggle between versions — no extension needed.