· New courses

Defending RNG in a world that loves shortcuts

We are opening a focused track on protecting random-number evidence when AI assistants sit next to your engineers. The long read walks through a toy model of seed space (why a 2^128 state space is not the same as "enough for marketing"), the order of magnitude of re-seed risk if a clock skews, and a blunt inventory of where "helpful" automation speeds up the wrong diff. Read the full piece and mention the cohort "RNG & AI defense" on registration if you want a seat in the next round.
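The clock-skew point can be made with one line of arithmetic. The sketch below is illustrative, not from the course material: the helper name and the ±5-second skew window are assumptions, chosen only to show how a nominal 2^128 state space collapses when the seed is actually a guessable timestamp.

```python
import math

def effective_seed_bits(window_ms: int) -> float:
    """Entropy of a seed drawn from a clock within a skew window.

    If a service re-seeds from wall-clock milliseconds and an attacker
    can bound that clock to within `window_ms` candidate values, the
    search space is log2(window_ms) bits -- not the nominal state size.
    """
    return math.log2(window_ms)

# Nominal state space: 128 bits. A millisecond timestamp guessable
# within a +/-5 s skew window is only 10,000 candidates:
print(effective_seed_bits(10_000))  # ~13.3 bits
```

The gap between 128 bits of nominal state and ~13 bits of effective seed entropy is the whole argument: the generator is fine, the seeding discipline is not.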

When your RTP line goes quiet, listen harder

Return-to-player is a long-run story told in a short-run office. The essay does the arithmetic people skip in stand-ups: how the standard error of a mean return shrinks as 1/√N, why a "flat" 96.0% line on a 30-day chart can still be hiding half a point of model mismatch, and what sample sizes actually mean for confidence: not fortune-telling, just variance doing what variance does. Read it if you sign off on numbers your gut does not like.
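The stand-up arithmetic fits in a few lines. This is a sketch with assumed numbers, not the essay's worked example: the per-spin standard deviation of 5x the bet and the 0.5-point gap are illustrative inputs.

```python
import math

def spins_needed(sigma: float, gap: float, z: float = 1.96) -> int:
    """Spins needed before a gap in mean RTP is ~2 standard errors wide.

    The standard error of the mean return is sigma / sqrt(N);
    requiring z * SE <= gap gives N >= (z * sigma / gap) ** 2.
    """
    return math.ceil((z * sigma / gap) ** 2)

# Illustrative: per-spin return std dev of 5x the bet (sigma = 5.0),
# resolving a 0.5-percentage-point RTP gap (gap = 0.005) at ~95%:
print(spins_needed(5.0, 0.005))  # 3,841,600 spins
```

Millions of spins to resolve half a point is why a "flat" 30-day line proves much less than it appears to.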

One stream, many owners: anti-fraud telemetry that survives Monday

A calmer way to run the argument between security, risk, and support: stable names, honest retention, and a change log for rules you can read out loud. We spell out a back-of-napkin Bayes table—fraud base rate, rule sensitivity, false positive rate—so a “high precision” model still does not drown your ops team, and we estimate what 90 days of 2KB JSON per event actually costs at a million events a day. Open the post if you are tired of the same false positive every sprint.
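Both back-of-napkin calculations from the post are short enough to sketch here. The specific rates below are assumptions picked for illustration (0.1% fraud base rate, 90% sensitivity, 1% false-positive rate), not the post's figures; the retention numbers follow the 2 KB, one-million-events-a-day, 90-day framing above.

```python
def precision(base_rate: float, sensitivity: float, fpr: float) -> float:
    """P(fraud | alert) via Bayes: true positives over all alerts."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * fpr
    return true_pos / (true_pos + false_pos)

# With a rare base rate, even a sensitive rule alerts mostly on
# legitimate traffic -- the ops-team-drowning effect:
print(round(precision(0.001, 0.90, 0.01), 3))  # ~0.083

# Retention: 2 KB/event * 1M events/day * 90 days, in GB (1 GB = 1e6 KB):
events_per_day, event_kb, days = 1_000_000, 2, 90
print(events_per_day * event_kb * days / 1e6, "GB")  # 180.0 GB
```

The point of the table is the first number: a rule that sounds precise still yields roughly one real fraud case per dozen alerts at these rates.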