AI neurostimulation for attention
See this tweet for a nice summary!
#Key notes
- tRNS (transcranial random noise stimulation) is a non-invasive electrical brain stimulation method that uses weak, random electric currents to influence brain activity.
- tRNS modulates neural activity by activating sodium channels and adjusting the excitation/inhibition balance.
- Study done with 35 healthy adults (not people with ADHD).
#Experiment itself
- Home-based and AI-driven (Bayesian optimization).
- Two significant barriers limit the optimization and scalability of neurostimulation: personalization and ecological validity. Personalization (tailoring stimulation parameters to individuals) often requires resource-intensive methods such as exhaustive parameter testing or MRI-based adjustments, which are impractical at scale. Ecological validity is the other challenge: most studies occur in controlled laboratory settings that poorly reflect real-world environments like homes or workplaces, limiting the generalizability of findings and hindering real-world implementation.
- How do we know regression to the mean isn't happening here? Because there was a within-person comparison: each participant did all three conditions (personalized tRNS, one-size, sham) on different days. Any natural bounce-back, practice effect, or day-to-day noise should affect all three conditions similarly.
- There was an explicit sham control: if "low people bounce up" were the driver, you'd expect low-baseline participants to improve under sham too.
#How'd the "AI-tuning" actually work?
- What’s being tuned? One parameter: current intensity (0.1–1.6 mA, step 0.1).
- Personalizers (“covariates” or inputs): baseline A′ (your pre-stim performance) and head circumference.
- Process
- Experiment 1 (algorithm build)
- Burn-in: 72 sessions with random intensities to seed the model.
- Then pBO: 218 sessions where a single global GP over [intensity, baseline A′, head size] is refit before each stimulation session and then proposes the next session’s intensity for that specific participant profile.
- Pooling: The GP uses all accumulated data across users, but outputs a personalized intensity because baseline A′ and head size are inputs.
- Experiment 3 (validation): Each participant did three sessions on different days: (i) personalized-by-pBO, (ii) one-size (1.5 mA), (iii) sham. For the pBO session, the model (trained on Exp-1) chose one intensity for that session; there wasn’t an A/B/C ladder inside a single session.
(Actual title: "Personalized home based neurostimulation via AI optimization augments sustained attention"; included in Neurode's cited papers.)
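The paper's code isn't reproduced here, but the pooled-GP idea can be sketched: fit one surrogate over [intensity, baseline A′, head circumference] using all accumulated sessions, then for a given participant profile pick the intensity on the 0.1–1.6 mA grid with the best predicted outcome. Everything below (data, kernel, noise level) is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

# Toy pooled history across users: [intensity (mA), baseline A', head circumference (cm)].
X = rng.uniform([0.1, 0.5, 52.0], [1.6, 1.0, 60.0], size=(80, 3))
# Made-up "truth": the best intensity shifts with baseline; observation noise added.
y = -(X[:, 0] - (2.0 - X[:, 1])) ** 2 + 0.05 * rng.standard_normal(80)

# Standardize inputs so a single length-scale is plausible for all three dimensions.
mu, sd = X.mean(0), X.std(0)
Xs = (X - mu) / sd
K = rbf(Xs, Xs) + 1e-2 * np.eye(len(Xs))  # assumed noise level
alpha = np.linalg.solve(K, y)

def personalized_intensity(baseline, head):
    """Pick the grid intensity (0.1-1.6 mA, step 0.1) with the best GP posterior mean."""
    grid = np.round(np.arange(0.1, 1.61, 0.1), 1)
    cand = np.column_stack([grid, np.full_like(grid, baseline), np.full_like(grid, head)])
    posterior_mean = rbf((cand - mu) / sd, Xs) @ alpha
    return grid[np.argmax(posterior_mean)]

print(personalized_intensity(0.7, 55.0))  # one intensity for this participant profile
```

The key point: one global model, but a personalized output, because the covariates enter the kernel alongside intensity.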
#Other learning
- Bayesian optimization is a method for optimizing expensive black-box functions (each test is expensive and the underlying function is unknown).
- Attempts to balance exploration with exploitation.
- It works by iteratively building a probabilistic model (often a Gaussian process) of the function and using an acquisition function (like Expected Improvement) to intelligently select the next experiment to run.
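A minimal, self-contained sketch of that loop, with a toy 1-D function, a hand-rolled GP, and Expected Improvement (all settings assumed for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Toy stand-in for the expensive black-box function (true optimum at x = 0.6)."""
    return -(x - 0.6) ** 2

def gp_posterior(Xtr, ytr, Xte, ell=0.2, noise=1e-4):
    """Posterior mean/std of a zero-mean GP with an RBF kernel (1-D inputs)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    Kinv = np.linalg.inv(k(Xtr, Xtr) + noise * np.eye(len(Xtr)))
    Ks = k(Xte, Xtr)
    mean = Ks @ Kinv @ ytr
    var = 1.0 - np.einsum('ij,jk,ik->i', Ks, Kinv, Ks)
    return mean, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mean, std, best):
    """EI for maximization: expected amount by which a probe beats the best so far."""
    z = (mean - best) / std
    Phi = 0.5 * (1.0 + np.array([math.erf(v / math.sqrt(2)) for v in z]))
    phi = np.exp(-0.5 * z**2) / math.sqrt(2 * math.pi)
    return (mean - best) * Phi + std * phi

X = rng.uniform(0, 1, 3)   # burn-in: a few random probes to seed the model
y = f(X)
grid = np.linspace(0, 1, 101)
for _ in range(10):        # loop: refit surrogate, probe where EI is highest
    mean, std = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mean, std, y.max()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))

print(X[np.argmax(y)])  # best probe found; moves toward the true optimum 0.6
```

EI is large where the predicted mean is high (exploitation) or the uncertainty is high (exploration), which is how the balance in the bullet above is achieved.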
- The F-statistic is a ratio: variability between groups ÷ variability within people (noise). Higher F → groups differ more than you’d expect by noise.
- Syntax: F(df₁, df₂) = value. df₁ (numerator df): how many group contrasts you’re testing (for 3 groups, df₁ = 2).
- df₂ (denominator df): the residual degrees of freedom. In mixed models, this can be non-integer (Satterthwaite approximation), e.g., 59.57.
- p-value: probability of seeing an F at least this big if, in truth, there’s no real group difference.
- Lower p → stronger evidence against “no difference” (common cutoff p < 0.05).
- p is not effect size; it’s a strength-of-evidence number.
- Applying this:
- Whole sample: F(2, 59.57)=0.27, p=0.77 → tiny F, big p → no reliable difference among the 3 conditions overall.
- Low-baseline subgroup: F(2,25.13)=7.51, p=0.003 → big F, small p → conditions differ. Follow-ups: personalized > one-size > sham
- High-baseline subgroup: F(2,25.82)=0.56, p=0.58 → no difference.
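The paper used linear mixed models (hence the non-integer df₂), but the ratio itself can be illustrated with a plain one-way ANOVA F computed by hand on made-up numbers:

```python
# One-way ANOVA F on made-up scores (not study data).
groups = {
    "personalized": [0.82, 0.85, 0.80, 0.88, 0.84],
    "one-size":     [0.78, 0.76, 0.80, 0.75, 0.79],
    "sham":         [0.77, 0.74, 0.78, 0.73, 0.76],
}

all_vals = [v for g in groups.values() for v in g]
grand = sum(all_vals) / len(all_vals)

# Between-group sum of squares: how far the group means sit from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
# Within-group sum of squares: scatter of individuals around their own group mean.
ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups.values() for v in g)

df1 = len(groups) - 1              # 3 conditions -> df1 = 2
df2 = len(all_vals) - len(groups)  # residual df (here 15 - 3 = 12)

F = (ss_between / df1) / (ss_within / df2)  # between-per-df / within-per-df
print(f"F({df1}, {df2}) = {F:.2f}")
```

If the three group means were roughly equal, ss_between would shrink toward zero and F toward ~1, matching the “tiny F, big p” pattern in the whole-sample result.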