tDCS/tRNS Stimulation for ADHD Treatment
Appreciation
4
Importance
4
Date Added
9.9.25
TLDR
In 19 unmedicated children doing cognitive training, tRNS beats tDCS on parent-rated ADHD symptoms (statistically significant) and on backward digit span (a working-memory test; NOT statistically significant after correction). In both arms, the symptom benefit grew further one week after training, which could be a sign of neuroplasticity or just nonspecific factors, since there was no sham control group.
2 Cents
Frustrated by this paper: I cannot understand why they didn't include a sham control group, since basically all the findings could be attributed to training/placebo/retesting effects. Not a useful read; it is superseded by their 2023 study (linked below).
Tags
The actual, precise improvements
- Only the parent-rated scores were statistically significant. ADHD symptoms (ADHD-RS total, parent-rated, 0–54 points):
    - tRNS: average –3.47 points from baseline (statistically significant).
    - tDCS: average –0.57 points (not significant).
    - The tRNS advantage over tDCS at post-treatment was statistically significant.
    - Scores improved further 1 week later (additional ≈ –1.78 points, statistically significant, regardless of which stimulation they'd just had).
    - It seems the only reason we can't attribute the parent-rated improvement entirely to placebo is that the experiment was double-blind.
- Other scores (secondary results) did not survive multiple-testing correction. Working memory (WISC digit span):
    - Total (forward + backward): tRNS > tDCS by B = +1.07 points (≈ one more digit in total), significant.
    - Backward only (the “manipulate in mind” piece): tRNS > tDCS by B = +0.63 points (≈ two-thirds of a digit), significant; forward span showed no difference.
- Defining the experiment type:
    - Double-blind: Neither kids/parents nor the rating clinicians knew which stimulation (tRNS vs tDCS) was being used during a given phase. That helps reduce expectation bias.
    - Active-control: Instead of comparing to sham (fake) stimulation, they compared two active methods (tRNS vs tDCS). That’s stricter for “which is better” but can’t tell you how either compares to doing nothing beyond training.
    - Crossover: Every participant received both treatments in different weeks (with a no-treatment week in between). Each child acts as their own control, boosting power in small samples. Caveat: carryover effects are possible; the team tried to handle this statistically by re-baselining at the crossover.
- Multiple-testing correction tightens the p-value threshold so the overall chance of any false positive stays low. For example, Bonferroni divides α (e.g., 0.05) by the number of tests, so only smaller p-values count.
    - If you run 10 independent tests at the usual cutoff (p < 0.05), there’s about a 40% chance (1 − 0.95¹⁰ ≈ 0.40) that at least one will look “significant” just by random luck.
    - So multiple-testing correction tightens the threshold (e.g., Bonferroni makes it 0.05/10 = 0.005) to keep the overall false-positive risk near 5%.
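The arithmetic above is easy to verify with a few lines of Python (a minimal sketch; the function names are mine, not from the paper):

```python
def familywise_error_rate(alpha: float, m: int) -> float:
    """Chance that at least one of m independent tests at level
    alpha comes up "significant" by random luck alone."""
    return 1 - (1 - alpha) ** m

def bonferroni_threshold(alpha: float, m: int) -> float:
    """Bonferroni correction: test each hypothesis at alpha / m."""
    return alpha / m

# 10 tests at p < 0.05: ~40% chance of at least one false positive.
print(round(familywise_error_rate(0.05, 10), 3))  # → 0.401

# Corrected per-test threshold for 10 tests.
print(bonferroni_threshold(0.05, 10))             # → 0.005

# With the corrected threshold, the family-wise rate stays near 5%.
print(round(familywise_error_rate(0.005, 10), 3)) # → 0.049
```

This is why the secondary digit-span results, which cleared p < 0.05 on their own, no longer count once the threshold is corrected for the number of outcomes tested.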