Week 4: Extracting Pharmacodynamic Data
March 30, 2026
Hey guys! Last week I finished collecting the pharmacokinetic data. This week I turned to the other half of my dataset: subjective effect intensity versus time. This is the pharmacodynamic side, describing “how high people felt,” which I’ll eventually pair with the plasma concentration values to build my concentration-effect equation.
The extraction process was largely the same as last week, using WebPlotDigitizer to collect values from the 8 papers. However, this side of the data came with its own set of complexities, each of which needed careful handling.
The first was standardizing the intensity scale. Across my 8 papers, the experimenters didn’t always measure subjective effects with the same scale. Earlier I mentioned that I looked for papers using a Visual Analog Scale (VAS), where participants mark a point on a 100mm line ranging from “nothing” to “extreme,” producing a continuous score between 0 and 100%. One paper, Madsen et al., used a Likert scale instead, where participants chose a whole number between 0 and 10. The two scales are not identical, but since they measure the same underlying construct and the conversion is straightforward, I kept the paper and simply multiplied all the Likert scores by 10, e.g. turning a 7/10 into 70%. All subjective effect data in my master dataset is now on a 0–100% scale.
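As a quick sketch, the conversion step could look like this in Python. The function name and scale labels are my own inventions for illustration, not anything from the papers:

```python
def to_percent(score, scale):
    """Put a subjective-effect score on the common 0-100% scale.

    Scale labels are hypothetical; only Madsen et al. used a Likert scale.
    """
    if scale == "likert_0_10":   # whole number between 0 and 10
        return score * 10        # e.g. 7/10 -> 70%
    if scale == "vas_0_100":     # already a 0-100% VAS score
        return score
    raise ValueError(f"unknown scale: {scale}")

print(to_percent(7, "likert_0_10"))  # prints 70
```

Since both scales are anchored at “nothing” and a maximal effect, a plain linear rescaling like this is the simplest defensible mapping.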
The second decision was which VAS measurement to use. Several papers reported many different kinds of subjective effects simultaneously, for example “any drug effect,” “good drug effect,” “ego dissolution,” and “drug liking.” I decided to use exclusively the “any drug effect” measurement across all papers. It’s a clear, simple measure that captures the overall strength of the psychedelic experience regardless of whether it was pleasant or unpleasant, and it’s also convenient because it lets me gather data from papers that didn’t break their subjective effect measurements down as thoroughly. Using a specific subscale like “good drug effect” could have introduced bias: an intense experience goes with high plasma concentrations, yet an unpleasant one would still earn a low “good drug effect” rating, interfering with the relationship I’m trying to model.
The same mean ± SD challenge from last week also applied here, with one addition. Some figures drew error bars around the measured values that represented confidence intervals rather than standard deviations. A confidence interval indicates how precisely the mean was estimated, not the spread of the raw data. Since the confidence intervals in these figures were consistently narrow, the means were well-estimated and reliable. For these figures, I extracted only the central measured value and noted that the error bars were present but small.
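For anyone curious about the distinction: a 95% confidence interval half-width relates to the standard deviation roughly as half-width ≈ 1.96 · SD / √n under the usual normal approximation, so the two are interconvertible when the sample size is known. A small sketch with made-up numbers (not values from my papers):

```python
import math

def sd_from_ci_halfwidth(half_width, n, z=1.96):
    """Recover an approximate SD from a 95% CI half-width.

    Uses half_width = z * SD / sqrt(n) (normal approximation);
    the inputs below are illustrative, not extracted data.
    """
    return half_width * math.sqrt(n) / z

# e.g. a +/-2-point CI from n = 16 participants implies SD of about 4.08
print(round(sd_from_ci_halfwidth(2.0, 16), 2))  # prints 4.08
```

This is also why a narrow CI is reassuring: for a fixed SD, the interval shrinks as √n grows, so tight error bars mean the plotted mean is pinned down well.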
Again, because WebPlotDigitizer is designed to be used manually, small clicking errors are unavoidable. However, the same argument as last week applies: with data representing 189 participants across 8 studies, random noise should average out rather than bias the results.
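A quick simulation illustrates why, under the assumption that clicking error is unbiased, zero-mean noise. The “true” value and the noise size below are invented for the demo:

```python
import random
import statistics

random.seed(42)

true_value = 50.0      # the "real" point on a figure, in VAS % (made up)
click_noise_sd = 1.5   # assumed size of a manual clicking error (made up)

# Simulate 189 independent noisy clicks and average them.
clicks = [true_value + random.gauss(0.0, click_noise_sd) for _ in range(189)]
error_of_mean = abs(statistics.mean(clicks) - true_value)
print(error_of_mean)  # small: the individual errors largely cancel out
```

The standard error of the mean shrinks like SD/√n, so per-click wobble of a point or two has little effect on the aggregate curve. The key caveat is that this only holds if the errors aren’t systematic, e.g. always clicking slightly above the markers.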
The master paired dataset is now nearing completion. Next week, I begin matching the concentration and effect time curves together and loading everything into Python for the first time. See you all then!
