Nitrile and latex gloves may cause overestimation of microplastics (news.umich.edu)

by giuliomagnifico 265 comments 583 points

[−] Mordisquitos 48d ago
I'm amazed that wasn't taken into account! Many years ago, in the final year of my Biology degree, I did a paid summer internship at an Evolutionary Biology lab here in Spain, assisting in a project where they were researching relationships between metal ion accumulation (mostly zinc) and certain SNPs (≈"gene varieties"). A lot of my work was in slicing tiny fragments of deep-frozen human livers and kidneys in a biosafety cabinet over dry ice.

The reason I bring this up is that the researchers had taken the essential precaution of providing me with a ceramic knife to do the cutting (and plastic pliers), to eliminate the risk of contaminating the samples with metal from ordinary cutting implements.

That some research on microplastics did not take into account the absolutely mental amount of single-use plastic that is involved in biological research, particularly gloves of all things, boggles the mind.

[−] Thorrez 47d ago

> single-use plastic that is involved in biological research

The samples were not contaminated by plastic in the gloves. Latex gloves don't contain plastic, they're made from natural rubber. Nitrile gloves also don't contain plastic, although they're very similar to plastic.

The contamination that this study found wasn't microplastic contamination. The gloves weren't adding microplastics. The gloves were adding stearates, which aren't plastic, but look like microplastic in many of the methods for measuring microplastics.

[−] johnbarron 47d ago

>> I'm amazed that wasn't taken into account!

This was taken into account: https://news.ycombinator.com/item?id=47563392

[−] timr 47d ago
You found a paper saying that contamination is possible. That doesn’t mean that most of these plastic studies are doing the necessary controls, let alone the (almost impossible) task of preventing the contamination in a laboratory setting where nanomolar detection levels are used to make broad claims.
[−] dahart 47d ago
Are more “controls” what is necessary here? The problem wasn’t plastic contamination, it was the presence of stearates. Distinguishing between stearates and microplastics sounds like a classification problem, not a control problem.

There is practically universal recognition among microplastics researchers that contamination is possible and that strong quality controls are needed, and to be transparent and reproducible, they have a habit of documenting their methodology. Many papers and discussions suggest avoiding all plastics as part of the methodology, e.g. “Do’s and don’ts of microplastic research: a comprehensive guide” https://www.oaepublish.com/articles/wecn.2023.61

Another thing to consider is that papers generally compare against baseline/control samples, and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.

[−] timr 47d ago
Many papers in this field are missing obvious controls, but you’re correct that controls alone are insufficient to solve this problem.

When you are taking measurements at the detection limit of any molecule that is widespread in the environment, you are going to have a difficult time distinguishing signal from background. This requires sampling, replication, and rigorous application of statistical inference.

> Another thing to consider is that papers generally compare against baseline/control samples,

Right, that’s what a control is.

> and overestimating microplastics in baseline samples may lead to a lower ratio of reported microplastics in the test samples, not higher.

There’s no such thing as “overestimating in baseline samples”, unless you’re just doing a different measurement entirely.

What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant. This is a feature, not a bug.

[−] dahart 47d ago
You’re still bringing up different issues than this article we are commenting on.

> There’s no such thing as “overestimating in baseline samples”

What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

> What you’re trying to say is that if there’s a chemical everywhere, the prevalence makes it harder to claim that small measurement differences in the “treatment” arm are significant.

No. What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.

[−] timr 47d ago

> What do you mean? Contamination and mis-measurement of control samples is a thing that actually happens all the time, and invalidates experiments when discovered.

The entire point of a control is to test for that sort of contamination (or more generally, for malfunctions in the experimental workflow). In the case of a negative control, specifically, you're looking for a "positive" where one should not exist. If an experiment is set up such that you can obtain differential contamination in the controls but not the experimental arms, as you've described, then the entire experiment is invalid.

> What I was trying to say is that if the control is either mis-measured, for example by accidentally counting stearates as microplastics, or contaminated, then the summary outcome may underestimate or understate the prevalence of microplastics in the test sample, even though the measurement over-estimated it.

The control cannot be "mis-measured", any more or less than the other arms can be "mis-measured". You treat them identically, otherwise the control is not a control. Neither example you've given is an exception: if the assay mistakes chemical B for chemical A, then it will also do so for the non-controls. If the experimental process contaminates the controls, it will also contaminate the non-controls.

What you're missing is that there's no absolute "correct" measurement -- yes, the control may itself be contaminated with something you don't even know about, thus "understating" the absolute measurement of whatever thing you're looking for, but the absolute measurement was never the goal. You're looking for between-group differences, nothing more.

Just to make it clearer, if I were going to run an extremely naïve experiment of this sort (i.e. detection of trace chemical contamination C via super-sensitive assay A) with any hope of validity, I'd want to do multiple replications of a dilution series, each with independent negative and positive controls. I'd then use something like ANOVA to look for significant deviations across the group means. This is like the "science 101" version of the experimental design. Any failure of any control means the experiment goes in the trash. Any "significant" result that doesn't follow the expected dilution series patterns, again, goes in the trash.
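A minimal sketch of that design in Python (all numbers are made up, and the F statistic is hand-rolled rather than taken from a stats library, purely to show the structure of replicated dilution groups plus a negative control):

```python
import random

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group vs. within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

random.seed(0)
baseline = 5.0  # hypothetical ambient background level of the contaminant

# Replicated dilution series: the signal should scale with the dilution factor.
dilutions = [1.0, 0.5, 0.25]
series = [[baseline + 10 * d + random.gauss(0, 0.5) for _ in range(6)]
          for d in dilutions]
neg_control = [baseline + random.gauss(0, 0.5) for _ in range(6)]  # no spike

F = one_way_anova_F(series + [neg_control])
print(f"F = {F:.1f}")  # large F: group means differ well beyond replicate noise
```

If the "significant" F came without the group means following the dilution pattern, or with a hot negative control, the run goes in the trash, per the design above.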

(This is, of course, after doing everything you can to mitigate for baseline levels of the contaminant in the lab environment, which is a process that itself probably requires multiple failed iterations of the experiment I just described.)

Most of the plastic contamination papers I have read are far, far from even that naïve baseline.

[−] fosdad2131321 47d ago

> The entire point of a control is to test for that sort of contamination

No, the point of a control is to give you a reference point that shares all the systemic biases and unknown unknowns, not to detect those biases. If you follow the same procedure on a known null and on your experiment and observe an effect, assuming you really did exactly the same thing except the studied intervention, you can subtract out the bias.

This is one example of technical jargon diverging from colloquial or intuitive use, and it's the type of thing people who haven't had statistics or scientific-process education often struggle with, because they keep applying their colloquial intuitions.

You talk like you understand this in the rest of the comment, so I'm confused by this framing. The person you're replying to points out (in my reading) that contamination of the control 1) does happen in practice (in the sense that there was an accidental intervention), and 2) if the gloves contaminated both the measurements and the control in the same way, then the control is serving exactly its purpose.

[−] dahart 46d ago
You’re repeating several of my points in your own words, supporting them and not arguing with them, even though your language and emphasis suggests you think you are arguing.

> then the entire experiment is invalid

Isn’t that what I said? You even quoted me saying it. But I didn’t say anything about only control being contaminated or mis-measured, I think you’re assuming something I didn’t say. Validity is, of course, compromised if the control is compromised, regardless of what happens to the test samples.

> The control cannot be “mis-measured” […] yes, the control may itself be contaminated […]

So which is it? Isn't the article we're commenting on about the possibility of mis-measuring? Are you suggesting the issue it describes cannot possibly arise when measuring control samples? Why not?

Controls absolutely can be mis-measured or contaminated or both. It has been known to happen. It’s bad when this happens because it means the experiment has to be re-done.

> If the experimental process contaminates the controls, it will also contaminate the non-controls

Yes! This is exactly what I was implying, and is exactly how you might end up underestimating the relative presence of whatever you’re looking for in the test, if your classification procedure overestimates it.

> You’re looking for between-group differences

Yes! and this is why if, for example, you didn’t notice your control had stearates and you counted them as microplastics accidentally, and then reported that your test sample had 2x more microplastics than your control, you might have missed the fact that your test actually had 10x more microplastics, or that your control actually had none when you thought incorrectly that it had some.
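To put made-up numbers on that (purely illustrative, not taken from the paper):

```python
# Hypothetical counts. Stearates miscounted as microplastics inflate the
# control far more than the test sample, compressing the reported ratio.
true_microplastics = {"control": 2, "test": 20}
stearates_miscounted = {"control": 8, "test": 0}

measured = {k: true_microplastics[k] + stearates_miscounted[k]
            for k in true_microplastics}

reported_ratio = measured["test"] / measured["control"]  # 20 / 10 = 2.0
actual_ratio = (true_microplastics["test"]
                / true_microplastics["control"])         # 20 / 2 = 10.0
print(reported_ratio, actual_ratio)
```

So a 2x headline result can hide a 10x real difference: each individual measurement over-counted, yet the between-group comparison under-stated the effect.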

This, of course, is not the only possible outcome, not the only way that the results might be distorted. But this is one possible outcome that the Michigan paper at hand is warning against, no?

> Most of the papers I have read are far, far from even that naïve baseline.

Short of it, or exceeding it? Based on earlier comments, I assume you mean they’re not meeting your standards. I don’t know what you’ve read, and my brief googling did not seem to support your claims here so far. Can you provide some references? It would be especially helpful if you showed recent/modern SOTA papers, work that is considered accurate, and is highly referenced.

[−] njarboe 47d ago
Any scientific paper that does not document how things were done (methodologies) is basically worthless in the search for truth.
[−] dahart 47d ago
I agree completely. My point is that documenting methodology is standard practice, as is strict quality control, in the microplastics literature. I don’t know what controls are missing according to GP, and we don’t yet have references here to back up that claim. By and large I think researchers are aware of the difficulties measuring this stuff, and doing everything they can to ensure valid science.
[−] idiotsecant 47d ago
Luckily HN software developers, the foremost authority on literally every subject imaginable, are here to bless the world with their insights.
[−] bonoboTP 47d ago
I think there's an important distinction between kinds of smug better-knowing:

"I have unique insight as a non-expert that all experts miss and the entire field is blind to" -> usually nonsense

"I think in this specific instance academically qualified people are missing something that's obvious to me" -> often true.

[−] timr 47d ago
There’s also the possibility that some of us actually, you know…have subject-matter expertise.
[−] refulgentis 47d ago
Spiritual equivalent of a life sciences forum discovering memory safety, one person who wrote code for a bit saying they wrote a memory bug in C once, then someone clutching pearls about why all programmers irresponsibly write memory unsafe code given it has a global impact.

Been here 16 years, it's always an adventure seeing whether stuff like this falls into:

A) Polite interest that doesn't turn into self-keyword-association

B) Science journalism bad

C) Can you believe no one else knows what they're doing.

(A) almost never happens, has to avoid being top 10 on front page and/or be early morning/late night for North America and Europe. (i.e. most of the audience)

(B) is reserved for physics and math.

(C) is default leftover.

Weekends are horrible because you'll get a "harshin' the vibe" penalty if you push back at all. People will pick at your link but not the main one and treat you like you're argumentative. (i.e. 'you're taking things too seriously' but a thoughtful person's version)

[−] Der_Einzige 47d ago
You joke, but given that SWE/AI researchers literally invented AI that does everything else for them and is often super-human at intelligence across most things, I would unironically prefer the opinion of the creator of such a system over most others for most things.
[−] caycep 45d ago
Granted, I feel like regularly reviewing lab equipment is not a bad idea. In my low-level undergraduate summer job, we realized all the stuff I did in those 3 months was moot, because at the end I ran some blanks on the pipetting robot and discovered that a glitch resulted in progressively less material being pipetted toward the end of the tray vs. the beginning...
[−] Betelbuddy 47d ago

>>That some research on microplatics did not take into account the absolutely mental amount of single-use plastic that is involved in biological research, particularly gloves of all things, boggles the mind

What boggles the mind is you commenting on an article you clearly did not read...stating something that is not there...

[−] p-e-w 47d ago

> I'm amazed that wasn't taken into account!

Agreed. While I didn’t anticipate this myself, and likely wouldn’t have figured it out on my own, I also don’t expect my claims to influence global policy.

The scientists who failed to realize this do expect that, so the standards we expect from them need to be higher in accordance with that.

[−] giantg2 48d ago
Classic. This is like that female serial killer in Europe that turned out to actually just be DNA from a woman making the DNA collection swabs.
[−] EPWN3D 47d ago
The various "OMG MICROPLASTICS" studies always smacked of alarmism. No one has actually identified tangible harms from microplastics; it's just taken as a given that they are bad. So this fueled a bunch of studies that tried to find them everywhere. Even the authors of this study go to great pains to not challenge the dogma that microplastics are existentially terrifying. So I fully expect we'll still be panicking over vague, undefined harm whenever we find microplastics somewhere.

This type of research requires very little creativity or study design -- just throw a dart in a room and try and find microplastics in whatever it lands on. Boom, you get a grant for your study, and journalists will cover your result because it gets clicks. Whenever this type of incentive exists, we should be very skeptical of a rapidly-emerging consensus.

[−] s0rce 47d ago
I guess with Raman I can see this being misidentified, but I do testing with FTIR at my job (though not often for microplastics), and we often detect olefins and stearates and they don't seem to get confused. I didn't realize there were stearates on nitrile gloves; we'll need to be more careful about that. We are always wary of protein contamination from people, or cellulose/nylon from clothing.
[−] AndrewKemendo 47d ago
The way this study was done makes perfect sense for finding this cross-contamination issue, but does not actually address how microplastics samples are extracted and found in sampling studies.

The meta-study below largely discusses sampling methods and protection from cross-contamination, so everyone here acting like this one study somehow invalidates decades of quality research should read it:

>Due to the wide contamination of the environment with microplastics, including air [29], measures should be taken during sampling to reduce the contamination with these particles and fibers. The five rules to reduce cross-contamination of microplastic samples are: (1) using glass and metal equipment instead of plastics, which can introduce contamination; (2) avoiding the use of synthetic textiles during sampling or sample handling, preferring the use of 100% cotton lab coat; (3) cleaning the surfaces with 70% ethanol and paper towels, washing the equipment with acid followed by ultrapure water, using consumables directly from packaging and filtering all working solutions; (4) using open petri dishes, procedural blanks and replicates to control for airborne contamination; (5) keeping samples covered as much as possible and handling them in clean rooms with controlled air circulation, limited access (e.g. doors and windows closed) and limited circulation, preferentially in a fume hood or algae-culturing unit, or by covering the equipment during handling [15], [26], [95], [105], [107]. A fume hood can reduce 50% of the contamination [105] while covering samples during filtration, digestion and visual identification can reduce more than 90% of contamination [95].

So don’t ghost ride the whip about the death of the microplastic plague just yet.

https://www.sciencedirect.com/science/article/pii/S016599361...

[−] dust42 48d ago
So basically the gloves that kitchen staff must now wear mean we get an extra dose of microplastics? Yikes.
[−] zug_zug 48d ago
This is good news, probably. We'll have to wait and see which studies replicate and which don't.
[−] userbinator 47d ago
> But stearates are also chemically very similar to some microplastics, according to the researchers, and can lead to false positives when researchers are looking for microplastic pollution.

"Chemically very similar", as in "contains long hydrocarbon chains", something which even all biological matter (lipids) has. I've looked at a few microplastic studies and many of them use pyrolysis and mass spectrometry to detect their presence, which is going to show almost the same results for animal fat as for pure-hydrocarbon plastics like PE (the most common plastic by production volume) and PP.

[−] khalic 48d ago
This study assumes everybody is oblivious to contamination, and explicitly says they can't differentiate. Not useful and bordering on the tautological
[−] jongjong 47d ago
Studies are extremely difficult to get right. I'm generally a little bit skeptical of data for this reason.

A bit of a tangent, but still on the subject of environmental pollution: the other day I found out that CO2 sensor sensitivity naturally drifts over time. So when a CO2 sensor is replaced for long-term climate research, if they calibrate the new sensor to the old one at the time of replacement, the drift gets carried over into the new sensor even if no real change in CO2 occurred. Apparently there are standards to prevent this, but mistakes have been identified multiple times in the methodology for setting that standard. Anyway, measuring data accurately is really hard.
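A toy sketch of that carry-over (all numbers hypothetical):

```python
# An old sensor that drifts +0.2 ppm/year reads 2 ppm high after 10 years.
# If the replacement is calibrated to reproduce the old sensor's reading at
# swap time, that accumulated error becomes a permanent offset in the record.
true_co2 = 420.0          # actual concentration at replacement time (ppm)
drift_per_year = 0.2
years_in_service = 10

old_reading = true_co2 + drift_per_year * years_in_service  # 422.0

# Wrong: calibrate the new sensor against the drifted old one.
offset_inherited = old_reading - true_co2   # the full 2 ppm drift carries over
# Right: calibrate against an independent reference gas standard.
offset_reference = 0.0

print(true_co2 + offset_inherited)  # reads 422.0 -- drift baked in
print(true_co2 + offset_reference)  # reads 420.0
```

The point being that "agrees with the previous instrument" is not the same as "agrees with reality".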

People do not appreciate correctness enough.

[−] inglor_cz 48d ago
While we tend to associate "the observer effect" with particle physics, it can appear in biology and/or chemistry as well.

Keeping things meticulously clean on the microscopic level is a complicated task. One of the many reasons why so few EUV chip fabs even exist.

[−] beloch 47d ago
"The researchers used air samplers which are fitted with a metal substrate. Air passes through the sampler, and particles from the atmosphere deposit onto the substrate. Then, using light-based spectroscopy, the researchers are able to determine what kind of particles are found on the substrate.

Clough prepared the substrates while wearing nitrile gloves, which is recommended by the guidance of literature in the microplastics field. But when she examined the substrates to estimate how many microplastics she captured, the results were many thousands of times greater than what she expected to find."

------------------

The very first thing that should have been done is to run results for a substrate that hadn't been placed in the sampler. You need to know what a zero result looks like just to characterize your setup. You'd also want to run samples with known and controlled micro-plastic concentrations. Why didn't they do this? Their results are utterly meaningless if they didn't.

[−] ErigmolCt 47d ago
So the takeaway is: we've been accidentally adding "microplastics" with the very gloves we use to avoid contamination. That's almost poetic
[−] johnbarron 47d ago
A rediscovery...six years later:

"When Good Intentions Go Bad — False Positive Microplastic Detection Caused by Disposable Gloves" - https://pubs.acs.org/doi/10.1021/acs.est.0c03742

From the study in the OP you cannot conclude that current studies on microplastics are invalid. The headline framing, that scientists have been measuring their own gloves, is science journalism doing what it does best...

Stearates are water-soluble soaps, so any study using standard wet-chemistry extraction, and that is most of them, washes them away before analysis even begins. Stearates also can't mimic polystyrene, PET, PVC, nylon, or any of the dozens of other polymers routinely found in environmental and human tissue samples.

Nothing to see here.