As Jeffrey discussed on Saturday, “some” in the US intelligence community (aka DIA) have “increasing concerns that North Korea has an ongoing covert uranium enrichment program”. David Albright, in Glenn Kessler’s story, claims this is based on discovering a single HEU particle that is just 3.5 years old.
Just the day before hearing this news I was, by coincidence, reading this year’s IPFM report. Commenting on Jeffrey’s post, Alex Glaser (one of the Princeton boys who produced the report) summarizes their key conclusion about dating HEU:
As noted in the quote from the IPFM report, it is impossible to determine (or estimate) the age of an HEU particle from the Th-230/U-234 isotope ratio if the particle is micron-sized, which is what we expect to find on a swipe sample, and less than 20-25 years old. In other words, an HEU sample has to be quite “large” in order to be able to specify an age of less than five years using the Th-230/U-234 method.
The IC’s finding—that the particle in question was just three and a half years old—is therefore noteworthy to say the least. So, is it plausible?
The short answer is yes, if the particle was sufficiently large. The larger a particle is, the greater the number of atoms it contains, and hence the smaller the relative error in determining the Th-230/U-234 ratio. So, better questions are: How large would such a particle have to have been? And, how likely is it that USIC would have found a particle of this size?
Those of you without a mathematical/scientific bent may like to skip forward to the results at this point. However, I am going to lay out my analysis in considerable depth so very keen readers can scrutinize it.
Summary of the science
First off, we don’t know what method was used to date the particle. The method I describe here—using thermal ionization mass spectrometry (TIMS)—is probably the best described in the literature (including in an article by LaMont and Hall, two US government scientists at Savannah River) but it is not the only possibility. Apart from other forms of mass spectrometry and other isotopes of interest, Alex Glaser points out the possibility of using fluorine to date the particle. But here we will stick with TIMS…
All uranium contains a small fraction of U-234, which decays (with a half-life of 246,000 years) to Th-230. The age, t, of the sample (more properly, the age since it was last purified) is related to the Th-230/U-234 ratio, R, by the equation:

t = (1/β) ln(1 − R/K)   (1)

where β = −6.4 × 10<sup>−6</sup> yr<sup>−1</sup> and K = 0.44 (their relationship to the half-lives of U-234 and Th-230 is given here, which also has a nice supplementary discussion of this technique).
Incidentally, a formula that will be useful later is obtained by inverting (1):

R = K(1 − e<sup>βt</sup>)   (2)
Anyway, in terms of experimental technique, the sample must first be chemically treated to separate the U and Th before TIMS can be used to count the number of U-234 and Th-230 atoms. From this, R and hence t can be calculated using (1). To correct for inevitable process losses (which may affect U and Th differently), the sample is “spiked” with known amounts of “tracers”. Tracers are isotopes that are not otherwise present in the sample (such as U-233 or Th-229) and they provide a “yardstick” against which to measure the U-234 and Th-230 concentrations.
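For readers who want to play with the numbers, here is a minimal Python sketch of (1) and (2). The constants are those quoted above; the sample age is purely illustrative.

```python
import math

BETA = -6.4e-6   # yr^-1 (note beta < 0, as defined above)
K = 0.44

def age_from_ratio(r):
    """Equation (1): age t in years from the Th-230/U-234 ratio R."""
    return math.log(1.0 - r / K) / BETA

def ratio_from_age(t):
    """Equation (2): Th-230/U-234 ratio R from age t in years."""
    return K * (1.0 - math.exp(BETA * t))

r = ratio_from_age(3.5)      # ratio for a nominally 3.5-year-old sample
print(r)                     # ~9.9e-6
print(age_from_ratio(r))     # recovers 3.5
```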
Error analysis
Because the concentration of U-234 in a sample is much, much higher than the concentration of Th-230, the former can be measured much more accurately than the latter. In fact, given that in very small samples almost all the uncertainty in determining R comes from the uncertainty in the number of Th-230 atoms, then

σ<sub>R</sub> ≈ σ<sub>N2</sub>/N<sub>1</sub>   (3)

where σ<sub>R</sub> is the uncertainty in R and σ<sub>N2</sub> is the uncertainty in the number of Th-230 atoms. N<sub>1</sub> and N<sub>2</sub> denote the number of U-234 and Th-230 atoms respectively.
In the very small samples we are worrying about here, almost all the uncertainty in N<sub>2</sub> comes from so-called counting statistics: the inherent variability in the number of Th-230 particles that hit the detector inside the mass spectrometer (see Table 1 in LaMont and Hall for a detailed “error budget”). Because this process is described by Poisson statistics, if there were no process losses we would expect σ<sub>N2</sub> = N<sub>2</sub><sup>0.5</sup>. In reality, because only a fraction of the Th-230 atoms in the sample contribute to the mass spectrometry, the uncertainty in N<sub>2</sub> is given by

σ<sub>N2</sub> = A N<sub>2</sub><sup>0.5</sup>   (4)

where A is a constant (much bigger than 1) that undoubtedly depends on the specific experimental set-up. Physically, A<sup>−2</sup> represents the fraction of Th-230 atoms that contribute to the spectrometry.
Finally, by differentiating (1), associating dt and dR with σ<sub>t</sub> and σ<sub>R</sub>, and using (2), (3) and (4), I obtain:

σ<sub>t</sub> = −(A/β) e<sup>−βt</sup> [(1 − e<sup>βt</sup>)/(K N<sub>1</sub>)]<sup>0.5</sup>   (5)

where σ<sub>t</sub> is the uncertainty in the age of the particle.
Note that all the constants in (5) are known, except for A. Fortunately, the paper by LaMont and Hall provides enough information to estimate A=6200 for their experimental set-up. I won’t go through the details of that calculation here (but can supply them on request, of course).
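To make (5) concrete, here is a minimal Python sketch of the error model. A = 6200 and the spherical particle of density 10 g cm<sup>-3</sup> follow the discussion in this post; the ~1% U-234 mass fraction used to compute N<sub>1</sub> is my assumption (roughly what one would expect for HEU), so treat the output as illustrative.

```python
import math

BETA, K, A = -6.4e-6, 0.44, 6200.0   # constants from the post
AVOGADRO = 6.022e23

def n_u234(diameter_um, density=10.0, f234=0.01):
    """N1: number of U-234 atoms in a spherical particle.
    f234 (~1% U-234 by mass) is an assumption, not a figure from the post."""
    volume_cm3 = (math.pi / 6.0) * (diameter_um * 1e-4) ** 3
    return volume_cm3 * density * f234 / 234.0 * AVOGADRO

def sigma_t(diameter_um, t):
    """Equation (5): 1-sigma age uncertainty in years."""
    n1 = n_u234(diameter_um)
    return (-A / BETA) * math.exp(-BETA * t) * math.sqrt(
        (1.0 - math.exp(BETA * t)) / (K * n1))

print(sigma_t(100.0, 3.5))   # ~0.6 yr for a 100-micron, 3.5-year-old particle
```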
I want to point out at this stage that, applied to older particles, my model leads to significantly different conclusions from the 2008 IPFM report. As stated above, that report concludes that a particle of about 3 microns in diameter is needed to date material that is a few decades old. In contrast, my model predicts, for instance, that a particle of 50 microns in diameter would be needed to date a 25-year-old particle to within an accuracy of 5 years. I believe the difference arises because the IPFM conclusion is based on the particle size needed to contain a detectable quantity of Th-230, whereas I think a larger amount of Th-230 is needed to measure the number of atoms accurately enough for dating. As I say at the end of this posting, however, I am by no means completely confident in my model, and given the quality of IPFM’s work the fact that my conclusion differs from theirs does worry me. Anyway, on to the results using my model…
Results
If you skipped the last couple of sections: welcome back.
The graph below shows the diameter of a particle (in microns) against the uncertainty in its age (in years), assuming a spherical particle with a nominal age of 3.5 years and a density of 10 g cm<sup>-3</sup>. Apologies for the lack of axis labels—I am having Excel problems…
[Figure: particle diameter (x-axis, ×10<sup>−6</sup> m) against uncertainty in age (y-axis, years)]
Now, USIC told Albright that the age of the particle they found was 3.5 years. Given that they cite the age to the nearest half year, I take their uncertainty to be somewhere between six months and a year. Based on the graph above, this leads to a particle diameter of between 80 and 120 microns or, in terms of mass, between 2 and 7 microgrammes.
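This estimate can be reproduced numerically by inverting the sketch above. Again, the ~1% U-234 mass fraction is my assumption, so the numbers come out slightly different from, but broadly consistent with, the 80-120 micron range quoted here; the inversion uses the fact that σ<sub>t</sub> scales as d<sup>-3/2</sup>.

```python
import math

BETA, K, A, AVOGADRO = -6.4e-6, 0.44, 6200.0, 6.022e23

def sigma_t(d_um, t, density=10.0, f234=0.01):
    # f234 (~1% U-234 by mass) is an assumption, not a figure from the post
    vol_cm3 = (math.pi / 6.0) * (d_um * 1e-4) ** 3
    n1 = vol_cm3 * density * f234 / 234.0 * AVOGADRO   # U-234 atoms
    return (-A / BETA) * math.exp(-BETA * t) * math.sqrt(
        (1.0 - math.exp(BETA * t)) / (K * n1))

def diameter_for(sigma_target, t=3.5, d_ref=100.0):
    # sigma_t is proportional to d^(-3/2), so invert around a reference point
    return d_ref * (sigma_t(d_ref, t) / sigma_target) ** (2.0 / 3.0)

for s in (0.5, 1.0):
    d = diameter_for(s)
    mass_ug = (math.pi / 6.0) * (d * 1e-4) ** 3 * 10.0 * 1e6
    print(f"sigma_t = {s} yr -> diameter ~{d:.0f} microns, mass ~{mass_ug:.1f} micrograms")
```

With these assumptions the script gives roughly 70-110 microns and about 2-7 micrograms, in the same ballpark as the figures above.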
This would represent a very large particle. In fact, it is two orders of magnitude larger than the size normally found on a swipe sample: 1-3 microns (according to IPFM). I simply don’t know enough about environmental sampling to know whether such uber particles do occur from time to time. If any of you do, please feel free to comment.
It’s hard to say, therefore, whether USIC’s detection of a 3.5 year old particle is plausible. Nonetheless, I would certainly urge caution in interpreting this claim. Assuming my analysis is correct (and that is a big assumption—see below), USIC has either found a truly huge particle (which may be unlikely) or they have a worryingly large error bar on their result (potentially a few years or more).
Moreover, even if USIC did find one huge particle, basing key policy choices on just one particle is foolhardy. There is an interesting reference in LaMont and Hall to “discarding outliers”. In other words, they occasionally obtained a result that was so wrong (many standard deviations away from the mean) that it was attributed to human error and discarded. If you have just one particle for analysis it is impossible to know whether your one result is one of these outliers.
Should you believe me?
Good question. Let me be very clear about the limitations of this analysis.
The above analysis has not been peer reviewed. In particular, I had to postulate equation (4) because I couldn’t find a relevant discussion of this issue in the literature. Moreover, my estimate of mass spectrometry accuracy (essentially represented by the constant A) is based on one paper that is four years old. If TIMS has moved on since then, or if USIC employed some other, more accurate technique, then it may have been able to get away with a much smaller particle. All of this I freely acknowledge. I don’t claim this is anything more than a first, rough attempt to get something out quickly to help inform the debate about North Korean enrichment. Please don’t treat this as anything more.
Your math checks out, but you can drop the minus sign in (5).
I take it your estimate of TIMS efficiency is based on your lit reference. This may reflect standard cost-effective practice, but more highly efficient ionization and beam formation might be possible using more expensive equipment and techniques.
However, I wouldn’t assume that the “IC” report of a 3.5-year estimate implies a 0.5-year sigma for this figure rather than something much larger. Perhaps it should imply that, and “intelligence” should never be politicized, of course.
Thanks, James, for taking the trouble to do this analysis. Provisional it may be, but also enlightening.
A point you touch on briefly perhaps could use a little amplification: “If you have just one particle for analysis it is impossible to know whether your one result is one of these outliers.”
If David Albright’s account (as related by Glenn Kessler) is accurate, the 3.5 years figure is based on one particle out of a few. (The article quotes Albright as saying there are “very few particles.”) So there isn’t necessarily just one particle for analysis. We are left to wonder why this one is special, and if it indeed isn’t an outlier.
Then again, particle size distributions are usually quite broad, and big outliers may be numerically rare, but they stand out because they’re big. If one saw a single large particle on microscope examination of a swab, it might be convenient to pick out that one particle for analysis, and I don’t agree that one could not have high confidence in the single measurement; there might not be much room for human error in measuring the isotope ratio, particularly if the two counts are done simultaneously, and especially if a control measurement is run immediately before and/or after.
So, your analysis is interesting but so far inconclusive.
James,
As Mark has noted, there is no real problem with your math – but I think you are barking up the wrong tree.
Your lit reference does not really explore the limits of what TIMS is capable of at the cutting edge, but that is also just a red herring.
I think it far more likely that the measurement was made using Accelerator Mass Spectrometry (AMS) with the 230Th being measured on-axis in a gas-ionisation detector and the 234U being measured off-axis as a beam current in a Faraday cup.
With suitable controls of the chemistry of sample preparation that would push your A factor down two orders of magnitude.
Does that make sense?
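If Russell is right, the effect on the required particle size would be dramatic. A quick numerical illustration, assuming James’s model and its σ<sub>t</sub> ∝ A/d<sup>3/2</sup> scaling hold, so that the required diameter scales as A<sup>2/3</sup> at fixed age and uncertainty:

```python
d_tims = 100.0   # microns: the rough particle size implied by A = 6200 above
for a in (6200.0, 620.0, 62.0):
    d = d_tims * (a / 6200.0) ** (2.0 / 3.0)
    print(f"A = {a:6.0f} -> required diameter ~{d:5.1f} microns")
# A = 62 gives ~4.6 microns: close to the 1-3 micron swipe-sample range
```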
This is a cool blog post. As James and Mark say, though, it truly does seem like everything depends on the value of A. I mean, A = 6200 is saying that only one in 38 million Th atoms is being counted.
While I can see that, from a practical standpoint, there are going to be enormous losses coupling into the mass spec, is 38 million really the limit? Maybe that’s the limit if the mass spec is fed by a plasma source, as it normally would be.
But wouldn’t it be possible to build a mass spec fed by milling off the surface with a focused ion beam system, e.g. a low-energy argon beam that sputters U/Th atoms off the actual particle? Perhaps the impact ionization would be enough, but to leave nothing to chance, one could laser-ionize the atoms after they come off the surface. (I suppose this would create an important calibration constant in the relative ionization of U and Th, and the relative sputtering efficiency, but this should be measurable.) You steer the ion beam until you start to see the uranium signal appearing in the mass spec output. Now, the Th and U atoms come off with an energy spectrum of many eV, but once they are ionized, I can accelerate the hell out of them, so the chromatic dispersion of the ion-optical column should be small.
While perhaps difficult (read: expensive), I think modestly high collection efficiencies should be possible in an arrangement like this.
Let’s do some numbers (in a more primitive way than James’s). 3.5 years means that the Th/U ratio is about
3.5 yr/246,000 yr = 1.4e-5
Our 1 micron particle at 10 g/cm^3 has about 1.3e10 atoms, so we are talking about 186,000 Th atoms in it. Say we can ionize and collect 3% of the atoms into a 10 degree FWHM ion optics column, and the particle distribution overall is Lambertian off the surface (cos theta); then I get 186,000 × (5/57.296)^2 × 0.03 = 42 Th atoms, which should be (roughly) enough to measure 1/7 of the age, e.g. 0.5 years, right? (This all corresponds to A = 66 instead of 6200.) I mean, high fractional ionization is certainly possible in AVLIS systems, and the ion optics I’m suggesting don’t seem too hard.
I agree that the specialized machine will cost a couple of million bucks to make, but, in principle, why not? Fundamentally, it seems possible.
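For what it’s worth, John’s back-of-envelope numbers can be transcribed directly into a short script (faithfully, as he states them; note that a later comment points out that only about 1% of the uranium is U-234, which would cut the Th-230 count by a factor of roughly 100):

```python
import math

atoms_u = 1.3e10                  # 1-micron sphere at 10 g/cm^3
ratio = 3.5 / 246000.0            # his rough Th/U ratio, ~1.4e-5
atoms_th = atoms_u * ratio        # ~186,000 Th-230 atoms
collected = atoms_th * (5.0 / 57.296) ** 2 * 0.03   # 10-deg FWHM column, 3% collection
print(collected)                  # ~42 atoms
print(1.0 / math.sqrt(collected)) # ~0.15 fractional age uncertainty, i.e. ~1/7
```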
James,
Your choice of 10 g/cc for the density seems to imply a non-metallic sample. Is that what was found?
Note to John Field,
A cool device for doing what you are describing is the Sensitive High Resolution Ion Microprobe (SHRIMP) developed at the Australian National University (ANU).
Still, my money would be on the measurements being taken with an AMS system: far greater sensitivity for minor isotopes and a very wide dynamic range, with the major isotopes measured to very high precision off axis as an ion current and the minor isotopes measured event by event.
Thank you for all your interesting comments. I appreciate you taking the time to respond.
Let me reiterate that I certainly do not claim this analysis is definitive or conclusive, and tried to be very honest about its limitations in my post (including uncertainty about a reasonable value for A). Mass spectrometry is not my area of expertise and I’ve been trying to read up on it and get something out quickly without good e-journal access while at a conference in Italy…
My aim in writing the post was to encourage an informed discussion—I think I succeeded in that. In a day or so I will write another post summarising this discussion so that readers of the blog who do not delve into the comments are aware of the issues.
Some detailed responses…
Russell and John: Yes, you might well be right that a more accurate mass spec technique was used. When I get back to DC I will try (time permitting) to get into this a bit deeper and find out what value of A is possible with TIMS and other techniques, like AMS. If you can point me in the direction of any relevant papers I’d be grateful.
Mark: The minus sign is in (5) because β<0: an eccentric definition but to save space I just wanted to refer to another paper rather than define everything myself from scratch.
Josh: Good point. You are right.
Yale: I don’t know for sure what material was found. If USIC thinks the DPRK might have a uranium enrichment programme then presumably it was UF<sub>6</sub> (or rather an oxidised/hydrolysed form of it). 10 g cm<sup>-3</sup> is the value used by IPFM.
OK, beta is defined negative, and also, given its value, for the time scales of interest, the exponential in (5) is just unity, while N2 grows linearly, hence sigmat/t is proportional to sqrt(t), which quantifies the statement you quoted. This means that if the particle is 3 years old it needs to be 3 times larger to date it within a half year than it would need to be to date it within 5 years if it were 30 years old.
Stated another way, if for a given “micron size” particle, the uncertainty would be 5 years if the particle were 22 years old, then if the particle is actually 3.5 years old, the uncertainty would be 1.7 years.
Corrected version: this means that if for a given “micron-sized” particle the uncertainty would be 5 years if the particle were 22 years old, then if the particle is actually 3.5 years old, the uncertainty would be 2 years.
At a particle size of 1 micron I would hesitate to assume that the composition of the particle is necessarily representative of the source. If the particle formed by, say, abrasion, then this kind of dimension is on a par with, or smaller than, the crystal structure of the parent material, leading to the possibility that you’re measuring a grain boundary that has been enriched or depleted by diffusion effects. Alternatively, if it formed by condensation, then you’d have to account for different boiling points, eutectics and what have you.
It seems to me that even if you could measure the particle’s composition with arbitrarily good accuracy, there’d still be a good chance that you’d actually be getting the U/Th ratio of a grain boundary, an intermetallic, or something else not representative of the parent material. Consequently, the error bars on any calculation of age are going to be greater than just the errors in determining the composition.
I agree very much with the argument advanced by Messrs. Leslie and Field that a TIMS discussion may not be heading in the right direction. The references Jeffrey provided previously clearly demonstrated that people will use whatever tools they have at their disposal – goodness, if you’ll excuse the condescension, the Hungarians even used gamma spectroscopy.
AMS was described more than 7 years ago as being able to detect 1 fg of U-236. Admittedly this is 2.5 million atoms, and unfortunately a 3.5-year-old micron-sized HEU particle contains only about a hundredth the number of Th-230 atoms that John Field suggests (since the particle can include at most about 1% U-234, from which the Th is produced). However, detection limits have no doubt come down.
There’s a nice picture of LLNL’s AMS here.
I correct myself again:
sigmat/t grows as 1/sqrt(t).
So your estimate of the required particle size is sensitive not only to your assumption about A but also to your assumption about sigmat.
If you take the quoted statement at face value, estimating that it implies a dating uncertainty of 5 years for a “micron-sized” particle 22 years old, using available techniques, then you are led to an uncertainty of 2 years for the same size particle if it is only 3.5 years old, using the same techniques (although, actually, we should calculate an uncertainty for exp(beta*t)).
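Mark’s rescaling is easy to check numerically, assuming σ<sub>t</sub> ∝ √t (so σ<sub>t</sub>/t ∝ 1/√t) and ignoring the exp(βt) factor, as he suggests:

```python
import math

sigma_at_22 = 5.0                                # years, for a 22-year-old particle
sigma_at_3p5 = sigma_at_22 * math.sqrt(3.5 / 22.0)
print(sigma_at_3p5)                              # ~2.0 years, as in the corrected comment
```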