Since one of the perennial canards that defenders of valid science have to endure on rationalist forums, a canard that is practically a masturbatory obsession with creationists, is the “radionuclide dating is based upon assumptions” canard, I thought it apposite to produce this post, for the specific purpose of destroying this canard once and for all. This post was, needless to say, inspired by another individual’s post elsewhere on the rigour of the Earth sciences, and I owe him a debt for inspiring me to get off my backside and post this.
Bear in mind that I’m able to provide this dissertation not only because I learned the relevant theoretical work, but because I performed actual measurements of the sort presented below in my physics classes - indeed, one of the eyebrow-raising aspects of that time, which I delight in sharing, is that I was permitted to work with real live plutonium in the physics laboratory of my school, at the age of 15.
This latest revision of the post contains a few items not present in older versions. Those familiar with the older versions of this post in various forums will notice the few but relevant changes where they occur. Also, this forum limits posts to 32,000 characters, so the exposition presented here will be spread across two posts.
Given the nature of the material being covered, this will be a long read. However, I promise that persevering with it will be worth the effort.
Radionuclide Dating Is Rigorous
In order to address this topic at the proper level of detail, something that creationists prefer to avoid at all costs, I shall first begin with a discourse on the underlying physics of radionuclide decay, the precise mathematical law that this process obeys, and how that law is derived, both empirically and theoretically. Note that the decay law was first derived empirically, courtesy of a large body of work by scientists such as Henri Becquerel, Marie Curie, Ernest Rutherford and others. Indeed, the SI unit of activity was named the Becquerel in recognition of that scientist’s contribution to the early days of the study of radionuclide decay, and 1 Bq equals one transformation (decay) per second within a sample of radionuclide.
However, the underlying physics had to wait until the advent of detailed and rigorous quantum theories before it could be elucidated. That physics rests upon the fact that the nuclei of radionuclides are in an excited state with respect to the sum total of the quantum energy states of the constituent particles (which, being fermions, obey Fermi-Dirac statistics, and consequently Pauli’s exclusion principle applies). In order for the system to move to a lower energy state, and settle upon a stable set of quantum numbers, various transformations need to take place, and these transformations result in the nucleus undergoing specific and well-defined structural changes, involving the emission of one or more particles. As well as the most familiar modes of decay, namely α and β- decay, other decay modes exist. A full treatment of the various possible decay modes, along with the underlying quantum physics, is beyond the scope of this exposition, as it requires a detailed understanding of the behaviour of the appropriate quantum operators and, as a corollary, of Hilbert spaces - a level of knowledge that is, sadly, not widespread. With this limitation in mind, however, it is still possible to deduce a number of salient facts about radionuclide decay, which I shall now present.
Empirical Determination Of The Decay Law
Initially, the determination of the decay law was performed empirically, by observing the decay of various radionuclides in the laboratory, taking measurements of the number of decay events, and plotting these graphically, with time along the x-axis, and counts along the y-axis. Upon performing this task, the data for many radionuclides is seen to lie upon a curve, and determination of the nature of that curve requires a little mathematical understanding.
To determine the nature of a curve, various transformations can be performed upon the data, which when applied, produce plots that allow linear regression techniques to be applied to the transformed data, to allow the nature of the curve function to be determined. This standard technique was devised by mathematicians back in the 19th century - the first uses of this technique can be traced back to Legendre in 1805, and Gauss in 1809, followed by the work of Quetelet expanding the use of linear regression to the social sciences. Other mathematicians integrated logarithmic transformations with linear regression, as illustrated below, to expand the remit of the technique to elucidating non-linear relationships between data.
The result of each of these transformations is as follows (a short code sketch after the list shows how one might test all three against a data set):
[1] Plot log_e(y) against x - if the result is a straight line, then the relationship is of the form:
log_e(y) = kx + C (where C is some constant, in particular, the y-intercept of the straight line)
which can be rewritten:
log_e(y) - log_e(C_0) = kx (where C = log_e(C_0))
which rearranges to:
y = C_0 e^(kx)
where C_0 is derived from the y-intercept of the straight line produced by the transformed plot, and k is the gradient of the transformed line.
[2] Plot y against log_e(x) - if the result is a straight line, then the relationship is of the form:
y = k log_e(x) + C, where k is the gradient of the line, and C is the y-intercept of the straight line.
[3] Plot log_e(y) against log_e(x) - if the result is a straight line, then the relationship is of the form:
log_e(y) = k log_e(x) + C (where C is the y-intercept of the straight line thus produced)
This rearranges to:
log_e(y) - log_e(C_0) = k log_e(x) (where C = log_e(C_0))
which in turn rearranges to:
log_e(y/C_0) = log_e(x^k)
which finally gives us the relationship:
y = C_0 x^k, where k is the gradient of the straight line produced by the transformed data, and C_0 is derived from the y-intercept of the straight line produced by the transformed data.
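By way of illustration, here is a minimal sketch in Python (the function name and layout are mine, purely illustrative) of how one might test which of the three transformations linearises a given data set; it assumes strictly positive x and y values, since logarithms are taken:

```python
import numpy as np

def best_linearisation(x, y):
    """Apply the three transformations above and report the
    correlation coefficient r of each transformed plot; the
    transformation whose |r| is closest to 1 indicates the
    form of the underlying relationship."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    candidates = {
        "[1] log_e(y) vs x        (y = C0*e^(kx))":      (x, np.log(y)),
        "[2] y vs log_e(x)        (y = k*log_e(x) + C)": (np.log(x), y),
        "[3] log_e(y) vs log_e(x) (y = C0*x^k)":         (np.log(x), np.log(y)),
    }
    for name, (u, v) in candidates.items():
        r = np.corrcoef(u, v)[0, 1]
        print(f"{name}: r = {r:+.4f}")
```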
The above procedures allow us to determine the nature of the mathematical relationships governing large bodies of real-world data, when those bodies of data yield curves as raw plots of y against x. By applying the relevant transformations to radionuclide decay data, it was found that transformation [1] transformed the data into a straight-line plot (within the limits of experimental error, of course), and consequently this informed the scientists examining the data that the decay law was of the form:
N = C_0 e^(kt)
where C_0 and k were constants to be determined from the plot (with k turning out to be negative for decay), and which were regarded as being dependent upon the particular radionuclide in question.
Now, if we start with a known amount of radionuclide and observe it decaying, then each decay event we detect with a Geiger counter represents one nucleus undergoing the requisite decay transformation. Since the process is random, over a long period of time decaying nuclei will emit α or β particles in all directions with equal frequency, so we don’t need to surround the material with Geiger counters in order to obtain measurements allowing a good first approximation to the decay rate.
Obviously, if we’re engaged in precise work, we do set up our experiments to do this, especially with long-lived nuclei, because the decay events for long-lived nuclei are infrequent, and we need to capture as many of them as possible in order to determine the decay rate with precision. Let’s assume, however, that we’re dealing with a relatively short-lived radionuclide which produces a steady stream of decay events at a reasonably fast rate. In that case, we can simply point a single Geiger counter at it and work out what proportion of these events we are actually capturing, because that proportion will be the ratio of the solid angle subtended by our Geiger counter to the solid angle of an entire sphere (this latter value being 4π). When we have computed this ratio (let’s call it R), which will necessarily be a number less than 1 unless we have surrounded our sample with a spherical shell of Geiger counters, we then start collecting count data, say once per second, and plotting that data.
In a modern setup we’d use a computer to collect this mass of data (a facility that wasn’t available to the likes of Henri Becquerel, Röntgen and the Curies when they were engaged in their work), in order to have as large a body of data as possible to work with. Before working with the raw data, we transform it by taking each of the data points and dividing it by R to obtain the true count.
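As an illustration of the R correction, here is a minimal sketch in Python (function names mine), treating the sample as an idealised point source and the Geiger counter window as a circular disc facing it, so that the subtended solid angle is that of a cone, Ω = 2π(1 - cos θ):

```python
import math

def coverage_ratio(window_radius, distance):
    """R = Omega / (4*pi), where Omega = 2*pi*(1 - cos(theta)) is the
    solid angle of the cone subtended by a circular detector window
    at the given distance from a point source."""
    theta = math.atan2(window_radius, distance)
    omega = 2.0 * math.pi * (1.0 - math.cos(theta))
    return omega / (4.0 * math.pi)

def true_count(observed_count, R):
    """Correct an observed count for partial detector coverage."""
    return observed_count / R
```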
Once the data has been collected, transformed and plotted, the end result should be a nice curve. At this point, we’re interested in knowing what sort of curve we have, and there are two ways we can determine this. One way is to take the transformed data set, comprising count values c_1, c_2, c_3, …, c_n, where n is the number of data points collected, and compute the following values:
r_1 = c_2 - c_1
r_2 = c_3 - c_2
r_3 = c_4 - c_3
…
r_(n-1) = c_n - c_(n-1)
and then plot a graph with r_k on the vertical axis and c_k on the horizontal axis. This should give a reasonable approximation to a straight line, and the slope of that straight line, obtained via regression analysis (and negated, since the counts are decreasing), gives a first approximation to the decay constant k. At this point, we know we are dealing with a relationship of the form dN/dt = -kN, and we can then apply the integral calculus to that equation (see below). Technically, what we are doing here is approximating the derivative by computing first differences, another standard technique from the discipline of numerical analysis - textbooks on this topic are numerous, I might add.
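A minimal sketch of this first-differences estimate in Python (function name mine; it assumes counts taken at one-second intervals and already corrected by R as described above):

```python
import numpy as np

def estimate_k_first_differences(counts):
    """Estimate k from r_k = c_(k+1) - c_k plotted against c_k.
    For dN/dt = -k*N sampled at 1 s intervals, the regression
    line through these points has slope approximately -k."""
    c = np.asarray(counts, dtype=float)
    r = np.diff(c)                        # first differences r_k
    slope, intercept = np.polyfit(c[:-1], r, 1)
    return -slope                         # slope is ~ -k, so negate
```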
However, as a double check, we can also perform a logarithmic regression on the data, plotting log_e(c_k) against time, which should also reveal a straight line; again, the slope of that line (negated) gives the value of k, which should be in good agreement with the value obtained earlier from the more laborious plot of r_k against c_k. In other words, we are applying transformation [1] above to the data set, and extracting an exponential relationship from it. Since we now know that the data is of the form:
log_e(N) = log_e(N_0) - kt
we can then derive the exponential form and check that it tallies with the integral calculus result.
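And the double check via transformation [1], again as an illustrative sketch under the same assumptions (function name mine):

```python
import numpy as np

def estimate_k_log_regression(counts, dt=1.0):
    """Fit log_e(c_k) against time; the fitted line has slope -k
    and intercept log_e(c_0), giving both k and the initial count."""
    c = np.asarray(counts, dtype=float)
    t = np.arange(len(c)) * dt
    slope, intercept = np.polyfit(t, np.log(c), 1)
    return -slope, np.exp(intercept)      # k, and the fitted initial count
```

The two estimates should agree within experimental error.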
Once we have that function coupling the decay rate to time, we can then work backwards, and feed in the values of the known starting mass and the experimentally obtained decay constant k, and see if the function obtained reproduces the transformed data points. If the result agrees with observation to a very good fit, we’re home and dry.
This is, essentially, how the process was done when the decay law was first derived - lots of data points were collected from observation of real radionuclide decay, and the above processes applied to that data, to derive the exponential decay law. When this was done for multiple radionuclides, it was found that they all obeyed the same basic law, namely:
N = N_0 e^(-kt)
where N_0 is the initial amount of radionuclide, N is the amount remaining after time t, and k is the decay constant for the specific radionuclide.
Now, having determined this decay law empirically, it’s time to fire up some calculus, and develop a theoretical derivation of the decay law. Which I shall now proceed to do.
Theoretical Derivation Of The Decay Law And Comparison With The Above Empirical Result
Upon noting, via the calculation of first differences in the empirical determination above, that the rate of change of material with time is directly proportional to the amount of material remaining, we are immediately led to conclude that the decay law is governed by a differential equation. The appropriate differential equation is:
dN/dt = -kN
which states that the rate at which material decays is proportional to the amount of material present (and furthermore, the minus sign indicates that the process results in a reduction of the material remaining). Rearranging this differential equation, we have:
dN/N = -k dt
Integrating this, we have:
∫ dN/N = -∫ k dt
Our limits of integration are, for the left-hand integral, the initial amount at t = 0, which we call N_0, and the amount remaining at the present time, which we call N. Our limits of integration for the right-hand integral are t = 0 and t = t_p, the present time.
Thus, we end up with:
log_e(N) - log_e(N_0) = -k t_p
By an elementary theorem of logarithms, this becomes:
log_e(N/N_0) = -k t_p
Therefore, exponentiating both sides (and writing t for the elapsed time t_p), we have:
N/N_0 = e^(-kt)
or, the final form:
N = N_0 e^(-kt)
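For those who prefer to check the integration mechanically, the same result drops out of a computer algebra system; a minimal sketch using Python’s sympy library:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
k, N0 = sp.symbols('k N_0', positive=True)
N = sp.Function('N')

# Solve dN/dt = -k*N with initial condition N(0) = N_0
solution = sp.dsolve(sp.Eq(N(t).diff(t), -k * N(t)), N(t), ics={N(0): N0})
print(solution)   # Eq(N(t), N_0*exp(-k*t))
```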
The half-life of a radionuclide is defined as the amount of time required for half the initial amount of material to decay, and is denoted T½. Feeding this into the equation for the decay law, we have:
½N_0 = N_0 e^(-k T½)
Cancelling N_0 on both sides, we have:
½ = e^(-k T½)
log_e(½) = -k T½
By an elementary theorem of logarithms, we have:
log_e(2) = k T½
Therefore T½ = log_e(2)/k
Alternatively, if the half-life is known, but the decay constant k is unknown, then k can be computed by rearranging the above to give:
k = log_e(2)/T½
Which allows us to move seamlessly from one system of constants (half-lives) to another (decay constants) and back again.
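In code, the conversion in each direction is a one-liner (function names mine); as a sanity check, carbon-14, with T½ ≈ 5730 years, gives k ≈ 1.21 × 10^-4 per year:

```python
import math

def decay_constant_from_half_life(T_half):
    """k = log_e(2) / T_half (same time units as T_half)."""
    return math.log(2) / T_half

def half_life_from_decay_constant(k):
    """T_half = log_e(2) / k."""
    return math.log(2) / k

print(decay_constant_from_half_life(5730.0))   # ~1.2097e-04 per year
```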
If the initial amount of substance N0 is known (e.g., we have a fresh sample of radionuclide prepared from a nuclear reactor), and we observe the decay over a time period t, then measure the amount of substance remaining, we can determine the decay constant empirically as follows:
N = N_0 e^(-kt)
N/N_0 = e^(-kt)
log_e(N/N_0) = -kt
Therefore:
(1/t) log_e(N_0/N) = k
On the left-hand side, the initial amount N_0, the remaining amount N and the elapsed time t are all known; therefore k can be computed from the empirically observed data.
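A minimal sketch of this computation in Python (function name mine), with a sanity check that one half-life’s worth of decay recovers k = log_e(2)/T½:

```python
import math

def decay_constant_from_amounts(N0, N, t):
    """k = (1/t) * log_e(N0/N), from the known initial amount N0,
    the measured remaining amount N, and the elapsed time t."""
    return math.log(N0 / N) / t

# Sanity check: if half the material remains after 100 time units,
# then k should equal log_e(2)/100.
assert abs(decay_constant_from_amounts(1000.0, 500.0, 100.0)
           - math.log(2) / 100.0) < 1e-12
```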
Once again, this agrees with the empirical data from which the law was derived in the earlier exposition above, and consequently, we can be confident that we have alighted upon a correct result.
Once we have the decay law in place, it simply remains for appropriate values of k to be determined, which will be unique to each radionuclide. This work has been performed by scientists, and as a result of decades of intense labour in this vein in physics laboratories around the world, vast bodies of radionuclide data are now available.
Kaye & Laby’s Tables of Physical & Chemical Constants, devised and maintained by the National Physical Laboratory in the UK, contains, among the voluminous sets of data produced by the precise laboratory work of various scientists, a complete table of the nuclides, in which data such as half-life, major emissions and emission energies are recorded; due to its huge size, the table is split into sections to make it more manageable. The sections are:
[1] Hydrogen to Fluorine (H1 to F24)
[2] Neon to Potassium (Ne17 to K54)
[3] Calcium to Copper (Ca35 to Cu75)
[4] Zinc to Yttrium (Zn57 to Y101)
[5] Zirconium to Indium (Zr81 to In133)
[6] Tin to Praseodymium (Sn103 to Pr154)
[7] Neodymium to Thulium (Nd129 to Tm177)
[8] Ytterbium to Gold (Yb151 to Au204)
[9] Mercury to Actinium (Hg175 to Ac233)
[10] Thorium to Einsteinium (Th212 to Es256)
[11] Fermium to Oganesson (names for elements 112 onwards not officially recognised by IUPAC at the time of publication) (Fm242 to Og294)
Now, the above exhaustively compiled data gives rise to yet more data, in the form of the tables covering the major decay series. These arise from the observation of which radionuclides decay into which other radionuclides (or in the case of certain radionuclides, which stable elements are formed after decay), and all of these decay events follow specific rules, according to whether α decay, β- decay, or one of the other possible decay modes for certain interesting radionuclides, takes place. Again, data is supplied in the above tables with respect to all of this.
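For the simplest case of a two-member chain (a parent decaying into a radioactive daughter, which decays in turn), the coupled decay equations have a well-known closed-form solution; here is a minimal sketch (function name mine, valid for k1 ≠ k2 and a sample that is pure parent at t = 0):

```python
import math

def two_member_chain(N1_initial, k1, k2, t):
    """Parent and daughter amounts at time t for the chain
    1 -> 2 -> (stable or further decay), starting from pure parent:
    N1(t) = N1(0) * e^(-k1*t)
    N2(t) = N1(0) * k1/(k2 - k1) * (e^(-k1*t) - e^(-k2*t))"""
    N1 = N1_initial * math.exp(-k1 * t)
    N2 = N1_initial * (k1 / (k2 - k1)) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    return N1, N2
```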
Now, we come to the question of how this data is pressed into service. Since the above work couples radionuclide decay to time, via a precise mathematical law, we can use this data to provide information on the age of any material that contains radionuclides. This is done by taking precise quantitative measurements of parent radionuclides and daughter products, all of which is well within the remit of inorganic chemists (since the chemistry of the relevant elements has been studied in detail, in some cases for over 200 years), and of course modern mass spectrometry can be brought to bear upon the process, yielding results with an accuracy that past chemists reliant upon earlier techniques could only dream of. Consequently, it is now time to cover the business of dating itself - see the following post…
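To anticipate the next post slightly, here is a minimal sketch of the simplest possible dating computation (function name mine; it assumes a closed system with no initial daughter product, an assumption the next post will have more to say about):

```python
import math

def age_from_parent_daughter(parent, daughter, T_half):
    """Age t from measured parent (P) and daughter (D) amounts.
    With no initial daughter, N_0 = P + D, so
    t = (1/k) * log_e(1 + D/P), where k = log_e(2)/T_half."""
    k = math.log(2) / T_half
    return math.log(1.0 + daughter / parent) / k

# Equal parent and daughter amounts correspond to exactly one half-life:
print(age_from_parent_daughter(1.0, 1.0, 5730.0))   # 5730.0
```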