Radiation pressure and the Stefan-Boltzmann law

In a previous post on Kirchhoff's law (1859) and black bodies, we saw that the energy density of thermal radiation is a function of temperature only. The first measurements of thermal radiation (from a hot platinum wire) were made by Tyndall, and from his results Stefan concluded, in 1879, that the energy radiated goes as the fourth power of the absolute temperature. This empirical relationship was later derived theoretically, for black bodies, by Boltzmann in 1884. The law that bears both their names is:

\[ R_B = \sigma T^4 \]

where \(\sigma\) is known as the Stefan-Boltzmann constant and \(R_B\) is the emissive power: the radiant power emitted per unit area.
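The fourth-power dependence is easy to check numerically. The following is a minimal sketch (the temperatures are made-up illustrative values; \(\sigma\) is the standard CODATA value):

```python
# Stefan-Boltzmann constant (CODATA value).
SIGMA = 5.670374419e-8  # W m^-2 K^-4

def emissive_power(T):
    """Radiant power per unit area emitted by a black body at temperature T (in kelvin)."""
    return SIGMA * T**4

# Doubling the absolute temperature multiplies the emitted power by 2^4 = 16.
ratio = emissive_power(600) / emissive_power(300)
print(ratio)  # -> 16.0
```

The steep \(T^4\) scaling is why relatively modest temperature increases produce dramatically brighter thermal emission.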

Thermal radiation, Kirchhoff's law, and black bodies

All matter continuously emits electromagnetic radiation as a consequence of its temperature. This radiation is called thermal radiation or heat radiation (although of course it isn't intrinsically different from electromagnetic radiation generated by any other means). Thermal radiation is what makes thermal imaging possible, and why hot embers glow, etc. From our everyday experience and from experimentation we can see that both the wavelength and intensity of radiation emitted depend in some way on the temperature of the matter.

CUDA basics part 2

Recently, I posted a basic introduction to CUDA C for programming GPUs, which showed how to do a vector addition. That illustrated some basic CUDA syntax, but it wasn't a complex enough example to bring to light some of the trickier issues around designing algorithms carefully to minimise data movement. Here we move on to the more complicated algorithm for matrix multiplication, C = AB, where we'll see that elements of the matrices get used multiple times, so we'll want to put them in shared memory to minimise the number of times they are retrieved from the much slower global (or device) memory. We'll also see that, because data a thread puts into shared memory is accessible only to the other threads in the same thread block, we need to be careful about how we do this.
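The data-reuse idea behind the tiled CUDA kernel can be sketched in plain Python (a CPU analogy, not the CUDA code itself; the tile size and matrix sizes here are illustrative): each TILE-by-TILE block of A and B is loaded once and then reused for a whole block of partial products, which is exactly what staging tiles in shared memory buys on the GPU.

```python
import numpy as np

TILE = 4  # illustrative; in CUDA this would match the thread-block dimensions

def tiled_matmul(A, B):
    """Blocked matrix multiply over TILE x TILE sub-matrices.

    Each pair of tiles is loaded once and combined with a small matrix
    product, mirroring how a CUDA kernel stages tiles of A and B in
    shared memory before the threads in a block use them repeatedly.
    Assumes square matrices whose size is a multiple of TILE.
    """
    n = A.shape[0]
    C = np.zeros((n, n), dtype=A.dtype)
    for i in range(0, n, TILE):
        for j in range(0, n, TILE):
            for k in range(0, n, TILE):
                # "Load" one tile of A and one of B (shared memory on a GPU)...
                a_tile = A[i:i+TILE, k:k+TILE]
                b_tile = B[k:k+TILE, j:j+TILE]
                # ...and accumulate its contribution to the C tile.
                C[i:i+TILE, j:j+TILE] += a_tile @ b_tile
    return C

# Quick sanity check against NumPy's own matrix multiply.
A = np.arange(64, dtype=float).reshape(8, 8)
B = np.eye(8)
assert np.allclose(tiled_matmul(A, B), A @ B)
```

Counting loads makes the payoff concrete: the naive element-by-element multiply fetches each element of A and B n times from (slow) memory, while the blocked version fetches each tile n/TILE times, cutting global-memory traffic by a factor of TILE.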

CUDA basics part 1


CUDA (Compute Unified Device Architecture) is an extension of C/C++, developed by NVIDIA, the GPU manufacturer, for programming their devices. (There is also a Fortran version, developed by PGI.)

The purpose of CUDA is to allow developers to program GPUs much more easily than before. Since its inception in 2007, the use of GPUs has opened up beyond graphics to more general computation, such as scientific computing, which is often referred to as general-purpose GPU computing (GPGPU).