There is a bit more to image stacking, mostly to do with signal-to-noise: if you put a stack of shit in, you get a stack of shit out.
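To put a rough number on it: averaging N frames of the same scene leaves the signal alone but beats the random noise down by roughly the square root of N, so 100 frames buys you about a 10x improvement in signal-to-noise over a single frame. A minimal numpy sketch of that (the signal and noise levels here are made-up numbers, purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
signal = 10.0        # hypothetical per-pixel brightness of the object
noise_sigma = 50.0   # hypothetical per-frame random noise level

for n_frames in (1, 10, 100):
    # Simulate n_frames noisy exposures of a flat patch of the object.
    frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 256, 256))
    stacked = frames.mean(axis=0)
    # SNR of the stack: mean level over residual noise; grows roughly as sqrt(n_frames).
    print(f"{n_frames:4d} frames: SNR ~ {stacked.mean() / stacked.std():.2f}")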
Light frames - actual light data from the object you are taking an image of, overlaid with various sources of noise and errors
Dark frames - to capture the sensor noise that builds up during a long exposure. The CCD produces electrical/thermal noise in a somewhat reproducible pattern, so by capturing a long exposure of noise only (i.e. covering the aperture so no light reaches the CCD) you can subtract that noise from your light frames. I think you want roughly 0.5 - 1× the number of light frames, so if you do 20 × 10 minute light frames, you might also do 10-20 × 10 minute dark frames. I think every amateur should definitely be doing these when doing long exposures; dark noise will be a significant fraction of your pixel brightness. (There's a rough sketch of the whole calibration recipe after this list.)
Flat field frames - if you point your camera/telescope/CCD at a plain, uniformly lit field with no detail, you will see that the resulting image does not have uniform brightness, due to vignetting and CCD effects. You can capture flat field frames and use them to compensate for that non-uniform response in your light frames. I've never bothered with this, but you can do it easily enough by pointing the telescope at something bright, out of focus.
Bias frames - pretty much the same as a dark frame, but capturing only the readout noise from the CCD rather than the noise that builds up over a long exposure. I think unless you're doing something really advanced you probably don't need these. (They look the same as darks but less noisy.)
Note that most of these are heavily dependent on ambient temperature and camera settings, so there's not much point taking your darks in the middle of the day and your light frames at night: match the conditions as closely as you can.
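If you want to see how these calibration frames actually get combined, here's a rough numpy sketch of the standard recipe. The file layout and .npy loading are assumptions purely for illustration; real data would come out of FITS or RAW files, and dedicated stacking software does all of this (plus frame alignment) for you:

import glob
import numpy as np

def load_frames(pattern):
    # Assumed placeholder loader: frames stored as float .npy arrays on disk.
    return np.stack([np.load(p).astype(np.float64) for p in sorted(glob.glob(pattern))])

lights = load_frames("lights/*.npy")   # e.g. 20 x 10-minute exposures of the target
darks  = load_frames("darks/*.npy")    # same exposure time/temperature, aperture covered
flats  = load_frames("flats/*.npy")    # short exposures of a uniform bright field
biases = load_frames("bias/*.npy")     # shortest possible, readout-only exposures

# Median-combine each set; the median rejects outliers like cosmic ray hits.
master_bias = np.median(biases, axis=0)
master_dark = np.median(darks, axis=0) - master_bias   # thermal signal only
master_flat = np.median(flats, axis=0) - master_bias
master_flat /= np.median(master_flat)                  # normalise the flat to ~1.0

# Calibrate each light frame: remove bias and dark signal, then divide by the
# flat to undo vignetting and pixel-to-pixel sensitivity differences.
calibrated = (lights - master_bias - master_dark) / master_flat

# Align the frames (omitted here), then stack.
stacked = np.median(calibrated, axis=0)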
You also have astronomical seeing: turbulence in the air smears out your image. Generally the "quality" is measured by the apparent (or on-CCD) size of a single star, which is effectively a point source, usually quoted as the FWHM of its profile. You may want to throw away the worst-quality frames, as you can get a better image overall by discarding some of them. For a sufficiently long exposure this is not really necessary, but eventually you will be limited by seeing, and then the next steps to getting better results are shorter exposures with more aggressive frame selection, moving to Tenerife/Hawaii, and no doubt there are some amateurs doing adaptive/active optics as well.
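As a crude illustration of that kind of frame selection, the sketch below (carrying on from the calibration sketch above) scores each frame by the width of its brightest star and keeps only the sharpest half. The FWHM estimate here is deliberately simplistic; real stacking software fits a proper PSF to many stars:

import numpy as np

def estimate_fwhm(frame, box=15):
    # Very crude FWHM estimate: take a cutout around the brightest pixel,
    # count the pixels above half the peak, and convert that area to a diameter.
    background = np.median(frame)
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    cut = frame[max(0, y - box):y + box + 1, max(0, x - box):x + box + 1] - background
    area = np.count_nonzero(cut > 0.5 * cut.max())
    return 2.0 * np.sqrt(area / np.pi)   # diameter of an equivalent circle, in pixels

fwhms = np.array([estimate_fwhm(f) for f in calibrated])  # 'calibrated' from the sketch above
keep = fwhms <= np.percentile(fwhms, 50)                  # keep the sharpest 50%
stacked = np.median(calibrated[keep], axis=0)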
Then you have a bunch of other, more complex image-processing techniques like super-resolution and deconvolution (for when you hit the diffraction limit), which you will probably never need given the weather in the UK.
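For completeness, deconvolution is readily available in scikit-image; here's a toy Richardson-Lucy sketch. The Gaussian PSF is an assumption for illustration only; in practice you would measure the PSF from an unsaturated star in your own image:

import numpy as np
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0):
    # Simple circular Gaussian PSF, normalised to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

# 'stacked' from the sketches above; Richardson-Lucy wants non-negative input,
# scaled here to the range [0, 1].
image = np.clip(stacked, 0.0, None)
image /= image.max()
sharpened = restoration.richardson_lucy(image, gaussian_psf(), 30)  # 30 iterations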