Get the latest version of ProFit:
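A hedged sketch of one way to do this, assuming the development version lives on GitHub at ICRAR/ProFit:
# install.packages('remotes')
remotes::install_github('ICRAR/ProFit')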
First load the libraries we need:
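The exact calls are not shown here; presumably ProFit itself, plus magicaxis for the magimage plots used below:
library(ProFit)
library(magicaxis)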
Make a PSF and convolve it with itself:
fwhm.psf = 3
# Odd image dimensions spanning ~10 FWHM, so the PSF is centred on a pixel
dim = rep(ceiling(fwhm.psf*10),2)
dim = dim + (1-(dim %% 2))
# More accurate than profitMakeGaussianPSF, just slower
# (nser=0.5 is a Gaussian, for which re = FWHM/2)
psf = profitCubaSersic(mag=0, re=fwhm.psf/2, nser=0.5, dim=dim)
print(sum(psf))
## [1] 1
# Gaussian sources narrower than, matched to, and broader than the PSF
fwhm.srcs = c(1,3,9)
refsrcidx = 2
nsrcs = length(fwhm.srcs)
src = list()
conv = list()
for(i in 1:nsrcs)
{
  src[[i]] = profitCubaSersic(mag=0, re=fwhm.srcs[i]/2, nser=0.5, dim=dim)
  conv[[i]] = profitConvolvePSF(src[[i]], psf)
  print(sum(conv[[i]]))
}
## [1] 0.9999988
## [1] 1
## [1] 0.999725
Check out the PSF:
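The original plot call is not shown; a minimal sketch with magimage:
magimage(psf)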
… and its convolution:
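Again a minimal sketch, using the reference (PSF-matched) source:
magimage(conv[[refsrcidx]])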
Now compute the true profile, which is also a Gaussian:
conv.exact = list()
for(i in 1:nsrcs)
{
  # Convolving two Gaussians gives a Gaussian with FWHM = sqrt(FWHM1^2 + FWHM2^2)
  conv.exact[[i]] = profitCubaSersic(mag=0, re=sqrt(fwhm.psf^2+fwhm.srcs[i]^2)/2,
    nser=0.5, dim=dim)
  print(sum(conv.exact[[i]]))
}
## [1] 1
## [1] 1
## [1] 0.9997613
Check if they differ:
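A minimal sketch of the comparison, plotting the raw difference for the reference source (the vignette's exact call may differ):
magimage(conv[[refsrcidx]] - conv.exact[[refsrcidx]])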
Note how discretizing the PSF kernel pushes light out from the centre of the profile to approximately 1-2 FWHM:
diffrels = list()
cens = ceiling(dim/2)
for(i in 1:nsrcs)
{
  diffrels[[i]] = (conv[[i]]-conv.exact[[i]])/conv.exact[[i]]
  # Relative residuals along a row, from the centre outwards
  print(diffrels[[i]][cens[1]+0:5,cens[2]])
}
## [1] -0.0248584782 -0.0199393938 -0.0006294299 0.0449342437 0.1310676829
## [6] 0.2682313232
## [1] -0.024587483 -0.020957656 -0.010015054 0.008397708 0.034547523
## [6] 0.068811966
## [1] -0.005089902 -0.004934455 -0.004468025 -0.003690345 -0.002600973
## [6] -0.001199284
The discretized convolution is “softer” than it should be:
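A sketch of the relative residual map (the zlim here is an assumed choice, matching the finesampled plot further below):
magimage(diffrels[[refsrcidx]], magmap=FALSE, zlim=c(-1,1)*5e-2)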
Note that our convolution kernel is the flux of a Gaussian integrated over each pixel. This is the correct kernel to use in the trivial case, as it perfectly reproduces the convolution of a point source at the centre of the image. However, it cannot exactly convolve a point source located anywhere other than the exact centre of a pixel, because that is where the convolution kernel is centred.
Cappellari (2017) states that using an integrated convolution kernel (rather than the value of the function at the centre of each pixel) is equivalent to an extra convolution by a boxcar filter. But each pixel in the image is convolved as if its flux were entirely contained at the pixel centre, which is the opposite of convolution by a boxcar filter. The real source of the error when discretizing a kernel as the integral of the distribution is that the barycentre of the flux within a pixel is not at the pixel centre. For a monotonically decreasing function (1D or 2D), the barycentre is always shifted slightly from the pixel centre towards the peak of the distribution. The convolution therefore pushes flux slightly further from the centre of the distribution than it should, and the resulting image is smoothed slightly more than it should be.
Put another way, if you integrate outwards until you reach half of the total flux within a given pixel, you will always stop before the middle of the pixel. A final interpretation: there are always subpixel flux gradients, and therefore higher-order moments in the distribution, which pixel-integrated convolution does not account for.
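To see the barycentre shift concretely, here is a minimal 1D sketch in base R (the FWHM and the pixel span are arbitrary choices for illustration):
# Flux-weighted centre of a pixel on the wing of a Gaussian with FWHM = 3
f = function(x) dnorm(x, mean=0, sd=3/(2*sqrt(2*log(2))))
flux = integrate(f, lower=1.5, upper=2.5)$value
barycen = integrate(function(x) x*f(x), lower=1.5, upper=2.5)$value/flux
# Less than the pixel centre (2): the barycentre is shifted towards the peak
print(barycen)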
One solution suggested by Cappellari (2017) (and commonly used elsewhere, though not always intentionally) is to evaluate the kernel at the centre of each pixel instead of integrating it:
# rough=TRUE skips subpixel integration, evaluating the profile at pixel centres
psfat = profitMakeModel(modellist = list(sersic = list(xcen = dim[1]/2,
  ycen = dim[2]/2, mag = 0, re = fwhm.psf/2, nser = 0.5, axrat = 1,
  ang = 0)), dim = dim, rough = TRUE)$z
conv.at = list()
diffrels.at = list()
for(i in 1:nsrcs)
{
  conv.at[[i]] = profitConvolvePSF(src[[i]], psfat)
  diffrels.at[[i]] = (conv.at[[i]]-conv.exact[[i]])/conv.exact[[i]]
  print(diffrels.at[[i]][cens[1]+0:5,cens[2]])
}
## [1] 0.020022808 0.012683487 -0.003305484 -0.014317411 -0.009169152
## [6] 0.012292330
## [1] 1.640769e-05 1.612228e-05 1.634205e-05 1.506837e-05 3.336810e-06
## [6] -1.057282e-04
## [1] 1.137044e-05 1.117073e-05 1.054745e-05 9.425774e-06 7.672898e-06
## [6] 5.085408e-06
Curiously, this has the exact opposite effect: for a poorly resolved source the convolved image is too concentrated, and the residual alternates between positive and negative:
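A sketch of the corresponding residual map (the original plot call is not shown; the zlim is an assumed choice matching the earlier plot):
magimage(diffrels.at[[refsrcidx]], magmap=FALSE, zlim=c(-1,1)*5e-2)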
However, for reasonably well-resolved galaxies (no smaller than the PSF), the residuals are considerably smaller than with the integrated PSF. How this generalizes to different profiles will be explored later; there are possible analytic solutions for Gaussian-exponential and exponential-exponential convolutions to be tested. When using an observed PSF (common these days, and the effective output of software like PSFEx), an integrated kernel is unavoidable, so the integrated form of the PSF might be preferable for consistency.
A simple method to improve convolution accuracy is to finesample (oversample) the image and the convolution kernel. Of course, this is only possible if both are defined analytically, as is the case when they are both ProFit models. Proceeding as per the example in profitBenchmark (this will take a few seconds):
finesample = 3L
dimfine = finesample*dim
psffine = profitCubaSersic(mag=0, re=finesample*fwhm.psf/2, nser=0.5, dim=dimfine)
# convfine (used below) is the finesampled reference source convolved with psffine
srcfine = profitCubaSersic(mag=0, re=finesample*fwhm.srcs[refsrcidx]/2, nser=0.5, dim=dimfine)
convfine = profitConvolvePSF(srcfine, psffine)
print(sum(psffine))
print(sum(convfine))
## [1] 1
## [1] 1
Compute the difference again:
diffrelfine = (profitDownsample(convfine,finesample)-conv.exact[[refsrcidx]])/conv.exact[[refsrcidx]]
print(diffrelfine[cens[1]+0:5,cens[2]])
## [1] -0.0027745775 -0.0023589524 -0.0011110467 0.0009722072 0.0038958485
## [6] 0.0076667876
The residuals behave just like before, only they are smaller by a factor of approximately finesample^2:
magimage(diffrelfine, magmap=FALSE, zlim=c(-1,1)*5e-2/finesample^2)
print(diffrels[[refsrcidx]][cens[1]+0:5,cens[2]]/diffrelfine[cens[1]+0:5,cens[2]])
This is a convenient result, since we can predict the amount of finesampling required to achieve a given error. Unfortunately, in the case of brute-force convolution the computational cost scales with the size of both the image and the kernel, so the number of operations scales as finesample^4! That is not good. Can we improve on this?
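Given that scaling, a quick sketch of the rule of thumb (err.target is an arbitrary illustrative value):
# Estimate the finesampling needed to reach a target peak relative error,
# assuming the error shrinks by ~finesample^2 as demonstrated above
err.now = max(abs(diffrels[[refsrcidx]]))
err.target = 1e-3
print(ceiling(sqrt(err.now/err.target)))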
The answer is yes. By realising that we don't actually need the finesampled convolved image, we can reduce the cost to finesample^2 convolutions at the original image size, each using a subpixel-offset PSF. Behold:
subpsfs = list()
subimgs = list()
subrows = seq(1, to=dimfine[1], by=finesample)
subcols = seq(1, to=dimfine[2], by=finesample)
idx = 1
conv.efficient = matrix(0, dim[1], dim[2])
for(i in 1:finesample)
{
  xoffset = i - finesample/2 - 0.5
  xrange = c(1, dimfine[1]) + xoffset
  xrange[1] = max(1, xrange[1])
  xrange[2] = min(dimfine[1], xrange[2])
  xrange = xrange[1]:xrange[2]
  for(j in 1:finesample)
  {
    yoffset = j - finesample/2 - 0.5
    yrange = c(1, dimfine[2]) + yoffset
    yrange[1] = max(1, yrange[1])
    yrange[2] = min(dimfine[2], yrange[2])
    yrange = yrange[1]:yrange[2]
    # Integer offset of the previously finesampled PSF,
    # which is a subpixel offset at the original image scale
    subpsfs[[idx]] = matrix(0, dimfine[1], dimfine[2])
    subpsfs[[idx]][xrange,yrange] = psffine[xrange-xoffset, yrange-yoffset]
    subpsfs[[idx]] = profitDownsample(subpsfs[[idx]], finesample)
    # The reference source has the same FWHM as the PSF here, so psffine
    # doubles as the finesampled source image
    subimgs[[idx]] = psffine[subrows + i - 1, subcols + j - 1]
    conv.efficient = conv.efficient + profitConvolvePSF(subimgs[[idx]], subpsfs[[idx]])
    idx = idx + 1
  }
}
print(range(conv.efficient - profitDownsample(convfine,finesample)))
## [1] -3.469447e-18 1.387779e-17
Now let us test non-circular Gaussians (not finished, stop here!):
angs = c(psf=20, src=65)
axrats = c(psf=0.8, src=0.4)
fwhm.src = 5
fwhms = c(psf=fwhm.psf, src=fwhm.src)
psf = profitCubaSersic(mag=0, re=fwhm.psf/2, nser=0.5, dim=dim,
  ang=angs["psf"], axrat=axrats["psf"])
src = profitCubaSersic(mag=0, re=fwhm.src/2, nser=0.5, dim=dim,
  ang=angs["src"], axrat=axrats["src"])
# Rough estimate of the convolved profile's FWHM along x and y, and its angle
costh = cos((angs+90)*pi/180)
sinth = sin((angs+90)*pi/180)
fwhm.conv = c(x=sum(fwhms^2*costh*abs(costh)),
  y=sum(fwhms^2*sinth*abs(sinth)))
fwhm.conv = sign(fwhm.conv)*sqrt(abs(fwhm.conv))
ang.conv = atan2(fwhm.conv["y"], fwhm.conv["x"])*180/pi - 90
conv = profitCubaSersic(mag=0, re=sqrt(fwhm.src/2), nser=0.5, dim=dim,
  ang=angs["src"], axrat=axrats["src"])
And exponential with Gaussian, or exponential with exponential (work in progress):