

Confidence and prediction bands should generally be expected to get wider near the ends - and for the same reason they do so in ordinary regression: the parameter uncertainty leads to wider intervals near the ends than in the middle.

You can see this by simulation easily enough, either by simulating data from a given model, or by simulating from the sampling distribution of the parameter vector.

The usual (approximately correct) calculations done for nonlinear regression involve taking a local linear approximation (this is given in Harvey's answer), but even without those we can get some notion of what's going on. However, doing the actual calculations is nontrivial, and it may be that programs take a shortcut in the calculation which ignores that effect.

It's also possible that for some data and some models the effect is relatively small and hard to see. Indeed, with prediction intervals - especially with large variance but lots of data - it can sometimes be hard to see the curvature even in ordinary linear regression: the bounds can look almost straight, and it's relatively hard to discern their deviation from straightness.

Here's an example of how hard it can be to see even with a confidence interval for the mean (prediction intervals can be far harder to see the effect in because their relative variation is so much less).

Here's some data and a nonlinear least squares fit, with a confidence interval for the population mean (in this case generated from the sampling distribution, since I know the true model, but something very similar could be done by asymptotic approximation or by bootstrapping):

[figure: data, fitted curve, and confidence bounds for the mean]

The purple bounds look almost parallel to the blue predictions. Here's the standard error of the sampling distribution of those mean predictions:

[figure: standard error of the mean predictions across x]
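The first simulation approach mentioned above - simulate data from a given model, refit, and look at the spread of the fitted curves - can be sketched as follows. This is only an illustration: the exponential model, parameter values, noise level, and design points are all assumptions made for the example, not taken from any particular data set.

```python
# Sketch: watch the standard error of the mean predictions from a nonlinear
# least squares fit vary across the x-range, by simulating many data sets
# from a known model and refitting each one. All numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def model(x, a, b):
    return a * np.exp(b * x)      # simple exponential growth model

a_true, b_true = 1.0, 0.3
x = np.linspace(0.0, 5.0, 40)     # design points for each simulated data set
xg = np.linspace(0.0, 5.0, 101)   # grid on which to evaluate fitted curves

# Sampling distribution of the mean predictions: simulate, refit, predict.
curves = []
for _ in range(2000):
    y = model(x, a_true, b_true) + rng.normal(scale=0.2, size=x.size)
    p, _ = curve_fit(model, x, y, p0=(1.0, 0.2))
    curves.append(model(xg, *p))
curves = np.asarray(curves)

se = curves.std(axis=0)  # standard error of the mean prediction at each x
print(f"SE at left end:  {se[0]:.4f}")
print(f"SE in middle:    {se[50]:.4f}")
print(f"SE at right end: {se[-1]:.4f}")
```

With this setup the standard error is largest at the right end (where the fitted exponentials fan out), somewhat elevated at the left end (where it reflects the uncertainty in the level), and smallest in the interior - so the band pinches in the middle and flares toward the ends, just as in ordinary regression.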
