The simple truth is that any such 'model' contains a number of 'assumptions' - best guesses based on what the modeller thinks is valid when data is either not obtainable, or is considered not to vary. Thus, in my own discipline - fire - there are a number of mathematical computer models that are used to try to gauge smoke output, heat output and fire spread. They are most useful, in my experience, after a fire, when I can 'model' what actually happened with 'live' data and observations. That is what Sebastian does, and does incredibly well. Just don't ask him to use his model to 'predict' how a fire will behave in a building where he has to make a whole lot of guesses about the fire load, heat output and ventilation. You'll very likely get a very blunt answer.
The problem is that even the best 'model' is a gross oversimplification of the real world. There are simply too many variables to be modelled on even the biggest and best 'supercomputer' at present. Typically, once again using 'fire' as an example, the reason any 'prediction' is so difficult is that any given fire will change its behaviour if one varies almost any of the 'constant' parameters. Those who, like me, have seen lots of fires in structures will know what I mean. In any experimental burn this can be demonstrated by setting up identical rooms with identical fire loads and arrangements. Simply opening a different window, or changing the direction of the airflow in some way, can completely alter the behaviour of the fire. And that is just one small room with clearly defined boundaries and a whole slew of 'known' data.
Now move into the world of climate modelling. For one thing, the models are trying to replicate a massive, and very chaotic, system. To make it worse, they are doing so with an almost impossible number of 'unknowns' that they try to compensate for by assuming 'constancy' or by excluding altogether. Typically, there are huge rows over what is termed 'feedback' when it comes to measuring how much heat is being absorbed or reflected back into space. In a 'fire' model, researchers are able to measure this rather exactly; after all, we are working in a confined 'box' and can place instruments to measure radiation from the ceiling, the walls and other features. We're not 'guessing', we can measure it. In a climate model, this data is often not 'measured' uniformly, or it is simply guessed. Often where something is 'measured' (usually by comparing satellite data and doing some adding, subtracting and averaging), it is an 'average' of a chaotic picture, and therefore, at best, can be described as a snapshot of a moment in time.
Perhaps the best example I can use here is to look at Boyle's and Charles's laws, combined into the single gas law. Simple as it may sound, the relationship between pressure, temperature and volume has a huge impact on the behaviour of a fire in any 'confined' or contained space. The law itself appears simple:

PV / T = constant, where:

P = Pressure
V = Volume
T = Temperature
As those familiar with this relationship will know, changing any one of the three causes a change in the others. So, if one has a fixed volume, increasing the temperature will increase the pressure. If that can be relieved by allowing the 'volume' to vent (as when a window is opened), some of the heat will be lost. But, if the heat gain exceeds the loss through the venting, then the pressure will continue to rise. Now the problem with this is that 'feedback' element ... In a fire, it can, as I said, be measured, but the Earth's atmosphere is a different situation, and the measurements are, quite literally, all over the place. So, in order to put them into a 'model' and make some sense of it, the data is 'smoothed' (introducing another 'unknown'), then 'averaged' and massaged to fit.
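The fixed-volume case described above can be put into numbers. This is a minimal sketch of the combined gas law, not taken from any fire-modelling package; the function names and the example figures are my own illustration:

```python
def pressure_after_heating(p1_kpa: float, t1_k: float, t2_k: float) -> float:
    """Fixed volume: P1/T1 = P2/T2, so P2 = P1 * (T2/T1).

    Temperatures must be absolute (kelvin) for the gas law to hold.
    """
    return p1_kpa * (t2_k / t1_k)


# Illustrative only: a sealed room at roughly atmospheric pressure
# (101.325 kPa, 293 K ~ 20 C) heated by a fire to 600 K.
p2 = pressure_after_heating(101.325, 293.0, 600.0)
print(round(p2, 1))  # pressure roughly doubles if nothing vents
```

The point of the sketch is the one made above: with no venting, heating alone drives the pressure up; venting (opening a window) relieves it, but only if the heat loss keeps pace with the heat gain.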
Having taken the trouble to look at some of the raw data, and at what is fed from it into a 'model', all I can say is that the 'smoothed' data doesn't actually look anything like the original.
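A toy illustration of that point about smoothing. The numbers here are invented purely to show the effect; real climate series are smoothed with far more elaborate methods, but even a simple moving average demonstrates how the peaks and troughs of the original vanish:

```python
def moving_average(values, window):
    """Return the simple moving average of a series.

    Each output point is the mean of `window` consecutive inputs,
    so extremes are pulled toward the middle.
    """
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]


# A spiky, 'chaotic' series (invented data).
raw = [10, 30, 5, 40, 8, 35, 12]
smooth = moving_average(raw, 3)

# The smoothed series never reaches the raw maximum (40)
# or the raw minimum (5) - the extremes are gone.
print(smooth)
```

Whether that loss of detail matters depends entirely on what the extremes were telling you; in a chaotic system, they may be the most important part of the signal.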
Which is why I'm with Sebastian. Predictions based on 'models' are, at best, unreliable. At worst, they are so badly manipulated and massaged they'd tell you anything you told them to. That is where the credibility gap becomes a chasm. Perhaps it is time the 'scientists', and their supporters relying on this, took a step back and, like Dr Judith Curry, admitted that there are huge uncertainties and massive gaps in our understanding of what is, after all, a massively complex and chaotic system of which we have only the barest understanding at present.
I guess that puts me firmly in the 'Show Me' camp.