Ideally one should include the following:
- Some indicative results, to give readers an idea of the typical or important behaviour of the model. This really helps readers understand the model's nature and also hints at its "purpose". It can be done at a variety of levels - whatever makes the results meaningful and vivid. It could include visualisations of example runs as well as the normal graphs -- even following the events happening to a single agent, if that helps (a sketch is given after this list).
- A sensitivity analysis - checking how varying the parameters affects the key results. This involves some judgement, since it is usually impossible to do a comprehensive survey. What kind of sensitivity analysis, and over what dimensions, depends on the nature of the model, but not including ANY sensitivity analysis generally means that you are not (yet?) taking the model seriously (and if you are not taking it seriously, others probably will not either). A minimal parameter-sweep sketch follows the list.
- In the better papers, hypotheses about the key mechanisms that seem to determine the significant results are explicitly stated and then tested with focussed simulation experiments -- attempts to falsify the explanations offered. These results, perhaps with some supporting statistics, should be exhibited [note 1] (see the mechanism-switching sketch below).
- If the simulation is being validated or otherwise compared against data, this comparison should be (a) shown and then (b) measured [note 2] (see the validation sketch below).
- If the simulation is claimed to be predictive, its success at repeatedly predicting data (unknown to the modeller at the time) should be tabulated (see the final sketch below). In this context it is especially useful to give an idea of the conditions under which the model predicts well and those under which it does not.
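
As a minimal sketch of the first point, the following Python code plots a few whole runs of a model alongside the trajectory of a single agent. The run_model() function here is an invented toy stand-in (agents drifting towards the mean) - the point is only what gets shown, not the model itself.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def run_model(n_agents=50, steps=100, noise=0.05):
    """Toy stand-in for a simulation (agents drift towards the mean).
    Swap in your own model; only the shape of the output matters."""
    opinions = rng.uniform(-1, 1, n_agents)
    history = [opinions.copy()]
    for _ in range(steps):
        opinions = opinions + 0.1 * (opinions.mean() - opinions) \
                            + rng.normal(0, noise, n_agents)
        history.append(opinions.copy())
    return np.array(history)            # shape: (steps + 1, n_agents)

fig, (ax_runs, ax_agent) = plt.subplots(1, 2, figsize=(10, 4))

# A few whole example runs: the "texture" of typical behaviour
for i in range(3):
    ax_runs.plot(run_model().mean(axis=1), label=f"run {i + 1}")
ax_runs.set_xlabel("time step")
ax_runs.set_ylabel("mean opinion")
ax_runs.legend()

# Following a single agent through one run, as suggested above
history = run_model()
ax_agent.plot(history[:, 0])
ax_agent.set_xlabel("time step")
ax_agent.set_ylabel("opinion of agent 0")

fig.tight_layout()
fig.savefig("indicative_results.png")
```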
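
For the sensitivity analysis, the simplest honest version is a one-parameter-at-a-time sweep with several replicate runs per setting, reporting the variation across runs as well as the mean. Again the model and its noise parameter are invented placeholders; for larger parameter spaces a more systematic design (e.g. Latin hypercube sampling or Sobol indices) may be appropriate.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_model(n_agents=50, steps=100, noise=0.05):
    """Same toy stand-in as above, returning one key outcome per run."""
    opinions = rng.uniform(-1, 1, n_agents)
    for _ in range(steps):
        opinions = opinions + 0.1 * (opinions.mean() - opinions) \
                            + rng.normal(0, noise, n_agents)
    return opinions.std()               # spread of final opinions

# Sweep one parameter at a time, with several replicate runs per setting,
# and report the spread across runs as well as the mean.
replicates = 20
for noise in (0.01, 0.05, 0.1, 0.2, 0.4):
    outcomes = [run_model(noise=noise) for _ in range(replicates)]
    print(f"noise={noise:<5} mean outcome={np.mean(outcomes):.3f} "
          f"sd across runs={np.std(outcomes):.3f}")
```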
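
For the focussed experiments, a common pattern is to switch a suspected mechanism off and compare the distribution of the key outcome over repeated runs. The sketch below (again with an invented toy model and a made-up "conformity" mechanism) reports an effect size rather than a p-value, in line with note 1.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_model(conformity=True, n_agents=50, steps=100):
    """Toy model with a switchable mechanism (here: conformity pressure).
    Hypothesis under test: consensus only emerges when it is switched on."""
    opinions = rng.uniform(-1, 1, n_agents)
    for _ in range(steps):
        pull = 0.1 * (opinions.mean() - opinions) if conformity else 0.0
        opinions = opinions + pull + rng.normal(0, 0.05, n_agents)
    return opinions.std()               # low spread = consensus

runs_on = np.array([run_model(conformity=True) for _ in range(30)])
runs_off = np.array([run_model(conformity=False) for _ in range(30)])

# Report an effect size rather than leaning on p-values (see note 1)
pooled_sd = np.sqrt((runs_on.var(ddof=1) + runs_off.var(ddof=1)) / 2)
cohens_d = (runs_off.mean() - runs_on.mean()) / pooled_sd
print(f"mean spread, mechanism on : {runs_on.mean():.3f}")
print(f"mean spread, mechanism off: {runs_off.mean():.3f}")
print(f"Cohen's d for the difference: {cohens_d:.2f}")
```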
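
For the comparison against data, "shown then measured" can be as simple as plotting the two series together and reporting an explicit distance between them. The series below are placeholders for your own data and (averaged) model output; the choice of error measure is the modeller's.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder series: in practice `observed` comes from your data set and
# `simulated` from (averaged) model output aligned to the same time points.
observed = np.array([2.0, 2.3, 2.9, 3.4, 3.8, 4.1])
simulated = np.array([2.1, 2.4, 2.7, 3.5, 4.0, 4.4])

# (a) shown: plot the two series together
plt.plot(observed, "o-", label="data")
plt.plot(simulated, "s--", label="simulation")
plt.xlabel("time point")
plt.ylabel("key output")
plt.legend()
plt.savefig("validation.png")

# (b) measured: report an explicit distance between the series
rmse = np.sqrt(np.mean((simulated - observed) ** 2))
mae = np.mean(np.abs(simulated - observed))
print(f"RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```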
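
For the predictive claim, the tabulation can be a plain table listing each prediction registered before the data were known, the value subsequently observed, and whether it counts as a hit under a stated criterion. The figures below are purely illustrative.

```python
import pandas as pd

# Purely illustrative record of out-of-sample predictions: each row was
# registered before the corresponding observation became available.
records = pd.DataFrame({
    "case":      ["case 1", "case 2", "case 3", "case 4"],
    "predicted": [120, 85, 130, 90],
    "observed":  [115, 110, 128, 95],
})
records["abs_error"] = (records["predicted"] - records["observed"]).abs()
# State the criterion for a "hit" explicitly (here: within 10% of observed)
records["hit"] = records["abs_error"] <= 0.1 * records["observed"]
print(records.to_string(index=False))
print(f"hit rate: {records['hit'].mean():.0%}")
```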
If you have no, or very few, results, you should ask yourself whether there is any point in publishing yet. On most occasions it is better to wait until you have some.
Note 1: p-values are probably not relevant here, since by doing enough runs one can obtain pretty much any p-value one desires. However, checking that you have adequate statistical power is probably important. See
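
A rough way to check this, assuming a simple two-group comparison of runs and a smallest effect size of interest chosen by the modeller, is a standard power calculation, for example with statsmodels:

```python
from statsmodels.stats.power import TTestIndPower

# How many runs per condition are needed to detect the smallest effect
# size you would still care about? (effect_size is Cohen's d, chosen by you)
n_runs = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"runs needed per condition: {n_runs:.1f}")
```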
Note 2: note that only some aspects of the results will be considered significant, while other aspects will be considered model artefacts - it is good practice to be explicit about which is which. See