This text calls into question the common assumption that "structure is necessary for language modelling" by presenting a generator which produces appropriate natural language utterances without building structures along the way. It backs up this demonstration with an analysis of the generation task, which leads to the conclusion that massively parallel computation and numeric combination of evidence are, in fact, intrinsically necessary for generation, and that, conversely, structure building is computationally awkward.