Traditional AI is good at analysing large datasets and predicting typical outcomes; it is tuned for specific tasks. Large language models (LLMs), used in generative chatbots such as ChatGPT, are now large enough to reach beyond the constraints of their training data. As a result, they can display unpredictable behaviours, including AI hallucination. Such behaviour reflects how uncertainty is not only inherent in generative AI systems but also a characteristic of the real world, increasingly so when the collective knowledge base is unstable and shifting.

Ideation is likewise conditioned by uncertainty, or unpredictability. An idea is only a proposal or suggestion that may, or may not, be realised. Idea communication therefore aims at removing as much uncertainty as possible about the idea's realisability. Still, the final outcome of an idea is unpredictable: its uncertainty remains.

What, then, of the strength of generative AI as a tool for ideation? Faced with complex user demands in sociopolitical, economic and cultural contexts, generative AI cannot deliver certainty in human scenarios that require flexibility beyond its data-driven approach. The answer lies in adaptive human-AI collaboration. For example, designers may use prompt engineering to guide LLM responses, an AI technique akin to human step-by-step problem solving.
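One way to picture such step-by-step prompt engineering is as a template that makes the reasoning stages explicit before the prompt is sent to a model. The helper function, its name, and the prompt wording below are illustrative assumptions for a design-ideation scenario, not any particular vendor's API; the assembled string could be passed to any LLM chat endpoint.

```python
# Minimal sketch: composing a step-by-step ideation prompt.
# All names and wording here are illustrative, not a real library's API.

def build_ideation_prompt(brief: str, steps: list[str]) -> str:
    """Compose a prompt that walks the LLM through explicit reasoning
    steps, mirroring human step-by-step problem solving."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return (
        "You are assisting a designer with ideation.\n"
        f"Design brief: {brief}\n"
        "Work through the following steps in order, "
        f"stating your reasoning at each step:\n{numbered}"
    )

prompt = build_ideation_prompt(
    brief="A public-transport app for an ageing population",
    steps=[
        "List the key user constraints and uncertainties.",
        "Propose three candidate ideas.",
        "For each idea, state what would reduce uncertainty "
        "about its realisability.",
    ],
)
print(prompt)
```

Structuring the prompt this way does not remove the model's inherent unpredictability, but it narrows the space of responses in the same way that staged idea communication narrows uncertainty about an idea's realisability.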