Google has expanded its Gemini platform by adding AI image generation to its Personal Intelligence feature, letting users create customised visuals based on their own data. The update also integrates generative capabilities directly with connected services such as Google Photos.
The feature builds on Gemini’s ability to access user-approved data, so its outputs can reflect preferences, past activity, and stored content. Because the system draws on this context, users can generate relevant visuals with minimal prompting.
Nano Banana 2 Powers Faster Results
The image generation capability runs on Google’s Nano Banana 2 model, which delivers faster processing, improved image quality, and more accurate rendering of personalised content than earlier versions.
As part of a broader rollout, Google has deployed the model across multiple products, including Gemini, Search, and other AI tools, continuing to strengthen its generative AI ecosystem.
Shift Toward Context-Aware AI Systems
The update reflects a wider shift in AI development: rather than simply answering queries, systems increasingly generate tailored content informed by personal context.
As these capabilities expand, AI tools are evolving into more adaptive platforms, and personalised content generation is becoming a central part of how users interact with them.