Mastering the Google Gemini API

A Comprehensive Guide to the Google Gemini API

Asad iqbal
Artificial Intelligence in Plain English


In this tutorial on the Google Gemini API, 🌟 I will explain how to use the Google Gemini API to generate text from a prompt. 📝 Additionally, I’ll demonstrate how to generate content from images using the Google Gemini Vision model. This tutorial is going to be very exciting, so let’s begin! 🚀 Oh, and one more thing: I have a video tutorial as well! 🎥 You can find the video just below.

Video:

🚀 Let’s begin!

To harness the power of Google Gemini Pro for text and chat conversations, and Google Gemini Pro Vision for images, you’ll need the ‘google-generativeai’ package. Start by installing it with ‘!pip install google-generativeai’. 🔥

!pip install google-generativeai

Import some important libraries.

import google.generativeai as genai
import os
from google.colab import userdata
from IPython.display import Markdown
# pass the Gemini API key to the google-generativeai library
geminiKey = userdata.get('geminiKey')
genai.configure(api_key = geminiKey)

This code sets up authentication for the Gemini API by configuring the API key. You can obtain a key from Google AI Studio and then store it in your environment.
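Outside Colab, `userdata` isn’t available. Here is a minimal sketch that reads the key from an environment variable instead (the helper name `get_gemini_key` and the variable name `GEMINI_API_KEY` are my own choices, not part of the SDK):

```python
import os

def get_gemini_key(env_var="GEMINI_API_KEY"):
    # Read the API key from an environment variable so the same
    # code also works outside Google Colab.
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"Set the {env_var} environment variable first")
    return key

# genai.configure(api_key=get_gemini_key())  # same configure call as above
```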

# get the model
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content('Explain me Quantum Computing like I’m a 5-year-old')
print(response.text)
  1. model = genai.GenerativeModel('gemini-pro'): This line initializes a generative model object named model using the GenerativeModel class from the genai library. The model selected is 'gemini-pro'.
  2. response = model.generate_content('Explain me Quantum Computing like I’m a 5-year-old'): This line invokes the generate_content method of the model object, passing the prompt as input. The model generates content based on this prompt.
  3. print(response.text): Printing response.text outputs the generated text, letting you see the explanation of quantum computing in simple terms, as produced by the model.
Output from this prompt: Explain me Quantum Computing like I’m a 5-year-old
Markdown(response.text)

By wrapping response.text in Markdown(), we render the raw generated text as formatted Markdown.

Output in Markdown format

Gemini-Pro Chat Conversation

Again, load the Gemini-Pro model.

model = genai.GenerativeModel('gemini-pro')
chat = model.start_chat(history=[])
  • model = genai.GenerativeModel('gemini-pro') creates an object called model using the GenerativeModel class from the genai library. The argument 'gemini-pro' specifies that we want to work with the Gemini-Pro model (a large language model from Google AI).
  • chat = model.start_chat(history=[]) calls the start_chat method on the model object, which initiates a chat session with the chosen model (Gemini-Pro in this case).
  • The empty list [] passed as an argument to start_chat indicates that the chat starts without any previous conversation history.
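start_chat can also be seeded with earlier turns instead of an empty list. The sketch below shows the history format, a list of role/parts dicts as described in the google-generativeai chat documentation; the actual start_chat call is commented out since it needs a configured model:

```python
# Each prior turn is a dict with a 'role' ('user' or 'model') and a
# 'parts' list holding the message text.
seed_history = [
    {"role": "user", "parts": ["What is Quantum physics"]},
    {"role": "model", "parts": ["It studies matter and energy at the smallest scales."]},
]

# chat = model.start_chat(history=seed_history)  # resumes with this context
```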
response = chat.send_message('What is Quantum physics')
Markdown(response.text)

response = chat.send_message('What is Quantum physics'):

  • This line sends the message “What is Quantum physics” through the chat object (chat).

Markdown(response.text):

  • This line takes the response object returned by chat.send_message, extracts its text, and passes it to the Markdown function, which renders the raw text as formatted Markdown.
Output from this Prompt: What is Quantum physics

Send one more prompt

response = chat.send_message('Explain me LLM in simple words')
Markdown(response.text)
Output from this Prompt: Explain me LLM in simple words
for text in chat.history:
    display(Markdown(f"**{text.role}**: {text.parts[0].text}"))

This code iterates through the chat history and displays each prompt and its corresponding response in a formatted way.

History of the chat
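The same loop can be wrapped in a small helper that turns a history into one Markdown string (the name history_to_markdown is my own). The stand-in objects below only assume each history entry exposes .role and .parts[0].text, exactly as the loop above does:

```python
from types import SimpleNamespace

def history_to_markdown(history):
    # Each history entry exposes a .role and a .parts list whose first
    # element carries the text, matching the display loop above.
    return "\n".join(f"**{m.role}**: {m.parts[0].text}" for m in history)

# Stand-in objects mimicking the shape of real chat-history entries:
demo = [
    SimpleNamespace(role="user", parts=[SimpleNamespace(text="Hi")]),
    SimpleNamespace(role="model", parts=[SimpleNamespace(text="Hello!")]),
]
md = history_to_markdown(demo)  # "**user**: Hi\n**model**: Hello!"
```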

Gemini-Pro Vision Model

First, Import an image

from PIL import Image
image = Image.open('/content/tesla truck.jpg')

Let's interact with a Google Generative AI model, Gemini-Pro Vision.

model = genai.GenerativeModel('gemini-pro-vision')
response = model.generate_content(image)
Markdown(response.text)

model = genai.GenerativeModel('gemini-pro-vision'):

  • Creates a model object called model.
  • The .GenerativeModel('gemini-pro-vision') part specifies that we are initializing a model for working with the "gemini-pro-vision" model.

response = model.generate_content(image):

  • This method sends the image data to the Gemini-Pro Vision model, prompting it to generate content based on the image. The content could be a description, interpretation, or related information.
  • The result of this method is stored in the response variable.

Markdown(response.text):

  • This extracts the generated text from the response and renders it as Markdown.
Output of the model on image

We can also generate content based on an image, such as a blog post. To do so, we pass the image along with a prompt.

response = model.generate_content(['Write a blog post about that image', image])
Markdown(response.text)

Thanks 😀😀😀

Conclusion

We have explored the functionality of the Google Gemini API and demonstrated how it can be used for content generation without incurring costs. The Gemini API can generate a wide range of content, from plain text to image-grounded responses, which opens up exciting possibilities for creative projects, research endeavors, and innovative applications without the burden of financial constraints.

Thanks for reading; if you liked my content and want to support me, the best way is to —

  • Need help with ML & DL? Check out my Fiverr services!
  • Subscribe to my YouTube channel
  • Connect with me on LinkedIn and GitHub, where I keep sharing free content on becoming more productive and effective at what you do using technology and AI.
