Gemini Pro LLM (Free Form Prompt) Integration with AssistEdge RPA 20.0

The article below explains the different prompt parameters and how to integrate with Gemini Pro using an AE Bot.

  1. Generate a Gemini Pro API key from https://makersuite.google.com/app/apikey
  2. Add a REST API application in AE RPA (https://generativelanguage.googleapis.com)
  3. Create a process and add a REST API activity
  4. Pass /v1beta/models/gemini-pro:generateContent?key=YOUR_API_KEY as the API path (the request body below uses the Gemini generateContent format)
  5. Add header Content-Type: application/json
  6. Pass the following JSON in the input body:

{
  "contents": [
    {
      "parts": [
        {
          "text": "<<This is your input prompt>>"
        }
      ]
    }
  ],
  "generationConfig": {
    "temperature": 0.9,
    "topK": 1,
    "topP": 1,
    "maxOutputTokens": 2048,
    "stopSequences": []
  },
  "safetySettings": [
    {
      "category": "HARM_CATEGORY_HARASSMENT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_HATE_SPEECH",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    },
    {
      "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
      "threshold": "BLOCK_MEDIUM_AND_ABOVE"
    }
  ]
}
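
To verify the endpoint and request body outside AE before wiring up the REST API activity, a minimal Python sketch along the following lines can be used. It assumes the requests package is installed and that the API key from step 1 is available in a GEMINI_API_KEY environment variable; the response path (candidates -> content -> parts) reflects the v1beta generateContent format and may change across API versions.

import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]  # key generated in step 1 (assumed to be set as an environment variable)
URL = ("https://generativelanguage.googleapis.com"
       "/v1beta/models/gemini-pro:generateContent?key=" + API_KEY)

body = {
    "contents": [{"parts": [{"text": "Summarize the benefits of RPA in two sentences."}]}],
    "generationConfig": {
        "temperature": 0.9,
        "topK": 1,
        "topP": 1,
        "maxOutputTokens": 2048,
        "stopSequences": []
    }
}

response = requests.post(URL, headers={"Content-Type": "application/json"}, json=body)
response.raise_for_status()
# The generated text is nested under candidates -> content -> parts in the response JSON
print(response.json()["candidates"][0]["content"]["parts"][0]["text"])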

Parameter Meanings

a) temperature" refers to a parameter that controls the randomness and creativity of the generated text. A higher temperature value (e.g., 1.0) leads to more surprising and diverse outputs, while a lower temperature value (e.g., 0.2) produces more predictable and coherent text.

b) topK: limits sampling to the K most probable next tokens, controlling the randomness and diversity of the generated text. A higher topK (e.g., 50) considers a wider range of candidates, leading to more diverse and unexpected outputs, but at the cost of potentially lower coherence and relevance to the prompt; a topK of 1 always picks the single most likely token.
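
A rough illustration of top-k filtering over a hypothetical next-token distribution (not the actual Gemini sampler):

def top_k_filter(token_probs, k):
    # Keep only the k most probable tokens, then renormalize their probabilities
    top = sorted(token_probs.items(), key=lambda item: item[1], reverse=True)[:k]
    total = sum(prob for _, prob in top)
    return {token: prob / total for token, prob in top}

probs = {"cat": 0.5, "dog": 0.3, "parrot": 0.15, "axolotl": 0.05}  # hypothetical distribution
print(top_k_filter(probs, 1))  # {'cat': 1.0} -- topK of 1, as in the body above, is effectively greedy
print(top_k_filter(probs, 3))  # wider candidate pool, hence more diverse outputs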

c) topP: a nucleus-sampling parameter used to control the randomness and creativity of the generated text. The model samples only from the smallest set of tokens whose cumulative probability reaches topP; a value of 1 considers the full distribution, while smaller values (e.g., 0.8) restrict sampling to the most likely tokens.
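
Nucleus (top-p) sampling can be sketched in the same way; again the distribution is hypothetical:

def top_p_filter(token_probs, p):
    # Keep the smallest set of most-probable tokens whose cumulative probability reaches p
    ranked = sorted(token_probs.items(), key=lambda item: item[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= p:
            break
    total = sum(prob for _, prob in kept)
    return {token: prob / total for token, prob in kept}

probs = {"cat": 0.5, "dog": 0.3, "parrot": 0.15, "axolotl": 0.05}  # hypothetical distribution
print(top_p_filter(probs, 0.8))  # keeps only "cat" and "dog"
print(top_p_filter(probs, 1.0))  # keeps every token, like topP of 1 in the body above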

d) maxOutputTokens: a parameter that controls the maximum number of tokens the model can generate in its response; generation is cut off once the limit is reached.
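
For example, a tighter cap keeps replies short for summarization-style prompts (the values below are illustrative, not recommendations):

body = {
    "contents": [{"parts": [{"text": "Summarize this ticket in one paragraph: <<ticket text>>"}]}],
    "generationConfig": {"maxOutputTokens": 256}  # generation is cut off once 256 tokens have been produced
}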

e) stopSequences: a set of strings that instruct the model to stop generating text; output ends as soon as any of the sequences would be produced.
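
For instance, generation can be halted at a custom delimiter (the sequences shown are purely illustrative):

generation_config = {
    "maxOutputTokens": 2048,
    "stopSequences": ["END_OF_ANSWER", "\n\n\n"]  # output stops as soon as either sequence would be emitted
}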
