This page covers recommended practices for getting the most out of AI Chat on the ACTIVATE platform.
The quality of AI responses depends heavily on how you frame your requests.
Vague prompts produce vague answers. Include relevant details, constraints, and the desired format in your prompt.
| Instead of | Try |
|---|---|
| "Explain Kubernetes" | "Explain how Kubernetes pod scheduling works, including the role of the scheduler and node affinity rules, in about 200 words." |
| "Write a script" | "Write a Bash script that finds all CSV files in a directory and combines them into a single file with one header row." |
Give the model the background it needs to produce a useful response. Helpful context includes the purpose of your request, relevant code snippets or error messages, sample input data, and any constraints on the output.
If your provider supports system messages, use them to set the model's behavior for the entire conversation. For example, you can instruct the model to respond as a domain expert, use a particular coding style, or avoid certain topics.
If the first response is not quite right, refine your prompt rather than starting from scratch. Add clarifications, ask for a different format, or request that the model focus on a specific aspect.
Different models have different strengths. Selecting the right model for your task improves both response quality and efficiency.
Standard models (such as GPT-4o, GPT-4o-mini) are best for:

- Everyday drafting, editing, and summarization
- Straightforward coding and explanation tasks
- Quick, interactive exchanges where response speed matters

Reasoning models (such as o1, o3) are best for:

- Multi-step problems in math, logic, and analysis
- Complex code design, debugging, and refactoring
- Tasks where working through intermediate steps improves accuracy
See Using Reasoning Models for details on reasoning effort configuration.
Larger and more capable models consume more tokens and may have higher usage costs. When working on routine tasks, consider using a smaller or faster model to conserve resources. Reserve more powerful models for tasks that genuinely benefit from their capabilities.
File attachments allow you to provide the model with additional context beyond what fits in a text prompt.
Rather than uploading a large document and asking a broad question, extract the relevant section or upload a smaller, targeted file. This helps the model focus on the content that matters and reduces token usage.
Regular files have a maximum size of 25 MB, and documents have a maximum size of 100 MB. If a file exceeds these limits, split it into smaller parts or extract the relevant sections.
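On Unix-like systems, the standard `split` utility is one way to break an oversized text file into parts. The wrapper below is a hypothetical sketch; the file name, chunk size, and prefix are illustrative, so choose a line count that keeps each part under the relevant limit.

```shell
#!/usr/bin/env bash
# Sketch: break a large text file into line-based chunks whose names
# share a common prefix, so each part stays under the upload limit.
split_file() {
  local file="$1" lines="$2" prefix="$3"
  # 'split -l' writes chunks of at most $lines lines each,
  # named ${prefix}aa, ${prefix}ab, ...
  split -l "$lines" "$file" "$prefix"
}

# Example: split_file big.csv 100000 big_part_
```

For CSV files, note that only the first chunk retains the header row; re-add the header to the other chunks if the model needs it to interpret the columns.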
See Attaching Files for supported formats and detailed usage instructions.
AI Chat's sharing and branching features support team workflows.
See Sharing Conversations for detailed instructions.
When evaluating different approaches, create branches from the same message rather than cluttering a single thread. Each branch maintains its own context, allowing you to compare model responses to different prompts or explore alternative solutions side by side.
See Branching Conversations for details.
Use the conversation itself as a record of your analysis. When you reach a conclusion, summarize the decision in a follow-up message so that anyone reviewing the shared conversation can quickly understand the outcome.
Be mindful of the data you include in prompts and file attachments. Messages are sent to the configured AI provider endpoint, which may be hosted externally. Avoid including credentials, secrets, or personal data unless you are certain the provider's hosting and retention policies meet your organization's requirements.
If you manage AI Chat providers, follow these practices for API key security:

- Store API keys in a secure location such as a secrets manager; never hard-code them or commit them to version control.
- Rotate keys periodically, and revoke any key you suspect has been exposed.
- Where the provider supports it, use separate keys per team or environment so a compromised key can be revoked with minimal disruption.
Limit provider access to the groups and users who need it. Review and audit provider permissions periodically to ensure that only authorized teams have access.
See Managing Provider Permissions for details.