This page covers common issues you may encounter when using AI Chat and how to resolve them.
No Providers Available
Symptom: The provider or model dropdown is empty, or you see a message indicating no providers are configured.
Cause: No AI Chat providers have been set up for your organization, or you do not have permission to use the available providers.
Solution:
Contact your organization administrator and ask them to configure an AI Chat provider. See AI Chat Providers for setup instructions.
If providers exist but you cannot see them, ask the administrator to grant your group access to the provider. See Managing Provider Permissions.
Connection Errors
Symptom: You receive an error when sending a message, such as "Failed to connect" or "Provider endpoint unreachable."
Cause: The underlying AI provider endpoint may be unavailable, misconfigured, or experiencing an outage.
Solution:
Verify that the provider endpoint is correct and accessible. For custom OpenAI-compatible providers, confirm that the endpoint URL is reachable from the ACTIVATE platform.
For session tunnel providers, ensure the tunnel is active and the compute session is running. Tunnels are only available while the associated session is active. See Session Tunnels.
For Azure OpenAI providers, verify the API key and endpoint in the provider configuration. Check the Azure status page for any ongoing outages.
Try sending another message after a few moments. Transient network issues may resolve on their own.
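As a quick connectivity check for a custom OpenAI-compatible provider, you can probe the endpoint directly from a terminal. The URL and key below are placeholders; substitute your provider's actual values:

```shell
# Hypothetical values -- replace with your provider's endpoint and key.
ENDPOINT="https://llm.example.com/v1"
API_KEY="placeholder-key"

# A reachable OpenAI-compatible endpoint answers /models with HTTP 200;
# a 401/403 still proves the host itself is reachable. "000" means the
# connection failed entirely (DNS, firewall, or the service is down).
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $API_KEY" \
  "$ENDPOINT/models"
```

If this prints 000 from the machine hosting the platform, the problem is network reachability rather than the AI Chat configuration.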
Slow or Stalled Responses
Symptom: The AI response takes a long time to appear or the loading indicator runs indefinitely.
Cause: This can happen for several reasons:
Reasoning models use an extended thinking phase before generating a response. This is expected behavior and can take significantly longer than standard models. See Using Reasoning Models.
High model load on the provider side can cause delays, especially with shared or rate-limited endpoints.
A large conversation context also increases response time, since the model must read every token in the conversation before generating a reply.
Solution:
If using a reasoning model, wait for the response to complete. The thinking phase may take 30 seconds or longer for complex prompts.
For standard models, try starting a new conversation to reduce context size.
If the issue persists, check with your administrator whether the provider endpoint is under heavy load.
File Upload Failures
Symptom: A file upload fails or you see an error when attaching a file to a message.
Cause: The file may exceed size limits or be in an unsupported format.
Solution:
Check the file size. Regular files have a maximum size of 25 MB, while documents have a maximum size of 100 MB.
Verify that the file type is supported. Common supported types include text files, code files, PDFs, images, and office documents.
Try uploading a smaller file or splitting large files into smaller parts.
See Attaching Files for detailed information on supported formats and size limits.
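To check a file against the 25 MB limit before uploading, you can compare its size on disk. A minimal sketch (the file name is a placeholder; note that stat flags differ between GNU and BSD/macOS):

```shell
FILE="report.pdf"                # placeholder -- use your own file
LIMIT=$((25 * 1024 * 1024))      # 25 MB limit for regular files (documents allow 100 MB)

# GNU stat shown here; on macOS/BSD use: stat -f%z "$FILE"
SIZE=$(stat -c%s "$FILE")

if [ "$SIZE" -gt "$LIMIT" ]; then
  echo "$FILE is $SIZE bytes: over the 25 MB limit -- split it or reduce its size."
else
  echo "$FILE is $SIZE bytes: within the 25 MB limit."
fi
```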
Model Not Listed
Symptom: A specific model you expect to see is not available in the model dropdown.
Cause: The model may not be deployed on the provider, or the provider configuration may not include it.
Solution:
Verify with your administrator that the desired model is deployed and available on the provider endpoint.
For Azure OpenAI providers, confirm that the model deployment name is correctly configured.
For custom providers, ensure the endpoint's /v1/models response includes the model you are looking for.
Check that you have permission to use the provider that hosts the model.
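For a custom OpenAI-compatible provider, you can list the models the endpoint actually advertises and look for the one you expect. The URL and key are placeholders, and the snippet assumes the standard OpenAI-style response shape ({"data": [{"id": ...}]}):

```shell
ENDPOINT="https://llm.example.com"   # placeholder custom provider URL
API_KEY="placeholder-key"            # placeholder key

# Print the id of every model the endpoint reports.
curl -s -H "Authorization: Bearer $API_KEY" "$ENDPOINT/v1/models" \
  | python3 -c 'import json, sys; [print(m["id"]) for m in json.load(sys.stdin)["data"]]'
```

If the model you want is missing from this list, the fix is on the provider side (deploy the model or update its configuration), not in AI Chat.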
CLI Chat Issues
Symptom: The pw ai-chat chat command fails or cannot connect.
Cause: The command typically fails due to an authentication problem, a network connectivity issue, or an incorrect provider name.
Solution:
Verify you are authenticated: run pw auth login if needed.
Confirm that the provider name argument matches the configured provider name exactly.
Check your network connection to the ACTIVATE platform.
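The steps above can be run in sequence. The provider name below is a placeholder, and the exact argument placement for pw ai-chat chat shown here is an assumption; consult the CLI reference for the authoritative syntax:

```shell
# Placeholder provider name -- it must match the provider name
# exactly as configured.
PROVIDER="my-openai-provider"

# 1. Re-authenticate in case your session has expired.
pw auth login

# 2. Retry the chat against the named provider. (Argument placement is an
#    assumption; check the CLI reference for the exact syntax.)
pw ai-chat chat "$PROVIDER"
```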